## Introduction
In recent years I have been working to help create the SAFE network, along with brilliant engineers. It’s been a journey, and a typical creative endeavour. I have always figured AI, or at least evolutionary programming, would be a big part of that: from simple perceptrons through genetic programming and neuroevolution (NEAT in particular). Recent LLM success has certainly added an intriguing angle to these studies, mainly the notion that copying a brain may not be the best route to “intelligence”. This is fascinating, but more pleasing still is the simplicity of the LLM models.
## Simplicity
Recently the tech world was stunned by generative AI models, in particular GPT, and then the rest of the world followed as it became accessible via ChatGPT. What is remarkable to me, though, is the simplicity of “guessing the next token (word)” once we introduce context (attention is all you need). This shocked many of the “fathers” of AI to their very core. Not in how capable it was per se, but in how simply it managed to reach human-level capabilities.
We always thought creativity and our intelligence separated us from machines; it turns out that may be the thing we have most in common, at least for now. That should shake everyone!
## Creativity
Creativity is an interesting concept. It involves many steps: imagining things, testing things that might work but probably don’t, and then forging a path to make the previously impossible possible. What is imagining, if not hallucinating (as the AI world has come to know it)? So hallucination is good; it’s required for creativity. Stop any creative process midway and you will have a hallucination-like outcome.
Creativity also requires research, crazy ideas and something rather interesting: unbiased and limitless patience from partners. Innovation tends to come from individuals; big stuff like electricity, magnetism, relativity, QED and more stemmed from individuals. Partners are difficult, as they introduce ego (their idea must be used) and they are not ready and willing when the lightbulb moments strike (late at night, very early morning and so on). Having a near human-level LLM with broad expertise in many areas opens doors that simply did not exist last year.
## Complexity
Professionals love complexity. It seems to boost the ego of practitioners in many areas, but nature hates it. When I say complexity I mean very narrow areas of knowledge presented in dense and confusing language, with many graphs and weird Greek-lettered formulae, or work so complex it is understandable only by a tiny percentage of experts. I see this as detrimental to the evolution of our species and our knowledge.
Nature is complex, and our brains were thought to be complex, but in my opinion that complexity may be much less than we assumed.
I think many “experts” will take modern AI and make even more complex papers and formulae from it. That will boost ego and do little for humanity in most cases. Complexity is really something we use when we cannot understand the fundamentals of life and the interaction of those fundamentals. Back to “attention is all you need” and really think about that, it’s fascinating.
The complexity of understanding the coordination of many simple artefacts to produce something as powerful as a human, or as successful as an ant colony, is incredibly difficult, as it involves extremely deep thinking. It also requires, as we see in nature, many iterations with tiny changes each time. These systems will not work right until the last tiny change, and then they do. At that point, though, another tiny change may break them. Interestingly, when they work it’s unlikely they can be described in any detail, which is where engineers get nervous, but they shouldn’t be, as they are themselves actually built this way.
This, to me, is the real area of complexity. It’s very much like a neural network that works: we cannot describe it in detail, and we cannot describe an ant’s make-up in detail, at least in relation to its function as part of the colony. No amount of Greek letters and advanced mathematics seems appropriate. Many people may disagree, but this is where I feel real advances can be made, should we be able to teach this kind of thinking at a deep level.
## Context with respect to LLMs
A slight deviation here. Context is important when guessing the next thing. This is what gave modern generative AI (LLMs in particular) the jump we now see. However, it’s more than context in relation to the training, i.e. the model itself. That training data is an enormous volume of raw text that is encoded into a vector representation of the context, or distance, between tokens (attention).
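To make that “distance between tokens” idea concrete, here is a minimal sketch of scaled dot-product attention, the core operation behind “attention is all you need”. This is my own toy illustration in plain Python, not the real transformer architecture; the vectors and values are invented for the example.

```python
import math

def softmax(scores):
    # Numerically stable softmax: turns raw scores into weights summing to 1.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector.

    Each key is scored against the query (a dot product, scaled by
    sqrt of the dimension), the scores become weights via softmax,
    and the result is the weighted mix of the value vectors.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Toy example: the query matches the first key most closely,
# so the output leans toward the first value vector.
q = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
print(attention(q, keys, values))  # leans toward [10, 0]
```

The striking thing is how little is going on: dot products, an exponential, a weighted average. The sophistication comes from stacking and training this simple operation at scale.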
## Why current AIs are not that smart in some ways
This is again context. The training data may be perfect (it’s not, but assume it is), but the context from us to the AI is incredibly limited. It’s a small window in which we have this conversation. That window may allow only a few hundred messages in a conversation, and even then recall is vague in the middle of the conversation (a technical issue). The context on our side, the human side, is limited.
Going back to creativity, it’s a back-and-forth conversation: hallucinating, testing and changing. With limited context windows, current LLMs are crippled. When we add to that the “one-shot” type of conversation, where the AI starts the answer almost immediately, it means we get a step-by-step look at the creative process. When we see a hallucination we stop and say “hey, this is broken”, but I don’t believe it is. I think we stopped a creative brain before it was able to even take step 2 of a multi-step process.
There are projects to increase the human-side context, and perhaps even make it unlimited. There are also projects to allow the AI to iterate, but these are in their early stages. This problem, though, is not complex at all, and projects like memory-gpt, multi-stage prompting and agents are already making headway, if not already there in alpha form.
However, I believe the AI will be able to take in our individual (and hopefully private) life experience, and importantly it will not be complex models and algorithms that achieve this. It will be as simple as what we have today. So simplicity is key here; it cannot be stressed enough that the simplicity of these systems is what is startling us all.
## Conclusion
The SAFE network has taken a massive turn recently and gone back to its roots. These are founded in simplicity: a natural system based on the functions of an ant colony. That is sophistication emerging through what looks like a complex series of actions we cannot understand, much like the neural networks in these simple LLMs. Each neural-network node is super simple too; it’s adding them together that gives us the complexity that leads to sophistication. However, we cannot understand the simple reactions once they are added together, and neither should we need to.
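That claim about each node being super simple can be shown directly. Below is a sketch of my own (the weights are hand-picked for illustration, not trained): each node is nothing but a weighted sum squashed through a sigmoid, yet wiring just three of them together computes XOR, something no single node can do.

```python
import math

def neuron(inputs, weights, bias):
    # A single node: weighted sum of its inputs, squashed by a sigmoid.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def tiny_network(x1, x2):
    # Three trivial nodes composed into something none of them can do alone.
    h1 = neuron([x1, x2], [20.0, 20.0], -10.0)    # behaves like OR
    h2 = neuron([x1, x2], [-20.0, -20.0], 30.0)   # behaves like NAND
    return neuron([h1, h2], [20.0, 20.0], -30.0)  # AND of the two -> XOR

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", round(tiny_network(a, b)))  # 0, 1, 1, 0
```

The interesting behaviour lives entirely in the composition, not in any one part, which is exactly the ant-colony point.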
It is the simplicity of the individual components communicating that gives us not only natural capabilities, but the ability to evolve systems. Complex mechanisms themselves are not scalable, understandable or natural.
The SAFE network, likewise, is based on simplicity, and recently the team have made it even simpler, but with dramatically powerful consequences. Hard to explain, but easy to see in the outcomes.
This is my excitement in AI: not the ability to create complex algorithms and be applauded as some kind of super genius, but to dive deeper into creating systems that comprise super simple steps to give us even more capable systems. Then, for us all to really understand the connectivity of all things at a fundamental level. It’s not about genius through complexity; it’s about genius through simplicity.
AI will allow us to create and evaluate simple interconnected systems in a way we never could. While others create complex components, I am ecstatic to look deeper and deeper into the cellular automata approach (again), enhancing creativity and advancing our species. So more ant investigation and much more fun!
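As a tiny taste of the cellular automata approach, here is a sketch of an elementary (one-dimensional) cellular automaton. The rule number (110) and grid size are arbitrary choices for the example: each cell’s next state depends only on itself and its two neighbours, yet the global pattern that unfolds is famously intricate.

```python
def step(cells, rule=110):
    """One generation of an elementary cellular automaton.

    Each cell looks only at (left, self, right); those three bits
    index into the bits of the rule number to pick the next state.
    The row wraps around at the edges.
    """
    n = len(cells)
    nxt = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (centre << 1) | right
        nxt.append((rule >> idx) & 1)
    return nxt

# A single live cell, evolved for a few generations: a trivially
# simple local rule producing a pattern nobody designed.
row = [0] * 31
row[15] = 1
for _ in range(12):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

This is the same theme as the ant colony and the neural network: no component understands the whole, and the whole cannot be read off from any component.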