Recently, I joined BrightStar Labs (https://brightstarlabs.ai/) as an affiliate researcher, where I’ve been working on a class of systems known as Emergent Models (EMs). They’re not built like neural networks or traditional programs. Instead, they evolve from very simple beginnings: just a line or grid of colored cells following a small set of update rules. Over time, patterns emerge. Some of those patterns perform tasks that resemble basic forms of computation.
At first, EMs may seem abstract or even mechanical. But the more I work with them, the more interesting questions they raise. These questions aren't just about computation, but about how we understand concepts like memory, intelligence, and structure.
Intelligence Without Intent?
One of the main features of an EM is that it’s not designed in the usual sense. There's no optimizer telling it what to do, and no fixed model architecture. Instead, its behavior is shaped by initial conditions and a rule table that determines how each cell updates based on its neighbors. From this, the system can sometimes learn to solve problems, like multiplying a number by 2 or adding two numbers.
*An example of EM43, an emergent model trained to double numbers using cellular automata. The input n=4 is encoded by a beacon of 5 zeroes followed by a red marker (R), separated from the fixed-length program by two blue cells (B). As the automaton evolves, the interaction between the program and input structure produces the output. Once over 50% of the active cells are blue, the model halts, and the output (here: 8) is decoded from the distance between the final R and the last B cell, illustrating generalization beyond the training range. (Giacomo Bocchese, https://bocchesegiacomo.github.io/em43viewer/)*
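To make the mechanism in the figure concrete, here is a minimal Python sketch of a one-dimensional cellular automaton of this kind. The halting test (a blue majority among active cells) and the distance-based decoding follow the caption above, and the encoding helper approximates the beacon-plus-marker scheme it describes; everything else, including the state names and the randomly filled rule table, is a placeholder rather than the actual trained EM43 rules.

```python
import random

# Symbolic cell states. The names follow the figure caption; the exact state
# set of EM43 is an assumption here, chosen only to make the sketch readable.
EMPTY, ZERO, BLUE, RED = ".", "0", "B", "R"
STATES = [EMPTY, ZERO, BLUE, RED]

# Rule table: maps each (left, self, right) neighborhood to a next state.
# Here it is filled randomly just to show the mechanics; a working model such
# as EM43 relies on a specific table, so this sketch will not actually double numbers.
random.seed(0)
RULE = {(l, c, r): random.choice(STATES)
        for l in STATES for c in STATES for r in STATES}

def step(tape):
    """Update every cell simultaneously from its (left, self, right) neighborhood."""
    padded = [EMPTY] + tape + [EMPTY]
    return [RULE[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

def halted(tape):
    """Halting test from the caption: over 50% of active (non-empty) cells are blue."""
    active = [c for c in tape if c != EMPTY]
    return bool(active) and sum(c == BLUE for c in active) / len(active) > 0.5

def decode(tape):
    """Decoding from the caption: distance between the final R and the last B cell."""
    reds = [i for i, c in enumerate(tape) if c == RED]
    blues = [i for i, c in enumerate(tape) if c == BLUE]
    return reds[-1] - blues[-1] if reds and blues else None

def encode(program, n):
    """Rough version of the caption's encoding: program cells, a BB separator,
    a beacon of n + 1 zeroes, then the red input marker."""
    return program + [BLUE, BLUE] + [ZERO] * (n + 1) + [RED]

tape = encode([ZERO, RED, ZERO], n=4)   # the "program" cells here are arbitrary
for _ in range(100):                    # cap the run; a random table rarely halts cleanly
    if halted(tape):
        print("output:", decode(tape))
        break
    tape = step(tape)
```

Running this with a random table mostly produces noise; the point is only to show how a rule table, a halting convention, and a decoding convention fit together into something that can compute.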
What makes this interesting is that the system doesn’t "know" it’s solving anything. There’s no internal plan or strategy. Yet the outcome still lines up with what we’d expect from a deliberately programmed system. This opens the door to a different kind of question: can a system be intelligent even if it’s not trying to be?
We often associate intelligence with intent, with some idea of purpose or direction. But EMs show that it might be possible to produce intelligent-looking behavior without either. That doesn't make them conscious or aware, but it does suggest that some parts of intelligence may emerge from structural dynamics alone.
Constraint Before Purpose
One of the subtler features of EMs is how they achieve results. There’s no explicit objective or loss function, but the system often evolves into a state that satisfies some kind of constraint, like aligning markers or positioning a cell at the correct spot. This reminds me of how constraint satisfaction works in logic or even in certain types of art. It’s not about reward, but about structure.
This idea might offer an alternative way of thinking about intelligence. Maybe it’s not always about seeking goals. Sometimes, it’s about finding a structure that “fits” a set of conditions, whether or not we understand why. In this sense, EMs don’t model desire or preference; they model alignment.
A Different Lens on Computation
EMs don’t offer a full theory of thought or cognition, and they’re not meant to. But they do raise questions that traditional AI systems don’t. Instead of focusing on prediction, they focus on transformation. Instead of following a loss function, they evolve under structure. Instead of being trained for a task, they simply develop within a system of rules.
This makes them more than a technical curiosity. They give us another way to think about how complex behavior can form, and perhaps even how minds can arise without minds behind them.
(P.S. Thanks to BrightStar Labs founders Giacomo Bocchese, Nicola Giacobbo, and Nicolo Monti for developing the initial theory and mathematics of Emergent Models. Their research preprint can be found here: https://doi.org/10.55277/ResearchHub.70e8enig.)