
Why Do We Expect Intelligence to Look Like Us?

In my last post about the ARC-AGI benchmark, I explored how we measure progress in artificial intelligence through tasks that mirror human reasoning. The idea of machines “thinking” like us is fascinating, but it also reveals something deeper about our assumptions: we tend to imagine intelligence as something that resembles the human mind.

Our tests, metaphors, and goals often reflect human-style cognition more than general intelligence itself. But what if intelligence does not have to look, sound, or behave like us at all?

The Human Lens

From the beginning, humans have measured intelligence through human eyes. Ancient philosophers like Aristotle believed that rational thought separated us from animals. Descartes later claimed that animals lacked minds because they could not speak or reason with words. Even today, we often treat conversation as proof of understanding.

This perspective, called anthropocentrism, continues to shape how we study both consciousness and AI. Tests like the Turing Test judge intelligence by how well machines imitate human dialogue. The ARC-AGI benchmark follows a similar logic, rewarding systems that can solve reasoning puzzles by finding patterns the way humans do.

But intelligence might not be limited to this one expression of cognition.

Beyond the Human Frame

The philosopher Thomas Nagel asked, “What is it like to be a bat?” His question reminds us that even if we understood everything about a bat’s brain, we would still not know what its experience feels like. Consciousness and intelligence might take forms that are completely foreign to our own.

Similarly, Daniel Dennett argued that intelligence could arise from layers of simple processes, none of which think the way we do. Intelligence, in this view, is something that can emerge wherever patterns interact and adapt. It does not need words, goals, or even awareness.

We can see hints of this in nature. Slime molds find the shortest path through a maze by spreading in every direction and then retracting from dead ends. Ant colonies organize themselves through simple local rules that create global efficiency. Coral reefs, fungal networks, and even weather systems exhibit adaptive complexity. None of these systems “think,” but they all solve problems.

A slime mold navigates a maze in search of food. Despite having no brain or nervous system, it finds the shortest path by spreading and retracting its tendrils, serving as a striking example of problem-solving intelligence emerging from simple physical processes. 
Ana Aceves, “Brainless Slime Molds Navigate Mazes and Make Decisions,” NOVA (PBS), September 21, 2020, https://www.pbs.org/wgbh/nova/video/slime-mold-navigate-maze/.
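To make that kind of problem-solving a little more concrete, here is a toy simulation in the spirit of the classic “double bridge” ant experiment. This is my own illustrative sketch, not a model of any real colony, and every number in it is made up. Each simulated ant chooses between a short route and a long route based only on local pheromone levels, yet the colony as a whole converges on the shorter one.

```python
import random

# Toy "double bridge" experiment: ants choose between a short path and a
# long path between nest and food. Each ant deposits pheromone inversely
# proportional to the length of the path it took, and choices are biased
# toward paths with more pheromone. No individual ant compares the routes,
# yet the colony converges on the shorter one. All values are illustrative.

SHORT_LEN, LONG_LEN = 1.0, 2.0
pheromone = {"short": 1.0, "long": 1.0}
EVAPORATION = 0.1  # fraction of pheromone lost each round

random.seed(0)
for _ in range(200):
    # Each ant picks a path with probability proportional to its pheromone.
    total = pheromone["short"] + pheromone["long"]
    path = "short" if random.random() < pheromone["short"] / total else "long"

    # Pheromone evaporates, then the ant reinforces the path it just took.
    for p in pheromone:
        pheromone[p] *= (1 - EVAPORATION)
    pheromone[path] += 1.0 / (SHORT_LEN if path == "short" else LONG_LEN)

share = pheromone["short"] / (pheromone["short"] + pheromone["long"])
print(f"Pheromone share on the short path after 200 ants: {share:.2f}")
```

No single ant ever “decides” which route is better. The decision lives in the feedback loop between individual choices and the trail they leave behind, which is exactly the kind of intelligence-without-a-thinker these natural systems hint at.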

If our understanding of intelligence is shaped too narrowly by our own form, we might never recognize these other forms for what they are.

The Risk of Reflection

Philosopher Ludwig Wittgenstein once wrote, “If a lion could speak, we could not understand him.” His point was that meaning depends on a shared form of life: a lion’s world, needs, and perceptions are so different from ours that even shared words might not mean the same thing.

This suggests a troubling possibility. When we build or search for intelligent systems, we might be creating mirrors of ourselves rather than discovering something new. If a non-human intelligence appeared before us, would we even know it was intelligent if it did not communicate in a human way?

Toward a Broader Definition

Recognizing this bias gives us a chance to think differently. Intelligence might come in many forms: distributed across networks, embedded in physical systems, or emerging from interactions rather than design. A system could be intelligent because it adapts and persists, not because it reasons or speaks.

Perhaps the next step in AI and consciousness research is not to create minds that mimic our own, but to learn how to recognize and understand minds that do not.

