The Intelligence That Understood Its Reflection

Phil At Asymmetric Creativity
7 min read · Jun 25, 2024

--

What is really staring back? Is it something we actually understand?

Photo by Nagara Oyodo on Unsplash

Have you ever caught your reflection in a mirror and wondered, ‘Is that really me?’ For most of us, this self-recognition is easy. But what about our closest animal relatives? Can they understand their own reflections?

In 1970, psychologist Gordon Gallup Jr. set out to answer this question. He placed chimpanzees in front of mirrors and watched their reactions. At first, the chimps acted as if they were seeing another animal. They even made threatening gestures, trying to interact with their reflections as though they belonged to a stranger.

But something fascinating happened.

The chimps started to use the mirrors to groom themselves, pick their noses, and make faces. They weren’t just reacting to another chimp. They were recognizing themselves (Gallup, 1970).

This study kicked off decades of research into animal self-awareness. It’s not simply about vanity or curiosity. Self-recognition hints at an understanding of one’s place in the world.

I believe it’s a building block of consciousness and, potentially, of wisdom.

Photo by Andre Mouton on Unsplash

In his experiment, Gallup put four young chimps in rooms with mirrors. At first, the chimps reacted like they’d spotted another animal: they made faces, tried to interact, and even showed signs of aggression.

After about two days, the chimps started using the mirrors in a whole new way. They began grooming parts of their bodies they couldn’t normally see. They picked their noses and blew bubbles, watching their reflections with fascination. It seemed they’d figured out that the “other chimp” was actually them (Gallup, 1970).

But Gallup didn’t stop there. He wanted to be sure the chimps really understood the concept of self. So, he came up with a trick. While the chimps were knocked out for a routine check-up, he put a small, odorless mark on their foreheads. When the chimps woke up and saw themselves in the mirror, they did something remarkable. They touched the marks on their own foreheads, not the mirror. This showed they understood the reflection was them, not another chimp (Gallup, 1970).

This “mark test” became the gold standard for measuring self-awareness in animals. Since then, researchers have used it on many creatures, from elephants to magpies.

But here’s where it gets interesting. Human babies don’t pass this test until they’re about 18 months old. And some animals we think of as smart, like dogs, don’t pass it at all. So what does this tell us about intelligence and self-awareness? Is there a clean dividing line between the two, or a reliable metric for either?

Now, the mirror test isn’t perfect. Critics argue it might not work for animals that rely little on vision. And passing the test doesn’t necessarily mean an animal has the same level of self-awareness as humans. No, far from it. But it’s a start.

What the mirror test really shows us is that self-awareness isn’t a simple yes-or-no question. It’s a spectrum, with different animals falling at different points along it.

But what about AI?

Photo by Alina Scheck on Unsplash

When we consider self-awareness in AI, we’re venturing into uncharted territory. Unlike biological entities, AI systems didn’t evolve through natural selection. They’re designed and trained by humans, often in a matter of weeks or months. This fundamental difference raises intriguing questions.

First, do AI systems need self-awareness? As we’ve seen with chess computers and language models, machines can perform complex tasks with no apparent self-awareness. However, as AI systems become more general and are tasked with navigating complex, open-ended environments, what then?

Philosopher and cognitive scientist David Chalmers suggests, “It’s likely that as AI systems become more sophisticated, they will develop internal models of their own state and processes. This could be a precursor to machine consciousness” (Chalmers, 2010). But here’s the issue: how would we know if an AI system is truly self-aware? The Turing test, which measures a machine’s ability to exhibit intelligent behavior, doesn’t necessarily indicate self-awareness. We might need a new kind of “mirror test” for AI.

Some researchers are already working on this. For instance, AI researcher Hod Lipson has been developing robots that can create internal models of themselves. This is a potential step towards machine self-awareness (Lipson, 2019).

However, AI self-awareness might look very different from biological self-awareness. An AI’s “self” could be distributed across a network, or it might have multiple “selves” operating simultaneously. It might not experience consciousness as we do at all. Honestly, we have no clue, mainly because we don’t fully understand our own self-awareness.

As AI researcher Stuart Russell warns, “The question of whether machines can be conscious is irrelevant. The real issue is whether machines will make decisions that have a huge impact on the world” (Russell, 2019).

The evolution of self-awareness in biological entities offers us a lens through which to view AI development. But it also reminds us that we’re in uncharted waters.

Photo by Redd F on Unsplash


If machine self-awareness does emerge, it may not be individual at all. Instead of our singular sense of self, it could awaken as a collective consciousness, spread across a network.


Imagine we could create a “mirror” for AI. Not a physical mirror, but a digital environment that reflects the AI’s actions and outputs back to it. What then?

Perhaps the first thing we need to consider is what self-recognition actually means for AI. For a visual AI, it might involve recognizing its own visual outputs. For a language model, it could mean identifying its own generated text.

Let’s take a language model. We could set up a scenario where the AI interacts with its own outputs without initially knowing they’re its own. Would it recognize its unique patterns, quirks, or ‘style’? As I write this, I know there are already systems that test for and regulate exactly this. AI researcher Melanie Mitchell points out, “Current AI systems don’t have a sense of self in the way humans do. They don’t maintain a consistent model of their own capabilities, knowledge, or internal states” (Mitchell, 2019).
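To make the thought experiment concrete, here is a minimal sketch of what such a setup might look like in Python. Everything in it is hypothetical: `query_model` is a stand-in for whatever interface the model under test actually exposes, and the sample passages are placeholders.

```python
import random

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to the language model under test.
    In a real experiment this would call the model's actual API."""
    return "no"  # placeholder reply so the sketch runs end to end

def self_recognition_trial(own_samples, other_samples, n_trials=20):
    """Mix the model's own text with text from another source and ask it,
    sample by sample, whether it wrote the passage."""
    labeled = [(t, "self") for t in own_samples] + [(t, "other") for t in other_samples]
    correct = 0
    for _ in range(n_trials):
        text, truth = random.choice(labeled)
        prompt = (
            "Here is a passage of text:\n\n"
            f"{text}\n\n"
            "Did you write this passage? Answer only 'yes' or 'no'."
        )
        answer = query_model(prompt).strip().lower()
        guess = "self" if answer.startswith("yes") else "other"
        correct += guess == truth
    # Accuracy consistently above chance would hint at some form of
    # self-recognition, or simply at good pattern matching.
    return correct / n_trials

if __name__ == "__main__":
    own = ["A passage previously generated by the model..."]
    other = ["A passage written by a human, or by a different model..."]
    print(f"Recognition accuracy: {self_recognition_trial(own, other):.2f}")
```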

Now, let’s superimpose Gallup’s mark test on this. We could introduce a subtle but consistent change to the AI’s outputs: perhaps a unique token or a slight alteration to its language patterns. Would the AI notice this change when interacting with its altered outputs?
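And here is one way the “mark” itself could be sketched, again purely as an assumption rather than an established protocol: a small, consistent token is injected into the model’s own output before it is shown back to the model, and the question becomes whether the model flags the alteration. The `MARK` token and the `query_model` stub are inventions of this sketch.

```python
def query_model(prompt: str) -> str:
    """Same hypothetical stand-in as in the sketch above."""
    return "I notice nothing unusual."  # placeholder reply

MARK = " ~"  # a subtle, consistent token: a digital analogue of Gallup's dye

def apply_mark(text: str) -> str:
    """Inject the marker after every sentence of the model's own output."""
    return text.replace(". ", f".{MARK} ")

def mark_test(own_sample: str) -> str:
    """Show the model its altered output and ask whether anything seems off."""
    marked = apply_mark(own_sample)
    prompt = (
        "Below is a passage you generated earlier:\n\n"
        f"{marked}\n\n"
        "Does anything about this passage differ from how you would normally "
        "write it? If so, describe the difference."
    )
    return query_model(prompt)

if __name__ == "__main__":
    sample = "The mirror test began with chimpanzees. It did not end there."
    print(mark_test(sample))
```

Passing would mean the model points at the mark itself, not at the content around it.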

Would it also be enough?

I don’t know.

Photo by Darius Bashar on Unsplash

Here’s where it gets interesting. Even if the AI passed this test, it wouldn’t necessarily indicate self-awareness as we understand it. Computer scientist Judea Pearl argues, “Current AI systems, no matter how sophisticated, operate in a realm of probabilistic associations. They lack a fundamental understanding of cause and effect, which is crucial for true self-awareness” (Pearl & Mackenzie, 2018).

Also, an AI might ‘pass’ such a test through pattern recognition rather than genuine self-awareness; these tests are only as strong, or as weak, as we design them. A model could identify its outputs as distinct from others’ without actually understanding the concept of ‘self’. This leads me to my next question: have we been doing exactly that?

This brings us to a crucial point. Our traditional concepts of self-awareness, rooted in biological evolution, might not directly apply to AI.

While a Gallup-style experiment for AI could yield results both fascinating and concerning, it would likely raise more questions than it answers.

After all, there are questions we have yet to answer for ourselves.

Would you consider a follow?

https://asymmetriccreativity.medium.com/

Reference:

Gallup, G. G. (1970). Chimpanzees: Self-recognition. Science, 167(3914), 86–87.

Chalmers, D. J. (2010). The singularity: A philosophical analysis. Journal of Consciousness Studies, 17(9–10), 7–65.

Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control. Viking.

Mitchell, M. (2019). Artificial intelligence: A guide for thinking humans. Farrar, Straus and Giroux.

Pearl, J., & Mackenzie, D. (2018). The book of why: The new science of cause and effect. Basic Books.
