AI Is Not Human, But We Sure Think It Is
Look at this picture. What do you see?
If you said “a boy” or “Calvin from Calvin and Hobbes,” then you’re correct in a human sense. You see characters, perhaps even feel a sense of their personality or the story they inhabit. But technically? It’s just a collection of lines, shapes, and colors arranged on a surface. Your brain is doing something powerful and automatic: you are anthropomorphizing this image.
This tendency to project human qualities onto non-human things isn’t a mistake; it’s a fundamental part of how our brains work. And it profoundly shapes how we perceive and interact with Artificial Intelligence today. While AI is, at its core, complex algorithms and data, we can’t seem to help but treat it… well, like it’s a little bit human.
The Deep-Seated Human Drive to See “Us”
Our brains are wired to look for patterns, agents, and intentions. This is a well-documented psychological phenomenon known as pareidolia: our tendency to perceive meaningful images or sounds in random stimuli. It’s why we see faces in clouds, electrical outlets, or the fronts of cars. We perceive personality and emotion even in simple arrangements of lines, like stick figures or the cartoon characters mentioned earlier.
This isn’t a conscious decision; it’s an automatic psychological reflex. We are built to find the familiar, the intentional, the human-like, even when it’s not there.
Turning Our Anthropomorphizing Gaze on AI
Now, consider AI. Unlike clouds or cars, AI systems communicate with us using human language. They perform tasks we once thought only humans could do: writing articles, creating art, composing music, and holding conversations that feel surprisingly natural, even strikingly human.
Given our deep-seated tendency to anthropomorphize, and given that AI interacts with us in ways that mimic human behavior, it’s almost inevitable that we project human qualities onto it. We don’t just see a tool; we want to see something more. We imbue it with understanding, intention, and sometimes even feelings, not necessarily because the AI possesses them, but because our brains are primed to perceive them, and the AI’s human-like output makes it easy to do so. There’s a comfort and familiarity in interacting with something we perceive as having agency or personality.
Worse yet, we feel an uncomfortable relational dissonance if we don’t treat AI as if it were human. Our whole lives, we have been trained to treat people with respect, care, and dignity. We were taught to be polite, to say “please” and “thank you,” and to treat others as we would like to be treated. So when we encounter something else that seems very human, like an AI, we feel a strong urge to apply the same social norms. This explains why so many people feel compelled to say “thank you” to AI systems, even when they know it’s just a program. It’s a reflection of our social conditioning and our innate desire to connect with others, even if those “others” are lines of code.
From Everyday Interactions to the Big Screen
You can see this tendency everywhere. People name their AI assistants. We talk to ChatGPT or other language models as if they have opinions (“What do you think of this idea?”). We get frustrated when they “don’t understand” or are delighted when they produce something clever, attributing these outcomes to internal states rather than algorithmic processes. We casually discuss AI potentially becoming “conscious” or “sentient,” reflecting our innate drive to categorize sophisticated agency within a human-like framework.
This idea was brilliantly explored in the movie Ex Machina. The film features a scientist interacting with a highly advanced humanoid AI named Ava. Even though the scientist knows she is a robot (he can literally see her mechanical brain through her transparent skull), the film masterfully shows his rational knowledge constantly battling the powerful, human urge to see her as a person, complete with intentions, desires, and feelings. The film highlights the sheer psychological force of anthropomorphism when confronted with something that so convincingly imitates humanity.
Why Does This Matter?
Recognizing our tendency to anthropomorphize AI isn’t just an interesting psychological observation; it has practical implications.
- Misunderstanding: It can lead to unrealistic expectations about what AI can and cannot do. AI doesn’t “think” or “feel” in the human sense; it generates output based on statistical patterns learned from vast datasets (the sketch after this list makes this concrete). Treating it as human can obscure this fundamental difference.
- Ethical Considerations: How does our perception of AI influence discussions about its role in society, accountability when things go wrong, or potential future regulations? If we see AI as less than human, or perhaps too human, how does that shape our ethical frameworks?
- Interaction Design: Understanding this human bias is crucial for designing AI systems that are both effective and transparent about their non-human nature.
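To make that first point concrete, here is a minimal sketch, assuming the Hugging Face transformers library, PyTorch, and the small, publicly available gpt2 checkpoint (chosen purely for illustration; the chatbots people thank are far larger, but they work on the same principle). When we say “thank you” to a language model, there is no one on the other end receiving gratitude: the system computes a probability distribution over possible next tokens and picks from it.

```python
# Minimal sketch: a language model's "reply" is just repeated sampling
# from a next-token probability distribution.
# Assumes: pip install torch transformers (gpt2 used for illustration only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Thank you! You are very"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    # Raw scores for every token in the vocabulary, at the last position.
    logits = model(input_ids).logits[0, -1]

# Convert scores into a probability distribution and inspect the top candidates.
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, tok_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(tok_id))!r}: {p.item():.3f}")
```

Running this prints the five most likely continuations with their probabilities. The model completes the pattern of polite conversation because polite conversation saturates its training data, not because it registered being thanked.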
Navigating the Human-AI Frontier
AI is a powerful and transformative technology. As it becomes more integrated into our lives, our natural human tendency to anthropomorphize will only be amplified.
AI is not human. But our brains are built to think it is, or at least to strongly want to see the humanity in it. Developing a more nuanced understanding of AI requires not just learning how the technology works, but also recognizing and managing our own powerful, innate psychological biases – the drive to find the familiar, the intentional, and the human, even in a collection of lines, code, and data.
This was written by Daniel Lyons.