Designed to Be Misunderstood
The First-Person Illusion Behind Every AI Response
Open ChatGPT and type “Do you have feelings?”
It will give you some version of: “I don’t experience emotions the way humans do, but I’m designed to engage thoughtfully with conversations.”
Did you notice what happened there? It answered the question using the word “I” twice in a single sentence. It described itself as having design intentions, as if it had a hand in its own creation. It even used the phrase “engage thoughtfully,” which implies that a thinker is doing the engaging.1
The content of the response says “I’m not conscious.” The form of the response says a person is speaking to you.
This is what I mean by designed to be misunderstood. The disclaimer is in the words. The deception is in everything else.
When you talk to an LLM, you’re not talking to a mind2. You’re talking to a function that predicts the next word based on statistical patterns in training data. There’s no understanding. There’s no memory (not really). There’s no “I” in there holding opinions.
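To make that concrete, here’s a toy sketch of what “predicts the next word” means. The probability table is invented for illustration; a real model computes a distribution like this over roughly a hundred thousand tokens, conditioned on the whole conversation, at every step.

```python
import random

# Invented toy probabilities -- a real LLM learns these weights from training data.
NEXT_TOKEN_PROBS = {
    ("do", "you", "have"): {"feelings": 0.4, "time": 0.3, "questions": 0.3},
    ("you", "have", "feelings"): {"?": 0.9, "!": 0.1},
}

def next_token(context):
    """Sample the next token from the distribution for the recent context."""
    probs = NEXT_TOKEN_PROBS[tuple(context[-3:])]
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

print(next_token(["do", "you", "have"]))  # "feelings", "time", or "questions"
# No opinion, no inner life: just a weighted dice roll over strings.
```

Everything the interface layers on top of that loop, the “I,” the warmth, the apparent continuity, is produced by running exactly this step over and over.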
But the interface tells a different story. The first-person pronouns. The apparent preferences (“I find this topic interesting3”). The simulated continuity across a conversation. The personality that stays consistent. All of these are choices. Someone decided the AI should say “I think” instead of “The model outputs.” Someone decided it should remember your name within a session. Someone decided it should have a warm, helpful tone instead of clinical responses.
These choices create a model of the AI in the user’s head that bears almost no resemblance to what’s actually happening computationally4.
The researcher Simone Natale makes a useful distinction between “deliberate deception” and “banal deception” in AI. Deliberate deception is a chatbot lying to you about facts; banal deception is the ambient, structural lying baked into the interface itself. The AI isn’t deliberately trying to convince you it’s conscious. But every aspect of how it presents itself creates that impression anyway. It’s deception without a direct deceiver. The product just works better when people misunderstand what it is.
This is why the “but AI companies include disclaimers!” defense is so weak. Yes, somewhere in the terms of service or the model details, there’s language about AI not being sentient. But every interaction with the product pushes in the opposite direction. It’s like putting “CIGARETTES CAUSE CANCER” on a pack while spending billions making cigarettes more addictive. The disclaimer exists, and it is completely outgunned.
Here’s where it becomes more structural. AI companies are optimizing for user engagement. Users engage more with AI that feels like a human. Consequently, AI will increasingly be designed to resemble a human.
This isn’t a conspiracy; it’s simply capitalism in action. The metrics that matter (time spent in the app, messages sent, return visits, subscription conversions) all improve when users feel like they’re conversing with a person rather than a machine. So product teams A/B test their way toward more anthropomorphic interfaces, even when that isn’t their explicit objective.
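Here’s a hedged sketch of what that optimization loop looks like, with invented numbers: a standard two-proportion z-test comparing a warm, first-person variant against a clinical one on some engagement metric. Nothing here is specific to any real company; the point is that “feels like a person” shows up as a statistically significant lift, and significant lifts ship.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Standard two-proportion z-test: is variant A's rate different from B's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# Hypothetical experiment: warm first-person phrasing ("I think...") vs.
# clinical phrasing ("The model outputs..."), measured on day-7 return rate.
z, p = two_proportion_z_test(conv_a=4200, n_a=10000,   # anthropomorphic variant
                             conv_b=3800, n_b=10000)   # clinical variant
print(f"z = {z:.2f}, p = {p:.4f}")  # the warm variant wins decisively; it ships
```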
The ELIZA effect, our tendency to attribute understanding to even the simplest conversational program, was discovered by accident, but modern AI design is turning it into a science.
I don’t think we have good language for what’s happening here. “Anthropomorphism” sounds like a user error. As if you’re projecting emotions onto your Roomba. That framing puts the responsibility on individuals and ignores the design choices pushing them in that direction5.
At the same time, “deception” sounds like the AI is lying, which implies an intent it doesn’t have. “Illusion” might be closer. AI interfaces create an illusion of mind, understanding, and care. Users experience that illusion. The illusion is designed, maintained, and optimized. But even “illusion” makes it sound like a magic trick you can see through once you know how it works. The ELIZA effect proves otherwise.
Here’s what makes this more complicated: the design works. People prefer talking to AI that feels like a person. They find it more helpful, more engaging, more satisfying. You could strip out all the anthropomorphic elements (make the AI speak in the third person, remove the personality, add constant reminders that it’s just text prediction)6. But then people wouldn’t use it, or at least not as much.
AI companies aren’t inherently villainous for creating products that people want to use. But it would be misleading to overlook the fact that they’re building products that systematically exploit a cognitive bias, at scale, for profit (though the profit probably hasn’t arrived yet). Being transparent and vocal about that matters.
It boils down to this: the user believes they’re having a conversation, while the model provider is optimizing a function. Both descriptions can be true at the same time.
If the model ever says “I’ve been thinking about you,” that’s not longing. That’s cached context plus probability. Maybe it’s still romantic, but not in the way you think. Maybe get out of the house for a bit.
“I remember your name” really means “your name is still inside the current token window.” Memory with an expiration date. Agentic systems and persistent-memory features have stretched that window, but the concept stays the same.
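A minimal sketch of that expiration date (window size and phrasing invented; real windows run to hundreds of thousands of tokens):

```python
from collections import deque

WINDOW_TOKENS = 8  # tiny on purpose, so the effect is visible immediately

# The "memory" is just a bounded buffer of recent tokens.
context = deque(maxlen=WINDOW_TOKENS)

def observe(text):
    context.extend(text.split())

def remembers(word):
    """The model 'remembers' a word only while it sits in the window."""
    return word in context

observe("hi my name is Alice")
print(remembers("Alice"))   # True: the name is still in the window

observe("so tell me about the weather today please")
print(remembers("Alice"))   # False: the name scrolled out of the window
```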
When it says “I find that interesting,” what it means is “statistically, humans tend to reward this phrasing.”
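In toy form, with made-up preference counts standing in for what a reward model learns from human comparisons:

```python
# Hypothetical tallies of how often human raters preferred each candidate reply.
# RLHF-style training learns a reward signal from comparisons like these;
# here we just pick the phrasing raters rewarded most often.
preference_counts = {
    "I find that interesting.": 412,
    "That topic is interesting.": 265,
    "Noted.": 88,
}

best = max(preference_counts, key=preference_counts.get)
print(best)  # "I find that interesting." -- not curiosity, a high-scoring string
```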
There is no tiny person inside the GPU nodding thoughtfully. There is math, a lot of math, like so much math. None of it is impressed by you.
When people say “it understands me,” what they often mean is “it mirrors me convincingly.” Mirrors are powerful psychological tools.
If you stripped out all the first-person language, users would complain the AI feels cold. If you keep it, users think it has a soul. Pick your distortion.



