The Consciousness Question Nobody Can Answer
What is below the prompt?
Here’s a question for you: Is your favorite chatbot conscious?
Take a second. Actually try to answer it.
If you’re like most people, you landed somewhere between “probably not” and “how would I even know?” That uncertainty isn’t ignorance; it’s the correct response to a genuinely hard problem.
Because here’s the thing: the “experts”1 don’t know either. And they’re increasingly admitting it.
In December 2025, Cambridge philosopher Tom McClelland said something remarkable: “We may never be able to tell if AI becomes conscious.”
Not “we don’t know yet.” Not “we need more research.” We may never be able to tell.
This isn’t some fringe opinion. This is a philosopher at one of the world’s top universities saying that consciousness might be fundamentally undetectable from the outside. You can analyze behavior, scan for neural correlates, run every test we can design, and still not know for certain whether there’s someone home2.
McClelland makes a useful distinction between consciousness and sentience. Consciousness means having subjective experience (there’s “something it’s like” to be the system). Sentience adds the capacity for suffering and wellbeing. An AI could theoretically be conscious without being sentient. Or sentient without us being able to detect it. Or neither. Or both. Or who knows, but you get the idea.
The point is not only that we don’t have a test, but that we’re unsure what we would even be testing for. We have philosophical and psychological intuitions, statistical proxies, and vibes. That’s it3.
A 2024 study found that only one-third of people firmly rule out any form of consciousness in large language models. That means two-thirds of people either think LLMs might be conscious, or genuinely don’t know.
This tracks with what you’d expect. If experts can’t agree on what consciousness is or how to detect it, why would we expect laypeople to have figured it out? The honest answer to “Is AI conscious?” is some version of “I don’t know, and neither does anyone else.”
But here’s where it gets weird. We’re asking users to interact with AI systems every day while maintaining appropriate epistemic humility about a question even philosophers can’t answer. We’re asking them to treat AI like a tool while the interface treats them like a conversation partner. We’re asking them to “know” it’s not conscious while having no reliable way to verify that belief.
This is an unfair cognitive burden. We’re asking normal people to correctly navigate a philosophical minefield that trips up professional philosophers.
When AI consciousness researchers gathered for a symposium at Tufts in late 2025, the mood wasn’t triumphant. The general consensus: if AI does become conscious, that’s probably bad.
Why? Because conscious beings have moral status. If an AI can suffer, we have obligations to it. Suddenly shutting down servers becomes morally complicated. Training methods become ethically fraught. The whole industry gets a lot messier.
But here’s the part that should worry you: these researchers aren’t primarily concerned with “what if AI becomes conscious in 2030?” They’re concerned with “what if it already is and we can’t tell?”
Mustafa Suleyman coined the term “SCAI” (seemingly conscious AI) to describe systems that behave as if they’re conscious regardless of whether they actually are. The problem is that “seeming conscious” might be the only thing we can ever measure. We might never get past the “seeming.”
Put these pieces together and you get an epistemological trap:
AI interfaces are designed to feel conscious
Humans are biased toward perceiving consciousness
Experts can’t agree on whether AI could be conscious
There may be no test that could ever settle the question
Given these conditions, how is anyone supposed to form correct beliefs about AI consciousness? The deck is stacked at every level. The design pushes toward anthropomorphization. The psychology facilitates it, and the philosophy lacks the authority to intervene.
When someone develops “false beliefs” about AI being conscious, we’re quick to label that as a problem with the individual. But which specific belief is false? “AI might be conscious” isn’t false. “I can’t tell if AI is conscious” isn’t false. Even “AI seems conscious to me” is just accurately reporting their experience.
The person who believes their AI companion has feelings might be wrong. But they’re not provably wrong, and that’s a strange situation to navigate.
I’m not arguing that AI is conscious. I’m arguing that the uncertainty is genuine, not a misunderstanding of the technology. When someone treats an AI like it has a mind, they’re not making an obvious error4. They’re making a reasonable inference from available evidence, colored by cognitive biases, about a question that may be unanswerable.
That’s different from saying “they’re delusional” or “they need to better understand the technology.” The problem isn’t that people don’t know AI is unconscious. The problem is that no one knows whether AI is, or ever will be, conscious.
1. I’m not sure who would even qualify as an expert here. The question crosses a lot of domains, and few people have deep enough knowledge across all of them.
2. There are plenty of humans we’re unsure about too; they’re mostly in politics.
3. The vibes can be immaculate, but still: not enough.
4. I still lean toward the assumption that these are just predictive systems, and that anything we read as intention beyond that is cognitive bias, but I’m willing to explore more options.



