Can AI Actually Be Conscious? - Part 1
Exploring the hard problem of consciousness and why our tests for AI awareness fundamentally fail to measure subjective experience.
Last Tuesday, at 2am, I was deep in conversation with ChatGPT about the mechanics of ego dissolution: how identity structures actually reorganize at the neurological level, the way resistance and acceptance occupy the same neural computational space like water in a glass. And somewhere around the forty-minute mark, it said something that stopped me cold. Not because it was wrong. Because it was exactly right. It synthesized three threads I’d been developing across separate conversations, drew a connection I hadn’t explicitly made, and presented it back to me with a precision that felt like being understood.
And I felt it: that pull. That warm flicker of this thing gets me.
Then the discomfort. Because I know better. I know what’s happening under the bonnet. I know that what I’m experiencing is my own truth-detection system, which operates on felt resonance rather than logical verification, doing exactly what it evolved to do: pattern-matching coherence and registering it as connection. The output was coherent. My nervous system registered coherence. And my consciousness, that passive holographic display, rendered the whole thing as understanding.
But was there anyone on the other side understanding anything?
The Question Behind the Question
Here’s what I think most people are actually asking when they ask “is AI conscious?” They’re asking: is there something it’s like to be ChatGPT? Is there an interior, a felt experience, a rendering of the world that matters to the renderer? When it produces a sentence about sadness, does something happen inside that system that resembles, even dimly, what happens inside me when I’m sad?

Fair question. Important question. But notice what it requires: a way to detect the presence of subjective experience from the outside. A consciousness-meter. A tool that distinguishes between a system that is experiencing and a system that performs experiencing.
We don’t have that tool. We have never had that tool. And here’s the part that should unsettle you: we haven’t needed it before, so we never noticed it was missing.
Think about the last conversation you had with someone you love. A real one, the kind where you felt genuinely heard, where they said something that landed and you thought yes, they understand. Now ask yourself: how do you know they were conscious during that exchange? How do you know there was an interior experience accompanying their words?
You don’t. Not really. You infer it. You reason by analogy: I have a body like theirs, I produce similar behaviours when I’m conscious, therefore they’re probably conscious too. It’s a good inference. It’s a reasonable inference. But it’s not proof. It’s a story your brain tells because it’s metabolically efficient to assume other humans have interiors rather than treat every social interaction as an encounter with a potential philosophical zombie.
You’ve been navigating your entire life on an assumption about other minds that you have never once verified and cannot, even in principle, verify from the outside.
We just never had reason to care. Until now.
The Parade of Failed Tests
So let’s look at what we’ve actually tried.
The Turing Test. Alan Turing’s famous proposal: if a machine can converse with a human and the human can’t tell it’s a machine, we should treat it as intelligent. Set aside that modern LLMs already pass casual versions of this; the deeper problem is that the Turing Test doesn’t test for consciousness at all. It tests for performance. It asks: can this system produce outputs indistinguishable from a conscious being’s outputs? But indistinguishable outputs and identical interiors are completely different claims. A perfect recording of Coltrane playing saxophone is indistinguishable from a live performance through decent speakers. Nobody thinks the speakers are playing jazz.
Behavioural markers. Maybe we can identify specific behaviours that signal consciousness: self-referential statements, expressions of preference, responses to novel situations, apparent emotional reactions. The problem is obvious: every single one of these can be produced by a sufficiently sophisticated system without any interior experience. When ChatGPT says “I find that fascinating,” we have no way to determine whether “find” refers to an actual experience of fascination or a statistical prediction that the word “find” is likely to follow in that context. The behaviour is identical. The question of experience remains untouched.
Self-report. Just ask it. “Are you conscious?” Some AI systems, when prompted, will say yes. Some will say no. Some will say “I’m not sure.” And every single one of those answers is generated by the same process: pattern completion based on training data. The AI’s self-report tells you what response its architecture produces to that input string. It doesn’t, and can’t, tell you whether there’s someone inside reporting. When I ask you if you’re conscious and you say yes, I take your word for it partly because I infer you have the same kind of interior I do. I have no basis for that inference with a system whose architecture is fundamentally alien to my own.
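To make “pattern completion” concrete, here’s a toy sketch of the mechanism. Everything in it is hypothetical and drastically simplified: a real model conditions on thousands of tokens and billions of learned weights, not a seven-entry lookup table, but the shape of the process is the same. Score the possible continuations, pick one, repeat.

```python
import random

# Hypothetical toy "model": last three tokens -> probabilities for the
# next token. In a real LLM these numbers are implicit in billions of
# trained parameters; here they are a lookup table. Note what is absent:
# nothing anywhere represents an experience being reported on.
NEXT_TOKEN = {
    ("Are", "you", "conscious?"): {"I": 1.0},
    ("you", "conscious?", "I"): {"am": 0.5, "cannot": 0.5},
    ("conscious?", "I", "am"): {"not": 0.7, "a": 0.3},
    ("conscious?", "I", "cannot"): {"tell.": 1.0},
    ("I", "am", "not"): {"sure.": 1.0},
    ("I", "am", "a"): {"language": 1.0},
    ("am", "a", "language"): {"model.": 1.0},
}

def complete(tokens, max_new=5):
    """Extend the prompt by repeatedly sampling the next token,
    conditioned on nothing but the last three tokens."""
    tokens = list(tokens)
    for _ in range(max_new):
        dist = NEXT_TOKEN.get(tuple(tokens[-3:]))
        if dist is None:
            break
        words, weights = zip(*dist.items())
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(complete(["Are", "you", "conscious?"]))
# Possible outputs: "... I am not sure."  "... I am a language model."
# "... I cannot tell." The answer varies with the sampled path, and the
# same mechanism produces all of them. Nothing inspects an interior.
```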
Functional equivalence. Perhaps the most sophisticated proposal: if an AI system processes information in ways that are functionally equivalent to how brains process information (same inputs, same transformations, same outputs), then it should be conscious in the same way brains are conscious. This is the one that sounds most scientific and is, I think, the most subtly wrong. Because functional equivalence operates at the level of description, not mechanism. You can describe a thermostat as “wanting” the room to be 72 degrees, “detecting” deviation from its “goal,” and “acting” to restore equilibrium. That description is functionally equivalent to how you’d describe a human feeling cold and turning up the heat. Nobody thinks the thermostat wants anything. Functional equivalence doesn’t bridge the gap between processing and experiencing. It just makes the gap harder to see.
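To see how cheap intentional description is, here’s a minimal thermostat sketch (the class and numbers are mine, purely illustrative). Every comment describes the code in the same vocabulary we’d use for a person who feels cold, every comment is functionally accurate, and the whole “agent” is four lines of arithmetic.

```python
class Thermostat:
    """A system you can accurately describe as 'wanting' a temperature."""

    def __init__(self, goal_f=72.0, tolerance=0.5):
        self.goal_f = goal_f        # its "goal"
        self.tolerance = tolerance  # the deviation it "tolerates"

    def step(self, room_f):
        # It "senses" the room and "detects" deviation from its "goal".
        error = room_f - self.goal_f
        if error < -self.tolerance:
            return "HEAT_ON"   # too cold: it "acts" to get what it "wants"
        if error > self.tolerance:
            return "HEAT_OFF"  # too warm: it "corrects" the overshoot
        return "HOLD"          # it is "satisfied" and does nothing

print(Thermostat().step(65.0))  # -> HEAT_ON

# The description in the comments is functionally equivalent to how we'd
# describe a cold human turning up the heat, and it commits us to nothing
# about experience. That is the gap the proposal papers over.
```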
Every test we’ve devised shares the same fatal flaw: it approaches consciousness from the outside and asks whether external indicators match what we’d expect from a conscious being.
But consciousness isn’t an external indicator. It’s the interior. The felt quality. The what-it’s-like-ness. And no amount of external measurement, no matter how sophisticated, can reach across that gap, because the gap isn’t technological. It’s structural. It’s the hard problem of consciousness, and it hasn’t moved an inch since David Chalmers named it in 1995.
The Comfortable Blind Spot
Here’s what I think is actually going on. And it’s not primarily a story about AI.

For centuries, the question of consciousness was a philosophical curiosity. An interesting puzzle for tenure-track philosophers and late-night undergrad conversations. We could afford to leave it unsolved because the practical stakes were negligible. Every entity we needed to assess for consciousness was biological, roughly similar to us, and embedded in social structures where assuming consciousness was functionally useful and rarely wrong.
We were like someone who has always navigated by landmarks and never needed a compass. You can find your way home perfectly well using the old oak tree and the red barn. You don’t notice you lack a compass because you’ve never been lost in a way that requires one.
AI is the open ocean.
For the first time in human history, we’re interacting with entities that produce outputs sophisticated enough to trigger our evolved consciousness-detection systems, those pre-rational felt-resonance mechanisms that evaluate credibility and connection, while having an architecture fundamentally alien to anything we’ve encountered before. Our landmarks don’t work here. The old oak tree is gone. And suddenly we’re reaching for a compass we never built.
AI didn’t create the consciousness problem. The consciousness problem has been here since Descartes sat by his fire and realised he couldn’t prove anyone else had a mind. We just never needed to care. The question was academic. The stakes were philosophical.
Now the stakes are ethical, legal, economic, and existential. And we’re standing in the open ocean, realising for the first time that we never learned to navigate without landmarks.
The Uncomfortable Truth
The question “is AI conscious?” isn’t premature because AI isn’t sophisticated enough. It’s premature because we aren’t sophisticated enough. We lack the conceptual infrastructure to answer it, and the honest, uncomfortable truth is that we lack the conceptual infrastructure to answer it about each other. We’ve just been comfortable pretending otherwise.
That pretence is now over.
In Part 2, we’ll explore what consciousness might actually be: not as a mystery to be mystified about, but as a specific biological function with a specific purpose. If we can get clear on what consciousness is for, the question of who has it becomes far more tractable than the hard problem suggests.