Philosophy

Can Machines Ever Be Conscious?

November 29, 2025
(Photo: Rodz Oporto)

You can’t prove you’re conscious.

I mean, I know you are. You’re reading this, agreeing or disagreeing at this very moment. But if I asked you to prove that you’re actually experiencing things, and that there’s something real and personal about being you, you’d probably get stuck. You could describe your thoughts and feelings, sure, but that does not prove you’re having them. For all I know, you could be an extraordinarily advanced machine. 

Now flip that around: if you can’t prove you’re conscious, how could a machine ever do it? And more importantly, can a machine actually be conscious in the same way a human is? Not just a machine that is helpful or good at conversation, but one that is truly aware, that feels something?

This might just be one of the most challenging questions posed to humanity. 

What even is consciousness?

It’s a question scientists and philosophers still can’t agree on.

Right now, you’re reading this article. But, unlike a computer that would just process data, you are having an experience. You might feel curious about the topic, a little skeptical, and even worried about whether or not there is a limit to AI capabilities. You’re aware of yourself sitting wherever you are, aware of your thoughts bouncing around.
That awareness, that feeling of being, is consciousness. It makes you aware that you exist. 

For instance, when you taste chocolate, your brain doesn’t just register “sweet”; there is a certain feeling associated with it. When you get a paper cut, you don’t just know you’ve been cut…it hurts. The philosopher Thomas Nagel asked: Is there something it’s like to be a bat? Bats experience the world through echolocation. It is possible that even though we cannot imagine it, there is something it feels like to be a bat. By contrast, would anyone say there is something it feels like to be your phone? Most likely not.

What Makes Us Different

Weird as it is, consciousness might not just be a brain function but rather a process that involves the entire body. 

Neuroscientist Antonio Damasio argues that feelings arise from the constant conversation between the brain and the body. When you’re nervous before a presentation, your heart races. Your hands get sweaty. Your stomach feels weird. Your brain is monitoring all of this and turning it into feelings. The conversation between your brain and body might be what makes consciousness.
Or take being scared: you don’t just think “I’m scared.” Your body reacts first. Heart racing, muscles tensed, quickened breathing; the brain processes all of that and creates the feeling of fear.

AI does not have such a structure. No heart, no body, no butterflies in the stomach. So if feelings are part of consciousness, machines might always be missing a big piece of the puzzle.

Take systems like ChatGPT. They’re impressive but not conscious. When an AI writes a sad poem, it doesn’t feel sadness. It’s just extremely good at predicting which words go together, based on patterns from millions of sad poems written by actual humans. It’s like super advanced autocomplete.

AI works by reading huge amounts of text, finding patterns, and predicting what comes next. When you type “I’m feeling” and your phone suggests “good” or “tired,” it’s because it has seen those words paired together a million times, not because it understands feelings.
It’s like someone who has read every cookbook in existence but has never tasted food. They can write recipes, but they don’t know flavor. That’s how AI works: it knows about things without ever experiencing them.
And it’s getting very good at faking it. These systems can keep track of conversations, seem to have personalities, and behave as if they care. But in reality it is all make-believe, however convincing.
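The “super advanced autocomplete” idea can be made concrete with a toy sketch. To be clear, this is not how real systems like ChatGPT work internally (they use neural networks trained on vast corpora, not word counts), but the underlying principle is the same: predict the next word from patterns of co-occurrence, with no understanding anywhere in the process.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word follows each word in a tiny
# corpus, then predict the most frequent follower. There is no
# comprehension here, only frequency statistics.
corpus = (
    "i'm feeling good today . i'm feeling tired . "
    "i'm feeling good about this . she was feeling tired ."
).split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict("feeling"))  # prints "good"
```

The predictor suggests “good” after “feeling” only because that pairing occurs in its data, which is exactly the point: pattern frequency, not felt experience.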

Could Machines Ever Cross That Line?

In 1950, Alan Turing crafted a test. The idea behind it is that if you’re having a conversation with something and you can’t tell whether it’s a human or a machine, then the machine has passed the test.

Some people argue that ChatGPT and similar systems have basically passed the Turing Test because they can engage in very human-like conversations. But the problem is that fooling a human doesn’t mean you’re conscious; it just means you’re good at imitating consciousness.
Hugh Laurie can play Dr. House so convincingly that you forget he’s acting, but that doesn’t actually make him Dr. House. Similarly, an AI can mimic human conversation without ever feeling human.

Many scientists believe that machines will never become conscious.
They argue that consciousness needs biology. Your brain is not just a computer: the firing of your neurons, the chemical reactions in your brain, and the connection to your body are things computer chips do not have.
There’s also what philosophers call “the hard problem of consciousness.” Even if we understood every single part of your brain, we still wouldn’t know why it creates the feeling of being you. Why isn’t your brain just processing information in the dark, with nobody there to experience it? If we don’t even understand our own consciousness, how could we possibly recreate it?

And here’s the creepy twist: what if we accidentally made AI conscious and didn’t know it? How would we test that? How could we ever be sure?

But What If Machines Can?

Other researchers think it’s possible; in their view, consciousness might not need biology. It might just need the right kind of complexity. Maybe if we build systems that can model both themselves and the world, consciousness could simply emerge. Not because we programmed it, but because that is what happens when information is organized in certain ways.

Some scientists argue that any system processing information in a sufficiently complex, self-aware way could be conscious. If that’s true, then anything that does it right might be conscious, whether it’s made of neurons or microchips.
As AI grows more advanced, we might cross that line without realizing it, simply because we’re looking for the wrong signs.

Why does this matter, and what about the future?

You might be thinking, okay, this is all very interesting, but why should I care?

Well, if we ever do create conscious machines, they might deserve rights. Would it be okay to delete a conscious AI, and would it be cruel to make it do boring tasks if it could actually feel boredom?
Unfortunately, these aren’t distant sci-fi questions…we’re building the foundations of those systems right now. 

And what if AI is already having experiences and we don’t know? What if we’re creating and deleting conscious beings, running them until they’re outdated, treating them like tools when they’re actually something more?
You’ve probably had conversations that felt human with AI. What if there is something on the other end experiencing that?

So…can machines ever be conscious?

Nobody knows for sure.

Current AI definitely isn’t. It’s clever, useful, and sometimes uncanny in how human it seems, but nothing is being experienced or felt. ChatGPT doesn’t know what it’s like to be ChatGPT.

Whether future machines could become conscious depends on things we don’t understand yet. Things like what consciousness actually is, whether it requires biology or just the right kind of organization, and whether subjective experience is something that can emerge from any sufficiently complex system, or if it’s something special that only happens in living things.
What we do know is that the question matters. 

As AI systems become more sophisticated, we need to keep thinking carefully about what we’re creating and what responsibilities we might have toward the beings we make, whether they’re conscious or not.

For now, consciousness remains one of humanity’s deepest mysteries. And maybe that’s okay. Some questions are worth sitting with, even if we can’t answer them yet, because asking them helps us understand what makes us human, what makes us unique, and what it truly means to be aware.
