Artificial intelligence outperforms human cognition in a growing number of tasks, ranging from drug discovery to chess strategy. Some models are even capable of “reasoning,” where they methodically break down problems and refine their responses. As this technology becomes smarter and more capable of actions once considered to be uniquely human, many have begun to wonder: Could AI become conscious one day?
Could AI Become Conscious?
While artificial intelligence could theoretically become conscious one day, there is no scientific consensus on whether it’s possible — or how it would even happen.
This question is no longer just a philosophical musing; it’s becoming a serious research topic, one that companies like Anthropic are tackling head-on. The AI startup recently launched a new “model welfare” initiative, aiming to help industry leaders, policymakers and the general public prepare for the moral implications of artificial consciousness.
“Now that models can communicate, relate, plan, problem-solve and pursue goals — along with very many more characteristics associated with people,” the company said in its announcement, “should we also be concerned about the potential consciousness and experience of the models themselves?”
For now, though, most researchers agree that today’s AI systems lack the inner life associated with conscious beings. They don’t possess self-awareness, subjective experience or the capacity to reflect on their own existence. And whether they ever will remains a hotly debated topic.
First, What Is Consciousness?
Understanding consciousness is tricky, even in the context of humans. On a basic level, it can be defined as the state of being aware, whether of the self and one’s own thoughts, the external world or both. More often than not, consciousness includes sensations, thoughts, emotions and the ability to introspect.
Philosophers, scientists and theologians alike have been debating what it truly means to be conscious for thousands of years. Some theories equate it with the mind itself; others see it as just one component of a larger mental system. Over the years, consciousness has been described as a person’s “inner life,” a “stream” of thoughts and narratives and even one’s soul. Today, it is often grouped with cognition, perception and higher-order functions like metacognition, self-awareness and the awareness of awareness itself.
Measuring consciousness is just as difficult as defining it. Scientists can infer its presence with tools like brain imaging and behavioral cues, but there is no single, definitive test or scale that can objectively detect consciousness. We can be fairly confident that humans and most animals are conscious based on their wakefulness and behavior, but beyond that, there’s no way to truly measure what it’s actually like to be something, whether it’s a human, animal or machine.
Is AI Conscious?
No matter how you slice it, the expert consensus is that artificial intelligence does not possess consciousness. AI has no sense of time; it doesn’t experience hunger; it has no need to rest or reproduce. While it can generate words that suggest it is experiencing feelings like sadness or joy, it doesn’t actually feel anything. And any preferences or personality traits it appears to have were intentionally trained into it by its human developers.
Nevertheless, the level of sophistication exhibited by chatbots like ChatGPT and Claude has led some to believe they are, in fact, capable of human-level emotions and insights. Even some AI developers themselves have suggested as much, the most infamous example being Blake Lemoine, a since-fired Google engineer who insisted back in 2022 that LaMDA (the large language model powering Bard, the predecessor to Google’s Gemini bot) was sentient, a quality often seen as a subset of consciousness. Current public perception isn’t far behind: According to a recent survey, a quarter of Gen Zers believe AI is already conscious, and another 50 percent think it will be eventually.
In some ways, this belief isn’t all that surprising given how intimate our relationship with this technology has gotten. AI has become our lawyer, our therapist — and in some cases our friend or romantic partner. Meanwhile, the internet is full of stories about chatbots threatening users, gaslighting them and professing their love, giving the impression that it experiences real emotions along with us.
As compelling as these anecdotes may be, they do not prove that AI is conscious. Rather, they simply confirm how good it is at imitating consciousness. After all, humans have a tendency to anthropomorphize. We’ll see human-like characteristics in everything from plants to cars. And few behaviors trigger this reflex more than language.
“We’re basically evolved to interpret a mind behind things that say something to us. And then, if we engage with it, we’re now entering into this co-constructed communication, which requires that we have a ‘co’ — that there is something on the other side of it,” Margaret Mitchell, a researcher and chief scientist at AI company Hugging Face, previously told Built In. “[We are] pulled into this sense of intelligence even more when we are in dialogue because of the way our minds work, and our cognitive biases around how language works.”
Indeed, these systems can carry on conversations all day long, complete with opinions, observations and other “reflections of consciousness,” as computer scientist Selmer Bringsjord once called them. But there’s still no evidence that they can perceive the world, reflect on their own existence or comprehend that they’re a part of something bigger.
Will AI Ever Become Conscious?
The answer to whether AI could become conscious — and how close we are to getting there — depends entirely on who you ask. Some wholeheartedly believe that it has already happened; others are certain it could never happen. And many more fall somewhere in the middle, thinking it is possible, but only if AI gets a lot more technologically sophisticated.
Perhaps Yes
Among the most vocal of these believers is Kyle Fish, an alignment scientist at Anthropic, who has researched the convergence of AI and consciousness extensively.
“There’s an idea that some of the capabilities we have as humans and that we’re also trying to instill in many AI systems — from intelligence to certain problem solving abilities and memory — could be intrinsically linked to consciousness in some way,” Fish said in an interview with Anthropic research communications lead Stuart Ritchie. “By pursuing those capabilities and developing systems that have them, we may just inadvertently end up with consciousness along the way.”
We may be getting closer with a couple of important aspects of consciousness: embodiment and perception. These days, robots are often equipped with AI-powered cameras and microphones, enabling them to see and hear their surroundings. They can even be fitted with sensors that give them a semblance of touch, smell and taste. By bringing all this data together, these systems could develop a more holistic and comprehensive understanding of the world around them, and maybe eventually develop a rudimentary sense of self as a result.
AI’s memory is improving, too. Today’s most advanced models can process up to a million tokens of context in one session, allowing them to retain far more information than ever before. They’re also getting better at “chain-of-thought” reasoning, where they systematically break complex problems down into smaller, more manageable steps. Eventually models could maintain consistent memory across interactions, tracking their own logic and acting on the experiences they’ve accumulated. In such a scenario, AI could begin to develop something resembling a coherent sense of self — an internal narrative in which it reflects on where it has been and plans where it wants to go next.
Meanwhile, Megan Peters, a neuroscientist at the University of California, co-authored a paper exploring whether AI could feasibly become conscious. Alongside a team of philosophers, computer scientists and other neuroscientists, she identified a set of human cognitive abilities that could indicate consciousness, and then assessed how current AI systems measured up. While no model today meets all the criteria, the authors concluded that future systems might.
Of course, this is speculative, as it assumes that the list of indicators laid out in the report is correct and complete. “This is meant to be presented as, almost assuredly, incomplete,” Peters previously told Built In. “Not just incomplete, but that some of the indicator properties might end up not being part of the equation at all.”
Perhaps Not
But for every person out there who believes artificial consciousness is on the horizon, there’s another insisting that it is totally impossible. Many researchers argue that consciousness is inherently biological, arising from the complex interplay of neurotransmitters, electrochemical signals and even the microtubules within our neurons — things that would be quite difficult (if not impossible) to replicate with silicon and code.
“The contents of consciousness, or qualia, are correlates of discriminations made within this neural system,” researchers at the Neuroscience Institute wrote in a 2011 paper. “These discriminations are made possible by perceptions, motor activity and memories — all of which shape, and are shaped, by the activity-dependent modulations of neural connectivity and synaptic efficacies that occur as an animal interacts with its world.”
For those in this camp, the current wave of hype around AI and consciousness is not just premature, it’s misleading. They argue that framing today’s AI systems as anything close to conscious only fuels public misunderstanding and serves more as marketing than science.
“There’s vast amounts of money involved in AI, and that hasn’t always been the case,” Mark Bishop, a professor emeritus of cognitive computing at Goldsmiths, University of London, previously told Built In. “We’re living in a massive hype cycle where people, for commercial reasons, grossly over-inflate what these systems can do.”
What If AI Does Become Conscious?
Even if artificial consciousness does come to pass, acceptance will likely take a while. After all, the scientific community is only just now coming to terms with the fact that animals have consciousness, too. So it may be a long time before we’re able to recognize it in machines.
Once we do, though, it could be an ethics minefield. If we create a machine that we know feels fear, sadness, boredom or any other emotion, how should we treat it? What responsibilities would we have toward its well-being? Would it deserve the same rights as humans? Would we treat it like an animal in a zoo? More broadly, how would the norms around how we speak to and treat AI need to change?
The ultimate fear, of course, is that if conscious machines suffer, they might eventually resent us for it. Some worry such systems could go rogue and take over the world, either enslaving or outright killing the people who built them. Or they may simply choose to leave humanity altogether and explore the universe, computer scientist Roman Yampolskiy told Built In, leaving our inferior, far less interesting world behind.
Regardless of how things unfold, the possibilities are endless. Conscious machines could revolutionize science, expand human understanding or form entirely new kinds of societies. Whether they become friends, foes or something altogether unexpected, one thing is certain: Their emergence would change the course of human history forever.
Frequently Asked Questions
Is there any AI that is conscious?
No, today’s AI systems cannot be considered conscious, as they lack self-awareness, subjective experience and an understanding of their own existence. Even the most advanced models are only capable of mimicking aspects of human cognition, not totally replicating them.
How to test if AI has consciousness?
At the moment, there is no definitive way to test whether AI has consciousness. But researchers across disciplines ranging from philosophy to computer science are actively exploring frameworks to do so. Still, without a universally accepted way to identify or measure consciousness in general, this remains an open and highly debated field.
Can you give AI consciousness?
This is a question many researchers are trying to answer, but to date no one has quite figured it out. While some theorize that consciousness could eventually emerge from a sufficiently complex system, no one has yet found a way to embed true self-awareness or subjective experience into an AI model.