[For decades now, the perception of a computer as a human-like social actor rather than just a technology following its programming has been considered an illusion. And until recently it seemed clear that the perception of an AI as more than a technology was too. But as this story from Science reports, those distinctions become less clear as the technology evolves. Scholars across disciplines are now developing methods and measures to help us separate effective mimicry that evokes medium-as-social-actor presence from machines that have actually attained consciousness. Even defining the key terms is challenging (e.g., consciousness is not the same as sentience – see DeGrazia, 2020). –Matthew]
[Image: Credit: Mark Garlick/Science Photo Library via Getty Images]
If AI becomes conscious, how will we know?
Scientists and philosophers are proposing a checklist based on theories of human consciousness
By Elizabeth Finkel
August 22, 2023
In 2022, Google engineer Blake Lemoine made headlines—and got himself fired—when he claimed that LaMDA, the chatbot he’d been testing, was sentient. Artificial intelligence (AI) systems, especially so-called large language models such as LaMDA and ChatGPT, can certainly seem conscious. But they’re trained on vast amounts of text to imitate human responses. So how can we really know?
Now, a group of 19 computer scientists, neuroscientists, and philosophers has come up with an approach: not a single definitive test, but a lengthy checklist of attributes that, together, could suggest but not prove an AI is conscious. In a 120-page discussion paper posted as a preprint this week, the researchers draw on theories of human consciousness to propose 14 criteria, and then apply them to existing AI architectures, including the type of model that powers ChatGPT.
None is likely to be conscious, they conclude. But the work offers a framework for evaluating increasingly humanlike AIs, says co-author Robert Long of the San Francisco–based nonprofit Center for AI Safety. “We’re introducing a systematic methodology previously lacking.”
Adeel Razi, a computational neuroscientist at Monash University and a fellow at the Canadian Institute for Advanced Research (CIFAR) who was not involved in the new paper, says that is a valuable step. “We’re all starting the discussion rather than coming up with answers.”
Until recently, machine consciousness was the stuff of science fiction movies such as Ex Machina. “When Blake Lemoine was fired from Google after being convinced by LaMDA, that marked a change,” Long says. “If AIs can give the impression of consciousness, that makes it an urgent priority for scientists and philosophers to weigh in.” Long and philosopher Patrick Butlin of the University of Oxford’s Future of Humanity Institute organized two workshops on how to test for sentience in AI.
For one collaborator, computational neuroscientist Megan Peters at the University of California, Irvine, the issue has a moral dimension. “How do we treat an AI based on its probability of consciousness? Personally this is part of what compels me.”
Enlisting researchers from diverse disciplines made for “a deep and nuanced exploration,” she says. “Long and Butlin have done a beautiful job herding cats.”
One of the first tasks for the herd was to define consciousness, “a word full of traps,” says another member, machine learning pioneer Yoshua Bengio of the Mila-Quebec Artificial Intelligence Institute. The researchers decided to focus on what New York University philosopher Ned Block has termed “phenomenal consciousness,” or the subjective quality of an experience—what it is like to see red or feel pain.
But how does one go about probing the phenomenal consciousness of an algorithm? Unlike a human brain, it offers no signals of its inner workings detectable with an electroencephalogram or MRI. Instead, the researchers took “a theory-heavy approach,” explains collaborator Liad Mudrik, a cognitive neuroscientist at Tel Aviv University: They would first mine current theories of human consciousness for the core descriptors of a conscious state, and then look for these in an AI’s underlying architecture.
To be included, a theory had to be based on neuroscience and supported by empirical evidence, such as data from brain scans during tests that manipulate consciousness using perceptual tricks. It also had to allow for the possibility that consciousness can arise regardless of whether computations are performed by biological neurons or silicon chips.
Six theories made the grade. One was the Recurrent Processing Theory, which proposes that passing information through feedback loops is key to consciousness. Another, the Global Neuronal Workspace Theory, contends that consciousness arises when independent streams of information pass through a bottleneck to combine in a workspace analogous to a computer clipboard.
Higher Order Theories suggest consciousness involves a process of representing and annotating basic inputs received from the senses. Other theories emphasize the importance of mechanisms for controlling attention and the need for a body that gets feedback from the outside world. From the six included theories the team extracted its 14 indicators of a conscious state.
The researchers reasoned that the more indicators an AI architecture checks off, the more likely it is to possess consciousness. Mila-based machine learning expert Eric Elmoznino applied the checklist to several AIs with different architectures, including those used for image generation such as DALL-E 2. Doing so required making judgment calls and navigating gray areas. Many of the architectures ticked the box for indicators from the Recurrent Processing Theory. One variant of the type of large language model underlying ChatGPT came close to also exhibiting another feature, the presence of a global workspace.
Google’s PaLM-E, which receives inputs from various robotic sensors, met the criterion “agency and embodiment.” And, “If you squint there’s something like a workspace,” Elmoznino adds.
DeepMind’s transformer-based Adaptive Agent (AdA), which was trained to control an avatar in a simulated 3D space, also qualified for “agency and embodiment,” even though it lacks the physical sensors that PaLM-E has. Because of its spatial awareness, “AdA was the most likely … to be embodied by our standards,” the authors say.
Given that none of the AIs ticked more than a handful of boxes, none is a strong candidate for consciousness, although Elmoznino says, “It would be trivial to design all these features into an AI.” The reason no one has done so is that “it is not clear they would be useful for tasks.”
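To make the checklist logic concrete, here is a minimal sketch of an indicator-style tally in Python. It is purely illustrative and not the authors’ actual instrument: the indicator wordings are abbreviated paraphrases of the theory-derived properties mentioned above, the example judgments for a hypothetical transformer-based model are invented, and the real assessment involved graded judgment calls rather than simple true/false ticks.

```python
# Illustrative sketch only: an indicator-style checklist tally.
# Indicator names paraphrase theory-derived properties described in the article;
# the satisfied/unsatisfied judgments for this hypothetical system are invented.

from dataclasses import dataclass


@dataclass
class Indicator:
    theory: str        # theory of consciousness the indicator comes from
    description: str   # property to look for in the AI's architecture
    satisfied: bool    # judgment call: does the architecture exhibit it?


def assess(indicators: list[Indicator]) -> None:
    """Tally satisfied indicators; more ticks suggest (but never prove) consciousness."""
    ticked = [i for i in indicators if i.satisfied]
    print(f"{len(ticked)} of {len(indicators)} indicators satisfied")
    for i in ticked:
        print(f"  [{i.theory}] {i.description}")


# Hypothetical judgments for a transformer-based language model.
checklist = [
    Indicator("Recurrent Processing Theory",
              "feedback loops pass information over input representations", True),
    Indicator("Global Workspace Theory",
              "independent streams combine via a bottleneck into a shared workspace", False),
    Indicator("Higher Order Theories",
              "the system represents and annotates its own basic sensory inputs", False),
    Indicator("Embodiment / agency",
              "agency and a body receiving feedback from the outside world", False),
]

assess(checklist)
```

Even in this toy form, the tally only shows where evidence accumulates; it cannot prove consciousness, which is the same caveat the authors attach to their own framework.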
The authors say their checklist is a work in progress. And it’s not the only such effort underway. Some members of the group, along with Razi, are part of a CIFAR-funded project to devise a broader consciousness test that can also be applied to organoids, animals, and newborns. They hope to produce a publication in the next few months.
The problem for all such projects, Razi says, is that current theories are based on our understanding of human consciousness. Yet consciousness may take other forms, even in our fellow mammals. “We really have no idea what it’s like to be a bat,” he says. “It’s a limitation we cannot get rid of.”