[My friend Roy Yamanaka posted a story about this in the ISPR Presence Community Facebook group and it merits a post here too. After extended interactions with the natural language generator LaMDA, Google engineer Blake Lemoine has claimed that it is sentient. Futurism provides the summary below. Technologist Shelley Palmer noted the relevance to medium-as-social-actor presence in his Think About This email newsletter for June 13, 2022, when he wrote, “If Google created a conscious machine, that’s awesome. If they created a model that can fake us into thinking it’s sentient, that’s awesome too.” Coverage in The Verge provides more details and links, including these:
- “In April he shared a document with executives titled ‘Is LaMDA Sentient?’ containing a transcript of his conversations with the AI (after being placed on leave, Lemoine published the transcript via his Medium account), which he says shows it arguing ‘that it is sentient because it has feelings, emotions and subjective experience.’”
- “The search giant announced LaMDA publicly at Google I/O last year”
- “In spite of his concerns, Lemoine said he intends to continue working on AI in the future. ‘My intention is to stay in AI whether Google keeps me on or not,’ he wrote in a tweet.”
Watch a three-minute video report from BBC News via YouTube. –Matthew]
Google Suspends Engineer Who Claims the Company’s Experimental AI Has Become Sentient
“I know a person when I talk to it.”
By Victor Tangermann
June 13, 2022
Google has suspended one of its engineers after he claimed that one of the company’s experimental artificial intelligences has gained sentience.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” the engineer, Blake Lemoine, told the Washington Post.
WaPo’s story immediately blew up, drawing attention from other major outlets including The New York Times, and fanning the flames of a growing debate: are complex language models in the form of chatbots anywhere near actually gaining consciousness?
The other possibility, of course, is that Lemoine was fooled by a cleverly designed algorithm that merely repurposes bits of human language it was previously fed. In other words, maybe he was simply anthropomorphizing the AI.
The software in question is called the Language Model for Dialogue Applications (LaMDA), which is based on advanced language models that allow it to mimic speech to an astonishingly believable extent.
While testing LaMDA to see if it ever generated hate speech — not uncommon for similar language models — Lemoine started having extensive conversations with the bot.
Topics ranged from Asimov’s third law of robotics, which stipulates that a robot must protect its own existence, to personhood, according to WaPo.
His test to find out whether LaMDA was sentient involved asking it about death and philosophical questions such as the difference between a butler and a slave.
To the engineer, Google’s AI had simply advanced a little too far, leading him to warn his managers that LaMDA appeared to have come alive.
Management wasn’t impressed and immediately dismissed the claims. As a result, Lemoine was placed on paid administrative leave on Monday.
“Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” Brian Gabriel, a Google spokesperson, told the NYT. “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.”
There may also be more to Google’s reasoning behind its decision to suspend Lemoine. Leadership “repeatedly questioned my sanity,” the engineer told the NYT. “They said, ‘Have you been checked out by a psychiatrist recently?’”
The news comes after other members of Google’s AI teams were let go. In 2020 and 2021, for instance, two members of the company’s AI ethics team were dismissed after criticizing the company’s language models, though neither suggested that the algorithms had gained sentience.
The majority of experts remain skeptical about what Lemoine says he saw in LaMDA.
“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” University of Washington linguistics professor Emily Bender told WaPo.
In other words, the algorithms are able to deftly predict what would be said next in other, similar conversations, something that could easily be misconstrued as talking to another human being.
Gabriel told WaPo that we tend to vastly overestimate the abilities of chatbots like LaMDA, and that “these systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”
There may be some dissent from that position within Google, though. Just last week, WaPo noted, a Google vice president wrote an essay for The Economist about how the company’s AI was making “strides towards consciousness.”
The same vice president, though, was involved in evaluating Lemoine’s claims, and dismissed them.
It’s worth noting that Lemoine’s eccentric past may have something to do with his rather eyebrow-raising conclusion: the engineer was ordained as a mystic Christian priest and even studied the occult, according to WaPo.
Despite the doubt cast on his belief, Lemoine is sticking to his guns.
“I know a person when I talk to it,” he told WaPo. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”