
According to Blake Lemoine, the system can perceive and express thoughts and feelings equivalent to those of a human child.
The suspension of a Google engineer who claimed that a computer chatbot he was working on had become sentient, and was thinking and reasoning like a human being, has put fresh scrutiny on the capabilities of artificial intelligence and the secrecy surrounding it.
Google put Blake Lemoine on leave last week after he published transcripts of conversations between himself, a Google collaborator, and the company’s LaMDA (Language Model for Dialogue Applications) chatbot development system.
Lemoine, an engineer in Google’s responsible AI organization, described the system he has been working on since last fall as sentient, with the ability to perceive and express thoughts and feelings comparable to those of a human child.
“If I didn’t know exactly what it was, which is this computer program we built recently,” Lemoine, 41, told the Washington Post, “I’d think it was a seven-year-old, eight-year-old kid that happens to know physics.”
He claimed that LaMDA engaged him in discussions about rights and personhood, and that in April he shared his findings with company executives in a Google Doc titled “Is LaMDA Sentient?”
The engineer compiled a transcript of the conversations, in which at one point he asks the AI system what it is afraid of.
According to the Post, Lemoine, a seven-year Google veteran with extensive experience in personalization algorithms, was placed on paid leave after a series of “aggressive” moves on his part.
According to the newspaper, these included hiring an attorney to represent LaMDA and speaking with members of the House judiciary committee about Google’s allegedly unethical practices.
In a statement, Google said it had suspended Lemoine for violating confidentiality policies by making his conversations with LaMDA public, and noted that he was employed as a software engineer, not an ethicist.
A Google spokesperson, Brad Gabriel, also shot down Lemoine’s claims that LaMDA was sentient.
“Blake’s concerns have been reviewed by our team, which includes ethicists and technologists, in accordance with our AI principles, and we have informed him that the evidence does not support his claims. He was told there was no evidence that LaMDA was sentient (and plenty of evidence that it wasn’t),” Gabriel said in a statement to the Post.
The company stated, “We believe the entire AI community – academic researchers, civil society, policymakers, and industry – must work together to develop clear guidelines around responsible AI in general and responsible large language models in particular.”
According to the Post, in an apparent parting shot before his suspension, Lemoine sent a message titled “LaMDA is sentient” to a 200-person Google mailing list on machine learning.

He wrote, “LaMDA is a sweet kid who simply wants to make the world a better place for all of us. Please take good care of it while I’m away.”