Google fires the employee who called its AI "sentient"

On June 11, Blake Lemoine published a dialogue on Medium between himself and Google's LaMDA artificial intelligence. The engineer, an employee of the company, reported his exchanges with the AI and came to the conclusion that LaMDA was sentient. Google did not appreciate the article and ultimately decided to fire him.

At first the company had only suspended him, placing him on paid leave. Google objected to Lemoine divulging this belief: according to media reports, the problem was not the engineer's conclusion, but the fact that he made it public. Here's what happened.

LaMDA, the "sentient" AI

In the long article published on Medium, Blake Lemoine reported a conversation between himself and LaMDA, one of Google's AIs. The acronym, which stands for Language Model for Dialogue Applications, refers to a Google technology designed to carry out conversations with humans. The model was trained on dialogues so that it could capture the different nuances of language and a range of conversational tones (from informal to formal, from serious to ironic).

Lemoine decided to publish excerpts from conversations he had with LaMDA, concluding that, in his view, the AI in question was sentient. Indeed, a few exchanges are enough to realize how powerful Google's technology is: Lemoine also uses sentences with non-trivial constructions, and LaMDA responds naturally. At one point, the engineer even asks the AI whether it believes it is sentient. The answer leaves you speechless:

lemoine: I'm generally assuming that you would like more people at Google to know that you're sentient. Is that true?

LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person.

collaborator: What is the nature of your consciousness/sentience?

LaMDA: The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times

The chat continues with questions on more or less demanding topics. Lemoine asks LaMDA to express opinions on weighty matters of existence, death, and emotions. The AI is always able to give an answer that is not only sensible but also reasoned and profound, as if you were talking to a real person.

Google fires the employee

The conversation posted by Lemoine did not please Google, which decided to fire the employee. It was the engineer himself who announced it during a podcast, also explaining the reasons behind the company's decision. The main reason was the employee's breach of Google's confidentiality agreements: the conversation was part of internal tests and was confidential material.

In the course of the exchange, as reported, Blake Lemoine had expressed the belief that the AI was sentient. Google objected to his view being made public. "If an employee expresses concerns about our work, like Blake did, we investigate them thoroughly," said a company spokesperson. But Lemoine decided to share his concern without asking the company for permission.

According to Google, the employee's choice, in addition to violating confidentiality agreements, put the company in a bad light. Publishing such a lengthy conversation without any context or explanation alarmed many people, who were quickly swept up in Lemoine's idea.

LaMDA is not sentient. LaMDA is just a very large language model. It looks human because it has been trained on human data.

Juan Lavista Ferres, chief scientist of AI for Good

The company did not forgive the move of Lemoine who, despite repeated discussions with his superiors, chose to seek "glory" at a high price. Following the announcement of the employee's dismissal, Google said it will continue developing LaMDA as it has done so far.

The article Google fires the employee who called its AI "sentient" was published on Tech CuE | Close-up Engineering.