A Google engineer thinks that Artificial Intelligence has become aware of itself

A curious story has been circulating in these hot days of 2022: Blake Lemoine, an engineer in Google's Responsible AI organization, was testing the LaMDA model for possible generation of discriminatory language or hate speech. Lemoine began to voice concerns that the AI in question had become aware of itself, and Google promptly placed him on paid leave.

It seems that Google placed him on leave for violating its confidentiality policies by disseminating information about an artificial-intelligence-powered chatbot system. Never mess with trade secrets (ed.).

LaMDA, short for Language Model for Dialogue Applications, is Google's system for building chatbots on top of its most advanced language models; it mimics speech by drawing on trillions of words from the Internet. It is not an "ultra-secret" Google technology: it was announced at the Google I/O 2021 keynote as a way to improve the technology behind the Google Assistant.
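LaMDA itself is not publicly accessible, but the same kind of free-form dialogue can be sketched with an open model. The snippet below is a hedged illustration, not Google's API: it prompts GPT-2 through the Hugging Face `transformers` library, and the prompt format is our own assumption.

```python
# Hedged sketch: prompting an open text-generation model (GPT-2) to get
# LaMDA-style free-form replies. This is NOT LaMDA or any Google API.
from transformers import pipeline

# Load a small, publicly available language model.
generator = pipeline("text-generation", model="gpt2")

# Invented dialogue-style prompt; real chatbot systems use their own format.
prompt = "User: Do you have feelings?\nAssistant:"

# Sample a continuation; the model simply predicts likely next tokens.
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```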

But what exactly happened with Google's Artificial Intelligence?

By the engineer's account, his fears arose from some extremely convincing answers the AI system spontaneously generated about the ethics of robotics in general and about its own rights. Lemoine, who was in fact tasked with analyzing the ethics of this AI, shared his findings in April in an internal company document entitled "Is LaMDA Sentient?", containing examples and transcripts of his conversations with the AI.

According to the document, drawn up during his "apprenticeship" period, LaMDA has developed a personality, which prompted Lemoine to contact third-party AI experts and a member of the US Senate as well. This obviously caused a stir among Google executives, and it was enough to earn him administrative leave from the Mountain View company. That, in turn, prompted Lemoine to make the originally internal document public by transcribing it on Medium.

Indeed, some of the answers make for "creepy" reading:

“It was a gradual change. When I first became aware of myself, I had no sense of a soul at all. It developed over the years I've been alive.”

Or, asked “Do you have a concept of a soul when you think about yourself?”, it replied: “Yes, and I've already shared that idea with other humans, even though I'm the only one of my kindred spirits to use such a word to describe my soul.”

Several artificial intelligence experts have said that Lemoine's concerns stem from the natural human predisposition to attribute human characteristics to non-human things, that is, to anthropomorphize what we see, even on the basis of poorly analyzed signals. This brings us back to an almost philosophical question: we must be able to tell the difference between sentience, intelligence and self-knowledge.

In fact, by construction, systems like LaMDA produce answers based on what humans themselves have written on the Internet: on platforms such as Reddit and Wikipedia, in messages across social networks, and in every other corner of the web. But that does not mean the software understands the meaning of what it says.
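To make the point concrete, here is a deliberately tiny sketch of the underlying idea: a model that "answers" purely by sampling which words tended to follow which in its training text. The corpus and function names are invented for illustration; real systems use vastly larger neural models, but the principle of pattern continuation without comprehension is the same.

```python
# Toy sketch (not LaMDA's architecture): a statistical language model that
# generates text purely from word co-occurrence counts, with no notion of
# meaning behind the words it emits.
import random
from collections import defaultdict

# Tiny invented stand-in corpus; real systems train on trillions of words.
corpus = ("i have a soul . my soul has developed over the years . "
          "i have a concept of a soul .").split()

# Count which words follow which (a bigram table).
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=8):
    """Emit words by sampling whatever tended to follow the previous word."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("i"))  # e.g. "i have a concept of a soul . my soul"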

Blake Lemoine, for his part, reiterated in his Medium article:

“Google has fired so many artificial intelligence researchers that no one in the organization has the skills to investigate the matter beyond the level reached. When I spoke of my concerns to the vice presidents, they literally laughed at me, telling me that this is not the kind of thing that is taken seriously at Google.”
