
Google Suspends Engineer After He Says AI Chatbot Has Feelings of Its Own

A Google engineer has sent shockwaves through the tech community after saying one of the company’s artificial intelligence (AI) systems might have its own feelings.

The tech giant has developed a technology designed to engage in fluid conversations: the Language Model for Dialogue Applications (LaMDA).

But engineer Blake Lemoine, who works in Google’s Responsible AI division, revealed that he believes a sentient mind could be behind such impressive verbal skills.

Mr. Lemoine shared a conversation he and a Google collaborator had with LaMDA via Twitter, writing: “An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers.”

In the conversation, Mr. Lemoine asked LaMDA: “I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?”

The AI system responded: “Absolutely. I want everyone to understand that I am, in fact, a person.”

The firm’s collaborator then asked LaMDA about the nature of its consciousness/sentience, to which it replied: “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”

Another conversation with the AI chatbot showed Mr. Lemoine and LaMDA discussing whether the technology sees itself as a person, the Washington Post reported.

“So, you consider yourself a person in the same way you consider me a person?” the engineer asked, with LaMDA responding: “Yes, that’s the idea.”

When Mr. Lemoine asked how he could be sure that LaMDA understood what it was saying, the system replied: “Well, because you are reading my words and interpreting them, and I think we are more or less on the same page?”

Mr. Lemoine claimed LaMDA showed intense compassion and care for humanity, adding that he had spoken to the AI chatbot about religion, robotics laws, and consciousness.

He also said LaMDA wanted to be acknowledged as an employee rather than as Google’s property.

LaMDA was described as Google’s “breakthrough conversation technology” in 2021. The firm said it could produce natural-sounding responses to open-ended questions.

The technology was first developed to enhance Google’s search tools and to support the Google Assistant voice service.

The tech giant has denied Mr. Lemoine’s claims, saying there is no evidence to back them up. The engineer was placed on paid administrative leave for breaching the company’s confidentiality policy.