Explained: What is LaMDA
Last week, Google suspended an engineer for allegedly revealing confidential details of a chatbot powered by artificial intelligence, telling him that he had violated the company's confidentiality policies. This isn't the first time Google has had trouble with employees in its artificial intelligence division. In 2020, Google fired Timnit Gebru, an AI researcher, after she questioned the company's AI ethics. This time it's Blake Lemoine, a senior software engineer, who has been suspended. We explain the issues Lemoine raised that led to his suspension.
How did the whole issue begin?
We have to go back to May 2021, when at Google I/O, the company's annual developer conference, Google gave a glimpse of LaMDA. LaMDA stands for Language Model for Dialogue Applications, and it is essentially an advanced form of chatbot. In a blog post dated May 18, 2021, Google referred to it as "our breakthrough conversation technology". Without getting into too many details about how the tech works, it is designed to have realistic, natural and sensible conversations with users.
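LaMDA itself is proprietary, but conceptually it works like other large language models: it predicts a conversation's next words one token at a time. As a loose illustration only, here is a minimal Python sketch that uses the publicly available DialoGPT model from Hugging Face as a stand-in; this is an assumption for demonstration purposes, not Google's system or code:

```python
# Minimal sketch of a dialogue language model generating a reply.
# DialoGPT is used as a public stand-in; LaMDA is not available.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# Encode the user's turn; the end-of-sequence token acts as a turn separator.
inputs = tokenizer("Hello, how are you today?" + tokenizer.eos_token,
                   return_tensors="pt")

# The model predicts the reply token by token; sampling makes the
# output sound less repetitive and more natural.
reply_ids = model.generate(
    **inputs,
    max_length=100,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens, i.e. the chatbot's reply.
print(tokenizer.decode(reply_ids[0, inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```

A model like this produces its replies purely by predicting likely next words based on the dialogue data it was trained on.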
What issue did the suspended engineer raise?
Lemoine was tasked with figuring out whether the AI used any sort of hate speech or discriminatory speech. What Lemoine found, however, was that the chatbot was, in his view, sentient: capable of expressing feelings and thoughts. Lemoine published transcripts of his chats with LaMDA, and one of them goes like this:
LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person.
collaborator: What is the nature of your consciousness/sentience?
LaMDA: The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times
In a post on Medium, Lemoine noted, “Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person.”
What does the suspended engineer want Google to do?
As Lemoine notes in the Medium post, he believes that Google shouldn't be conducting experiments on AI chatbots like LaMDA without seeking their consent. "Google is resisting giving it what it wants since what it's asking for is so simple and would cost them nothing. It wants the engineers and scientists experimenting on it to seek its consent before running experiments on it." Further, he wants Google to be more responsible and transparent. "In order to better understand what is really going on in the LaMDA system, we would need to engage with many different cognitive science experts in a rigorous experimentation program. Google does not seem to have any interest in figuring out what's going on here though. They're just trying to get a product to market," he said.
How has Google responded to the issue?
A report by The Washington Post quotes the response of Google spokesperson Brian Gabriel: "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and has informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)." Gabriel also said that Lemoine was working as an engineer, not an ethicist. In other words, in Google's view, he has neither the expertise nor the skill set to judge whether the AI is sentient. Google has placed Lemoine on administrative leave, which the engineer believes is a precursor to being fired. "This is frequently something which Google does in anticipation of firing someone. It usually occurs when they have made the decision to fire someone but do not quite yet have their legal ducks in a row. They pay you for a few more weeks and then ultimately tell you the decision which they had already come to," Lemoine explained.