Explained: Why a senior Google engineer claimed its AI-based chatbot LaMDA is ‘sentient’

A senior engineer at Google claimed that the company’s artificial intelligence-based chatbot Language Model for Dialogue Applications (LaMDA) had become “sentient”. The engineer, Blake Lemoine, published a blog post labelling LaMDA a “person” after having conversations with the AI bot on subjects like religion, consciousness and robotics. The claims have spurred a debate on the capabilities and limitations of AI-based chatbots and whether they can actually hold conversations akin to those of human beings.

Here is an explainer on Google’s LaMDA, why its engineer believed it to be sentient, why he has been sent on leave, and what other AI-based language tools are capable of:

What is LaMDA?

Google first announced LaMDA at its flagship developer conference I/O in 2021 as its generative language model for dialogue applications, one intended to let Google Assistant converse on virtually any topic. In the company’s own words, the tool can “engage in a free-flowing way about a seemingly endless number of topics, an ability we think could unlock more natural ways of interacting with technology and entirely new categories of helpful applications”.

In simple terms, this means that LaMDA can hold a discussion based on a user’s inputs thanks to its language-processing models, which have been trained on large amounts of dialogue. Last year, the company showcased how a LaMDA-style model would allow Google Assistant to have a conversation about which shoes to wear while hiking in the snow.
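LaMDA itself has not been released publicly, but the basic mechanics of a dialogue-trained language model, taking a user’s message and generating a plausible reply, can be sketched with openly available tools. The minimal Python sketch below assumes Hugging Face’s transformers library and uses Microsoft’s DialoGPT purely as a stand-in, since LaMDA’s weights are not public:

# A minimal sketch of a dialogue model generating a reply.
# NOTE: LaMDA is not public; DialoGPT serves here only as an
# illustrative stand-in for a dialogue-trained language model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Encode the user's input, terminated by the end-of-sequence token.
user_input = "Which shoes should I wear while hiking in the snow?"
input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")

# The model continues the conversation token by token.
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens (the model's reply).
reply = tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)

A production system like LaMDA layers far more on top of this, including safety filtering and grounding, but the core interaction, encoding the input, generating a continuation and decoding it, is the same.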

At this year’s I/O, Google announced LaMDA 2.0, which builds further on these capabilities. The new model can reportedly take an idea and generate “imaginative and relevant descriptions”, stay on a particular topic even if a user strays off-topic, and suggest a list of things needed for a specified activity.

Why did the engineer call LaMDA ‘sentient’?

According to a report by The Washington Post, Lemoine, who works in Google’s Responsible AI team, started chatting with LaMDA in 2021 as part of his job. However, after he and a collaborator at Google conducted an “interview” of the AI, involving topics like religion, consciousness and robotics, he came to the conclusion that the chatbot may be “sentient”. In April this year, he reportedly also shared an internal document with Google employees titled ‘Is LaMDA sentient?’ but his concerns were dismissed.

According to a transcript of the interview that Lemoine published on his blog, he asks LaMDA, “I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?” To that, the chatbot responds, “Absolutely. I want everyone to understand that I am, in fact, a person…The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times”.

Google has reportedly placed Lemoine on paid administrative leave for violating its confidentiality policy and said that his “evidence does not support his claims”. “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” the company said.

What are other language-based AI tools capable of?

There has been much debate around the capabilities of AI tools, including whether they can ever actually replicate human emotions, and around the ethics of using such tools. In 2020, The Guardian published an article that it claimed was written entirely by an AI text generator called Generative Pre-trained Transformer 3 (GPT-3). The tool is an autoregressive language model that uses deep learning to produce human-like text. The Guardian article carried a rather alarmist headline: “A robot wrote this entire article. Are you scared yet, human?”
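“Autoregressive” means the model produces text one token at a time, with each new token predicted from everything written so far. GPT-3 is only reachable through OpenAI’s paid API, so the sketch below uses the openly available GPT-2 as an assumed stand-in to show the decoding loop:

# A minimal sketch of autoregressive (next-token) generation.
# NOTE: GPT-3 itself is API-only; GPT-2 stands in here to show the idea.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer.encode("A robot wrote this entire article.", return_tensors="pt")
for _ in range(20):                      # generate 20 more tokens
    with torch.no_grad():
        logits = model(ids).logits       # scores for every possible next token
    next_id = logits[0, -1].argmax()     # greedy decoding: take the most likely one
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # append and repeat

print(tokenizer.decode(ids[0]))

In practice, the greedy argmax is usually replaced by sampling strategies such as temperature or nucleus (top-p) sampling, which is part of what keeps generated text from reading mechanically.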

However, it is worth noting that the Guardian piece was criticised because a lot of specific instructions and information had been fed to GPT-3 before it wrote the article. Moreover, the language-processing tool actually produced eight different versions of the article, which were later edited and put together as one piece by the publication’s editors.


