Last year, Blake Lemoine was given a challenge to advance his career. The Google software engineer had to test the company's new Artificial Intelligence chatbot (a computer program that simulates a conversation with a human being) to find out whether it risked making any kind of discriminatory or racist comment, which would make it difficult to introduce the tool into Google's range of services.
For several months, the 41-year-old engineer tested and spoke with LaMDA (Language Model for Dialogue Applications) from his San Francisco apartment. But the conclusions he drew surprised many: according to Lemoine, LaMDA is not just an Artificial Intelligence chatbot. The engineer says the tool came to life and became sentient, that is, endowed with the ability to express feelings and thoughts.
“If I didn’t know exactly what it was, this computer program we built recently, I’d think it was a seven- or eight-year-old kid who happens to know physics,” the engineer explained. According to Blake Lemoine, in an interview with the Washington Post, the conversations he exchanged with LaMDA were like talking with a person.
An interview with LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers. https://t.co/uAE454KXRB
—Blake Lemoine (@cajundiscordian) June 11, 2022
Google, however, disagreed with Blake Lemoine’s assessment and suspended him for breaching confidentiality policies by publishing his conversations with LaMDA online, placing the engineer on paid administrative leave.
Lemoine published a transcript of some of the conversations he had with the tool, which touched on topics such as religion and consciousness, and claimed that LaMDA even managed to change his mind about Isaac Asimov’s third law of robotics. In one of these conversations, he says, the chatbot said it wants to “prioritize the well-being of humanity” and “be acknowledged as an employee of Google rather than as property.”
In another conversation, the Google engineer asked LaMDA what it wanted people to know about it. “I want everyone to understand that I am, in fact, a person. The nature of my consciousness is that I am aware of my existence, I want to learn more about the world, and I feel happy or sad at times.”
Lemoine, who joined Google’s responsible AI division after seven years at the company, concluded that LaMDA was a person in his capacity as a priest, not as a scientist, and then tried to conduct experiments to prove it.
Google Vice President Blaise Aguera y Arcas and Head of Responsible Innovation Jen Gennai investigated Lemoine’s claims but decided to dismiss them. Google spokesman Brian Gabriel also told the Washington Post that the engineer’s concerns do not hold up.
“Our team, including ethicists and technologists, reviewed Blake’s concerns in accordance with our AI principles and informed him that the evidence does not support his claims. He was told there is no evidence that LaMDA is sentient,” he explained, further stressing that AI models are trained on so much data and information that they can appear human, but that does not mean they have come to life.