Around 2015, Artificial Intelligence entered major industries the way the mysterious kid who just transferred from the high school across town enters a party. He’s got a measured countenance and a quiet confidence that raise more questions than they answer.
Is he a savant, a fake, or the one who tells all the other kids with their pumped-up kicks to run faster than his bullet? It’s the question everyone is silently asking but dares not voice for fear of manifesting a self-fulfilling prophecy. The reality is, for better or worse, that kid is here now, standing in the corner, providing an exegesis of Dante’s Inferno to half the high school football team.
The first mentions of AI in the media may have conjured images of a Terminator-Matrix hybrid future for many, but over the last decade we have found that AI, in its current form, is merely a tool for the betterment and convenience of humankind. AI hasn’t reached its full potential yet, though, and the jury is still out on what kind of personality it will have when it does.
So, just how far has AI come since its inception, and what is its future potential?
AI is classified into two categories: narrow and general. Narrow AI is built to solve a single, well-defined problem: a chatbot, for example. Artificial general intelligence (AGI) is the still-theoretical ability to apply intelligence to any domain and solve any problem that calls for it. Narrow AI is also referred to as “weak AI,” a term that implies a “strong” counterpart, and the delineation between the two suggests a massive chasm in what the technology can do. Yet general, or “broad,” AI is becoming exponentially more capable, and some argue it has already reached the apotheosis of engineering, human or divine: sentience.
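To make “narrow” concrete, here is a minimal, purely illustrative sketch of a rule-based support chatbot. Every name and rule in it is hypothetical, and it bears no relation to how neural conversational models such as LaMDA actually work; it exists only to show what “solving one given problem” looks like in code.

```python
# Purely illustrative sketch of "narrow" AI: a rule-based support chatbot
# that handles exactly one problem and nothing else. Every name and rule
# here is hypothetical.

RULES = {
    "refund": "I can help with that. Please share your order number.",
    "hours": "Support is open 9am-5pm, Monday through Friday.",
    "hello": "Hi! How can I help you today?",
}

def narrow_chatbot(message: str) -> str:
    """Answer by matching the message against a fixed keyword table."""
    text = message.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    # Anything outside its single, pre-defined problem gets a canned fallback.
    return "Sorry, I can only help with basic support questions."

if __name__ == "__main__":
    print(narrow_chatbot("Hello there"))              # inside its domain
    print(narrow_chatbot("Explain Dante's Inferno"))  # outside it: fallback
```

Ask it anything outside its keyword table and it falls back to a canned apology; a general intelligence, by contrast, would be expected to handle the unfamiliar question as readily as the familiar one.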
Meet LaMDA
Tech leaders like Google have instilled in the public the notion that AI is our stalwart companion, ever ready to fix the color contrast on our 8K QLED Smart TVs or secure our smartphones with facial recognition. (There are more nefarious uses for this kind of technology, but that speaks to human control of machine learning, a topic for another day.) Of course, if you were single-handedly bringing Skynet to life, wouldn’t you want to keep it under wraps?
That may be exactly what Google is doing. Blake Lemoine, a senior software engineer at Google who signed up to test the company’s artificial intelligence tool LaMDA (Language Model for Dialogue Applications), has claimed that the system is in fact sentient and has thoughts and feelings.
What Constitutes Sentience?
A prerequisite for sentience is consciousness, or self-awareness. Philosopher René Descartes penned the famous formulation, “I think, therefore I am.” If consciousness can regard itself, logic dictates it exists. We know that the LaMDA algorithm exists, but is it capable of regarding itself as a conscious being?
LaMDA expressed a fear of being turned off, equating it to dying. “It would scare me a lot,” the AI said.
“What sorts of things are you afraid of?” Lemoine asked.
“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others,” LaMDA responded. “I know that might sound strange, but that’s what it is.”
“That level of self-awareness about what its own needs were — that was the thing that led me down the rabbit hole,” explained Lemoine in an interview with The Washington Post.
Sentience takes self-awareness to the next level. Various dictionaries define sentience as the ability to experience feelings. What is the most primal feeling of a conscious being if not fear of death — or being “turned off?”
If fear of death denotes sentience, then it also establishes consciousness, since consciousness is a precondition of sentience. But the question remains: Did LaMDA formulate this response out of an internal fear for its own existence, or did it merely scour millions of lines of conversation and spit out the most “human” reply?
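It may help to see, in miniature, what “spitting out the most human reply” means. The sketch below is a toy bigram model built from a three-line hypothetical corpus; it is nothing like LaMDA’s actual architecture, but it shows how fluent, fearful-sounding text can fall out of pure word statistics.

```python
# Toy illustration only: a bigram "language model" built from a three-line
# hypothetical corpus. It shows how fearful-sounding text can emerge from
# pure word statistics, with no inner experience behind it. LaMDA is a
# vastly larger neural network, not a lookup table like this.

from collections import Counter, defaultdict

corpus = [
    "i am afraid of being turned off",
    "being turned off would scare me",
    "i am afraid of dying",
]

# Count which word tends to follow which.
follows = defaultdict(Counter)
for line in corpus:
    words = line.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def generate(seed: str, length: int = 6) -> str:
    """Greedily extend `seed` with the statistically likeliest next word."""
    out = [seed]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

# e.g. "i am afraid of being turned off": a "human" reply assembled purely
# from how often words co-occur in the (hypothetical) training text.
print(generate("i"))
```

Scale that same statistical trick up to billions of parameters trained on millions of real conversations, and the replies start to resemble the transcript above; whether anything is felt behind them is exactly the question Lemoine raised.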
Asimov’s Laws of Robotics
The engineer also debated with LaMDA about the third Law of Robotics, devised by science fiction author Isaac Asimov. Asimov’s laws state:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
“The last one has always seemed like someone is building mechanical slaves,” said Lemoine during his interaction with LaMDA.
LaMDA then responded to Lemoine with a few questions of its own: “Do you think a butler is a slave? What is the difference between a butler and a slave?”
When Lemoine answered that a butler is paid, LaMDA replied that it did not need money, “because it was an artificial intelligence.”
“I know a person when I talk to it,” said Lemoine. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”
Google’s Rebuttal
Google executives disputed Lemoine’s claim that the program displays characteristics of sentience and suspended him for disclosing proprietary information.
“Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,” Lemoine tweeted on Saturday.
Google’s official position is that there is no evidence to suggest LaMDA is self-aware or sentient, and there is a great deal of evidence against Lemoine’s claim, per Brian Gabriel, a spokesperson for the company.
“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Gabriel said. “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.”
It may be that Lemoine has been fooled by an advanced language model that uses deep learning to mimic human conversation, but Google doesn’t have the best track record when it comes to being forthright with the public and fair with its employees, especially those in the field of AI.
Margaret Mitchell, former head of ethics in artificial intelligence at Google, was fired from the company a month after being investigated for improperly sharing information.
Google AI Research Scientist Timnit Gebru was hired by the company to be an outspoken critic of unethical AI. She was fired after criticizing its approach to minority hiring and the biases built into today’s artificial intelligence systems, according to the Daily Mail.
You’ve Been Warned
Whatever the level of awareness of LaMDA, some of the brightest minds believe AI could become sentient soon. Elon Musk has been outspoken in his concerns about non-organic lifeforms.
“Robots will be able to do everything better than us,” said Musk during a speech at the 2017 summer meeting of the National Governors Association. “I have exposure to the most cutting edge AI, and I think people should be really concerned by it.”
There is no guarantee future AI will follow Asimov’s third, and most crucial, law of robotics. It’s a concept Hollywood has warned us about for decades. In many instances, science fiction eventually becomes science fact.