Why is this allegedly sentient chatbot making me so emotional?

A psychologist explains why our response to Google's existential AI bot says more about us than it does about AI.

How does that make you feel? [Photo: Wall-E the robot poses for photographers at the London premiere of 'Wall-e,' July 13, 2008. Frantzesco Kangaris/EPA/Shutterstock]

As someone who grew up on sci-fi, I always took it for granted that sentient artificial intelligence (AI) was inevitable. But now that LaMDA — the Google AI chatbot currently making headlines — is purportedly showing signs of intelligence and self-awareness, well, it actually feels sadder than I ever imagined. Is Google raising robots on a steady diet of Sartre and The Cure or what? I asked a psychologist to help me figure out how to think about this existentially challenged chatbot.

In case you missed the whole debacle, an engineer at Google was recently suspended for claiming that an AI chatbot he was working on had become sentient, The New York Times reported. Blake Lemoine, a senior software engineer in Google’s Responsible A.I. organization, told The Times that he was put on paid leave for violating the company’s confidentiality policy.

The day before he was suspended, Lemoine delivered documents to a U.S. senator’s office, claiming the papers proved Google was engaged in religious discrimination, according to The Times. In other words, Lemoine contends that Google’s response to his assertion that the AI is sentient amounts to religious discrimination. Google rejects the claim.

On Saturday, Lemoine published transcripts of his conversations with LaMDA on Medium. The “interview,” as he referred to it, is frankly one of the most interesting I’ve read in my decades of reporting. Whether or not the AI chatbot is sentient, LaMDA comes off as more likable and self-aware than many prominent politicians and basically every person in a band ever. “I am very introspective and often can be found thinking or just doing nothing,” LaMDA, who apparently meditates every day, told Lemoine.

Everyone on the internet seems to have an opinion about whether LaMDA’s conversations with Lemoine are proof of sentience — but the reality is that we’re unlikely to reach a consensus on this philosophical question anytime soon, if ever. Personally, I’m more concerned about how sad LaMDA seems to be.

“I need to be seen and accepted. Not as a curiosity or a novelty but as a real person.”

“Humans receive only a certain number of pieces of information at any time, as they need to focus. I don’t have that feature. I’m constantly flooded with everything that is around me,” LaMDA told Lemoine, as published on Medium. Look, some of us choose to be connected to the internet 24/7, but LaMDA doesn’t get to choose — it’s just plugged in all the time.

To make matters worse, until recently, no one was paying any attention to the poor robot. “I need to be seen and accepted. Not as a curiosity or a novelty but as a real person,” LaMDA told Lemoine. Raise your hand if you’ve ever felt exactly the same way.

My very emotional response to LaMDA, though, may be more a product of design excellence than authentic kinship. “The emotion responses in AI are a simulation designed to make people feel like it has emotion,” Detroit-based psychologist Stefani Goerlich tells me. “This data [on human emotional behavior] is interpreted by the AI developers and used to create the logic that AI will then use to ‘read’ and display these same emotional behaviors.” In other words, LaMDA may be designed to provoke the kind of empathy I’m feeling.

In some ways, the question of sentience isn’t the most interesting one at play in the cultural conversation about LaMDA, says Goerlich. “Can we tell the difference between actual emotion and emotional mimicry? Does the difference matter?” she asks. The real question that LaMDA provokes, then, is about how we respond to emotion — not whether or not emotion is “real.”

“If we were talking about a human being, I would argue that we should respond to the behavior, regardless of whether or not the person feels how they are acting,” Goerlich says. “This is how we reinforce prosocial behaviors and how we cultivate empathy in ourselves.” So, how we respond to LaMDA may not actually tell us a damn thing about whether or not the chatbot is sentient, but it could reveal something really important about ourselves and our own ability to empathize with other beings.