Google’s use of AI to mimic humans was unethical and bad UX (updated)

Google CEO Sundar Pichai took a giant leap in the wrong direction yesterday. At his company’s I/O developer conference, Pichai wowed the crowd by using a virtual assistant to fool people, making unsuspecting humans the target of laughter and derision and raising serious ethical questions about future uses for AI.

Pichai played for the audience a conversation between Google Assistant and a hair salon assistant:

Hair salon: Hello, how can I help you?

Google Assistant: Hi, I’m calling to book a woman’s haircut for a client. Um, I’m looking for something on May 3rd.

Hair salon: Sure, give me one second.

Google Assistant: Mm-hmm.

The virtual assistant’s utterance of “Mm-hmm” generated peals of laughter from those eavesdropping on the equivalent of a prank call. (1:56:36 of the video above.) Listen to this poor sap of a hair salon receptionist! She’s completely falling for it! Let’s listen and see what happens next!

Hair salon: Sure, what time are you looking for around?

Google Assistant: At 12 pm.

Hair salon: We do not have a 12pm available. The closest we have to that is a 1:15.

Google Assistant: Do you have anything between 10am and 12pm?

Hair salon: Depending on what service she would like. What service is she looking for?

Google Assistant: Just a women’s haircut, for now.

Hair salon: Okay, we have a 10 o’clock.

Google Assistant: 10am is fine.

Hair salon: Okay, what’s her first name?

Google Assistant: The first name is Lisa.

Hair salon: Okay, perfect. So, I will see Lisa at 10 o’clock on May 3rd.

Google Assistant: Okay, great, thanks.

Hair salon: Great. Have a great day. Bye.

The call’s conclusion generated more laughter and applause from the ranks of I/O attendees and the Google CEO. “That was a real call you just heard,” Pichai said as the chuckling continued. He explained that the call was powered by a new technology called Google Duplex. “It brings together all of our investments over the years: natural language understanding, deep learning, text-to-speech.”

As if outwitting a hair salon weren’t enough, Pichai then played the same trick on a restaurant, with the juxtaposition of the employee’s heavily accented English and the Google Assistant’s use of “er” and “mm-hmm” provoking still more sniggering.

“Again,” Pichai said, shaking his head at the wonderful hilarity of it all, “that was a real call.”


Google CEO Sundar Pichai at Google I/O on May 8, 2018.

Wrong UX

Google’s drive to make Assistant’s voice patterns indistinguishable from a human’s raises the question: why? Just because it can be done is not reason enough to do it.

I can imagine user experience experts shaking their heads at this deception. While it’s certainly more pleasant to interact with a human-sounding voice assistant than with a mechanized, robotic one, when I’m interacting with one, I want to know it. What did Google lose by not teaching its bot to introduce itself? When calling the hair salon, it could have said, “Hi, I’m Google Assistant calling on behalf of a client.”

It should have done so.

While it may have been jarring for the receptionist, the restaurant employee, and perhaps for you and me (assuming we eventually receive these calls), our acceptance of them will grow with time. There’s little reason to think they won’t one day be as common as voice menus.

But our acceptance of seemingly autonomous voice assistants will depend on trust. And trust demands being able to distinguish when we’re talking to a human and when we’re talking to an AI.

Dangerous games

A world where AI-generated text, voice, and video is indiscernible from its human-generated analogs is a scary one, and the technologies to create it are already here. It’s likely not the world Alan Turing envisioned when he proposed his eponymous test.

The best way to prevent such a dystopian society — the so-called Infocalypse — is to apply the same ethics that govern human interaction to interactions with AI. Namely, honesty and truthfulness.

Pichai made two ethical mistakes at I/O. The first was turning the hair salon and restaurant employees into dupes. It was really no better than a prank call staged for the amusement of his developer guests.

Pichai’s second offense was encouraging the development of an AI that impersonates humans. Little good can come of this, and it risks setting a dangerous precedent. Instead of Duplex, Google ought to name this new technology Deception.

UPDATED 5/11/2018: In response to criticism over the Duplex-powered Assistant not identifying itself as a bot, Google issued the following statement: “We are designing this feature with disclosure built-in, and we’ll make sure the system is appropriately identified. What we showed at I/O was an early technology demo, and we look forward to incorporating feedback as we develop this into a product.”