Woe is me: a cautionary tale of two chatbots

Emotional chatbots may not detect important cues that humans can pick up on in face-to-face interactions. (Image credit: Getty)

The BBC’s recent test of two popular emotional support chatbots was devastating. Designed to offer advice to stressed, grieving, or otherwise vulnerable children and young adults, the Wysa and Woebot apps failed to detect some pretty explicit indicators of child sexual abuse, drug taking, and eating disorders. Neither chatbot instructed the (thankfully imaginary) victim to seek help, and both instead offered up wildly inappropriate pablum.

Inappropriate responses ranged from advising a 12-year-old being forced to have sex to “keep swimming” (accompanied by an animation of a whale), to telling another user “it’s nice to know more about you and what makes you happy” when they admitted they were looking forward to “throwing up” in the context of an eating disorder.

Yet both of these chatbots are among the top emotional support apps. Woebot has received glowing media coverage and attracted impressive funding, while the founders of Wysa say they’re now drawing on as many as 20 million conversations. As such, it’s reasonable to infer that these companies use some of the most sophisticated technology and the richest data banks available.

Still, as the BBC showed, they’re far from perfect. Already, some experts are questioning whether entrepreneurs are playing a dangerous game, asking whether it is “morally and ethically fair to market these chatbots as ‘solving a nation’s mental health problem.’”

It’s an important question as emotional support bots are expected to keep growing in popularity. But before sounding the alarm, it’s worth noting that such apps have a long, long way to go before gaining true mainstream acceptance. It’s also doubtful that they’re supplanting real therapists (or friends) for those needing serious help. Again, as the BBC found, and anyone who has interacted with a customer service chatbot knows, the technology just isn’t that sophisticated — yet.

For now, emotional support chatbots may best serve as an outlet for individuals intimidated by the idea of face-to-face interactions, or as digital companions of a sort. Any textual mishaps and poor interactions are more likely to cause user frustration than widespread harm.

As the Wysa and Woebot examples show, we’re still at the early stages of letting these AIs out into the wild. With each baby step, entrepreneurs should proceed with caution and remember that their products are being released into an environment of real people with real problems, not just imaginary test cases.