Healthbots: the new caregivers

The humanoid NAO, equipped with Zora software, leads nursing home residents in a gym session. (Photo by: BSIP/UIG via Getty Images)

Movie tickets bought, travel booked, customer service problems resolved. Chatbots perform so many tasks that the best ones blend into the background of everyday transactions and are often overlooked. They're being adopted seamlessly by one industry after another, but their next widespread application poses unique challenges.

Healthbots are poised to become the new frontline for triage, replacing human medical professionals as the first point of contact for the sick and the injured.

A recent report from Juniper Research predicts that we'll be interacting with a veritable legion of AI-powered chatbots as part of our regular healthcare over the next five years. By 2023, it says, chatbots will become the "first responders for citizens' engagements with healthcare providers," handling as many as 2.8 billion interactions annually, a huge leap from the 21 million interactions recorded in 2018.

Clearly, there are benefits like efficiency and cost savings, but we know that chatbot technology is, at present, far from perfect. Could a pediatric care healthbot miss the early signs of autism, or a gynecological healthbot fail to ask the right follow-up questions to detect ovarian cancer? While humans are not perfect either, it seems more likely that AI would screw this up.

After all, the physical-health equivalent of Woebot missing a child's hints about sexual abuse would be catastrophic. Yet there are two objections to this comparison worth addressing.

First, it is true that physical medical symptoms tend to be more formulaic and therefore more legible to a programmable piece of software. Certain combinations of symptoms can be matched by AI, and already are, with high rates of success. Complex emotional states, however, give off nuanced signals that can be much more difficult to detect.

This is fair. Nevertheless, whether the signals relate to medical or emotional health, they still come through the same conduit: we humans. It can be surprisingly difficult for humans and AI alike to interpret physical inconsistencies or detect pain, let alone to identify its source. People are better at drawing the root of the problem out of other people because, over millennia, we have developed an infinitely complex repertoire of gestures, grumbles, and groans that transcends words, and certainly the current performance of NLP.

Second, patients are already summarily misdiagnosed or turned away after their symptoms are accidentally overlooked. Indeed, misdiagnosis or failure to diagnose is already a common cause of malpractice suits. What makes us think that AI can deliver absolute perfection? Put another way, it really only needs to be better than humans. But it would also be unwise to accept that fallibility is inevitable. We don't accept it in ourselves, nor should we accept it in AI. We should strive for perfection; the only question is how.

Nearly all triage is based on correlation and inference from symptoms, and this is especially true of healthbots. Though they have access to vast banks of data, they can only cross-reference; they do not have the experience-led intuition of medical professionals. Combining machine systems and human systems is an obvious place to start.
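To make that limitation concrete, here is a minimal sketch in Python of what pure cross-referencing looks like. Everything here is invented for illustration: the symptom knowledge base, the conditions, and the scoring are placeholders, not any real healthbot's logic. The point is that the bot can rank candidate conditions by overlap, but it has no intuition beyond that arithmetic, and it is silent about anything not already in its tables.

```python
# Hypothetical sketch: a healthbot that can only cross-reference reported
# symptoms against a precompiled knowledge base (all entries invented).

KNOWLEDGE_BASE = {
    "influenza": {"fever", "cough", "fatigue", "body aches"},
    "migraine": {"headache", "nausea", "light sensitivity"},
    "appendicitis": {"abdominal pain", "fever", "nausea"},
}

def triage(reported_symptoms):
    """Rank candidate conditions by symptom overlap (Jaccard similarity)."""
    reported = set(reported_symptoms)
    scores = {}
    for condition, known in KNOWLEDGE_BASE.items():
        overlap = reported & known
        if overlap:
            scores[condition] = len(overlap) / len(reported | known)
    # Highest-overlap conditions first; anything outside the knowledge
    # base simply never appears, which is exactly the blind spot at issue.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(triage(["fever", "nausea", "abdominal pain"]))
# roughly: [('appendicitis', 1.0), ('migraine', 0.2), ('influenza', 0.17)]
```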

Healthbot developers could create robust feedback loops to capture errors and correct them after review by humans. This means having both human-led and machine-led mechanisms to follow up with patients and evaluate the care received. Working in tandem, humans and healthbots can identify their weak spots, some of which may otherwise be insurmountable.
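One way to picture such a feedback loop, as a sketch only, with every name invented for illustration: each bot decision is queued for audit, a clinician re-evaluates the case, and any disagreement is logged as a captured weak spot that developers can then correct.

```python
# Minimal human-in-the-loop feedback sketch (all names hypothetical):
# the bot triages, a human reviews, and confirmed errors are logged.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TriageRecord:
    patient_id: str
    bot_assessment: str
    human_assessment: Optional[str] = None  # filled in on review

@dataclass
class FeedbackLoop:
    pending_review: list = field(default_factory=list)
    error_log: list = field(default_factory=list)

    def record(self, rec: TriageRecord):
        # Machine-led follow-up: every bot decision is queued for audit.
        self.pending_review.append(rec)

    def human_review(self, rec: TriageRecord, human_assessment: str):
        # Human-led follow-up: a clinician re-evaluates the same case.
        rec.human_assessment = human_assessment
        if human_assessment != rec.bot_assessment:
            self.error_log.append(rec)  # a captured weak spot

loop = FeedbackLoop()
case = TriageRecord("p-001", bot_assessment="low urgency")
loop.record(case)
loop.human_review(case, human_assessment="high urgency")
print(len(loop.error_log))  # 1 disagreement flagged for correction
```

The design choice that matters here is the error log: it turns disagreements between human and machine into data, which is what makes the weak spots visible and correctable rather than silently repeated.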

Without this balanced human-and-machine approach, we risk the health of the very people healthbots are meant to protect: us.