Brain-computer interfaces raise fresh AI privacy and bias concerns

This photo shows Meredith Whittaker, co-founder of the AI Now Institute, onstage at the O'Reilly Artificial Intelligence Conference in San Francisco on September 6, 2018, where she warned that “AI technologies are increasingly used to manipulate, monitor and catalog people in ways that serve certain forms of centralized power.” (Image credit: O'Reilly Conferences)

AI research scientist Meredith Whittaker wants us to wake up and smell the algorithms. “AI technologies are increasingly used to manipulate, monitor and catalog people in ways that serve certain forms of centralized power,” Whittaker said at the O’Reilly 2018 AI Conference in San Francisco. This isn’t happening in a vacuum, she warned. “We’re in the age of Trump and we’re seeing the shift toward authoritarianism and an erosion of civil rights and liberties.”  

Whittaker is the co-founder of the AI Now Institute at New York University, which does interdisciplinary research on the social implications of artificial intelligence, focusing on issues like civil rights and liberties, labor and automation, bias and inclusion, and safety and critical infrastructure. She also co-founded M-Lab with Vint Cerf and has advised the White House and the European Union on Internet policy issues. To illustrate her concerns about the potential downsides of where technology is headed, Whittaker focused on the convergence of neuroscience, human enhancement, and AI.

Powered by recent advances in deep learning technology, large tech companies are racing to develop new brain-computer interfaces that would allow users to “type with your brain,” as Facebook explains its effort. Microsoft recently published a patent for “changing applications using neurological data,” and Elon Musk’s Neuralink is developing a product that promises, as Whittaker put it, “lag free interaction between our brains and external devices so you can control your phone, your Nest thermostat, your smart city, whatever, using only your mind.” Musk’s product consists of a high-bandwidth neural lace that gets injected and embedded into the brain. In the Microsoft version, users would wear an EEG band on the head that would allow them “to think” their commands.

This photo shows Regina Dugan of Facebook unveiling its brain-computer interface ambitions at its annual developers conference in April 2017. (Image credit: Facebook)

Whether this melding of human and machine thrills you or creeps you out, Whittaker sounded alarms about the technology’s potential for abuse. By her estimation, there are only about seven companies globally with the necessary computational power, wealth and data to develop this new frontier — the same companies that are the current leaders in AI. Think Google, Facebook, Microsoft, et al. According to Whittaker, the technological breakthroughs propelling the current AI gold rush “are all contingent on the vast power and resources of the current tech business ecosystem.” This means that the data generated by the new brain-machine interfaces, “your thoughts in the form of neural data,” will be stored in a centralized server infrastructure controlled by a handful of players.

“I think we need to seriously consider the risks of a future in which a handful of private companies whose incentives may or may not align with ours, own and monetize a map of our lives, of ourselves and how we respond and feel at any given moment,” Whittaker cautioned. And if national security agencies were to come calling, would these corporations protect your privacy? You’d better hope so, because they could turn over “thought logs” containing the most intimate information about your psyche. Or what if the data were turned over to ICE, your employer or your health insurer?

The other danger Whittaker raised concerns AI bias and inaccuracy. There is growing recognition in the field that more attention has to be paid to the bias pumping through AI systems. The problem of datasets that are presented as universal or neutral, but in fact come from narrow sectors of the population, is showing up in applications such as facial recognition software. According to Whittaker, this software has been shown to be 30 percent less accurate for dark-skinned women than for white men.

This photo illustrates how one neuroscience lab researching brain-computer interfaces can afford only one $200,000 helmet to collect needed data. (Image credit: Video still/O'Reilly Conferences)

The same kind of data bias occurs in the neuroscience field. Whittaker described a lab that uses an expensive helmet to collect data from the brain. The $200,000 helmet was built for people with large heads, however, which means it works better on men than on women — and because of the hefty price tag, the lab has just one of them. The result: the lab’s data and experiments “slowly but very meaningfully center men as the norm and women as ‘nice to have’.”

These are just two examples of how data reflect the messiness of the world we live in — all prejudices, power dynamics and legacies of marginalization included.  

Who gets to define the normative archetype of the human? For Whittaker, that is the core question we need to ask at this moment, when AI is expanding into every facet of our lives and the tech industry building it is consolidating and concentrating its power.

To develop the human enhancement technologies that Elon Musk and others dream of creating, a model of what counts as “the human” is required, Whittaker said. “Is this an affluent western version of normal? Is it the normal of an SF tech VP or the shuttle driver who transports them?” That’s why it’s essential to constantly interrogate the possibly biased data, algorithms, and models of “the human” in AI, and to make sure that “normal” doesn’t turn out to simply encode current marginalization.

Until recently, the Diagnostic and Statistical Manual of Mental Disorders classified homosexuality and transsexuality as abnormal, for instance. That’s just one historical example of what’s at stake with AI in the hands of a privileged few, and why, as Whittaker cautioned, it’s “impossible to ignore the immense damage that oppressive classifications of normalcy have done to those who fall outside.”