What is the government’s role in regulating AI?

One of the most significant challenges in AI development today is bias in AI systems (a topic we’ve covered a number of times). To spark a discussion about this issue, and to shine a light on possible ways to address it through policy, this year’s CES conference hosted a “Solving Bias in Artificial Intelligence” panel.

The panel featured Bärí Williams, VP of Legal, Policy, and Business Affairs at All Turtles; Lynne Parker of the White House’s Office of Science and Technology Policy; and Sunmin Kim, Technology Policy Advisor to Senator Brian Schatz.

The regulation question

What is the government’s role in mitigating bias in AI, and what should it be? As Williams noted, Mark Zuckerberg’s testimony to Congress last year didn’t inspire much confidence in U.S. lawmakers’ understanding of how to regulate technology.

“If you don’t have people who understand how the technology works, are they the right people to set and implement those policies?” asked Williams. “You need people that can advocate and can say, ‘This is how people are using this tool. This is how you should think about regulating this tool based on the usage.’” Gesturing to the women beside her onstage, Williams added, “I appreciate that these ladies are here and are responsible for that and are being mindful about making sure [legislators] do understand the issues.”

Kim’s position on Capitol Hill has given her a clear view of how legislators approach this problem. “From a policymaker’s point of view, it doesn’t matter if a human is discriminating or if a machine is,” she said. “If an algorithm finds that the better indicator of how to price a product is race, we have to think: that may be good for commerce, but is it good for society? This is where the FTC in the next five or ten years could play a larger role.”
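
Kim’s pricing example describes a real, measurable failure mode: a model can discriminate even when the protected attribute is withheld, because other features stand in for it. As a purely illustrative sketch (not something shown on the panel), the following Python snippet uses made-up data and a hypothetical `zip_code_risk` feature to show how a pricing model trained without race can still reproduce a group-level price gap through a correlated proxy:

```python
import numpy as np

# Illustrative only: synthetic data, hypothetical feature names.
rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (never shown to the model).
group = rng.integers(0, 2, size=n)

# "zip_code_risk" is a proxy: it correlates with group membership.
zip_code_risk = 0.8 * group + rng.normal(0, 0.3, size=n)
usage = rng.normal(5, 1, size=n)  # a legitimate pricing feature

# Historical prices were (unfairly) higher for one group.
price = 100 + 10 * usage + 25 * group + rng.normal(0, 5, size=n)

# Fit ordinary least squares on the non-protected features only.
X = np.column_stack([np.ones(n), usage, zip_code_risk])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
predicted = X @ coef

# The model never saw `group`, yet its prices differ by group,
# because zip_code_risk acted as a stand-in for it.
print("mean predicted price, group 0:", predicted[group == 0].mean())
print("mean predicted price, group 1:", predicted[group == 1].mean())
```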

Bias in business

From a commerce perspective, Williams contended that the companies implementing AI are not always properly equipped to address issues of bias. Some, she said, are very tapped into these issues and are working on solving them, but many are not, and those companies perpetuate damaging problems.

“I don’t think people wake up and think, ‘How can I negatively impact someone’s life experience?’” said Williams. “I think people literally walk through life with blinders on. And there are different sets of blinders. Mine are from being a black woman from California, someone else’s may be from being a white man from Ohio… and that isn’t right or wrong.” The problem, she said, arises when people ignore their blinders and create technology that impacts people they aren’t seeing. “It’s important that companies are cognizant of that issue, and have [diverse] people test products in beta mode.”

Transparency is key

Some tech companies have a long way to go in working out issues of bias in their technology. Williams mused, “My grandmother and mother always say, ‘Do what you can with what you have where you are.’ And tech companies are not telling us what they’re doing with what they have where they are. … They’re not very transparent, and it would do all of us a world of good to understand, one, what the problem is, and two, what they’re doing to mitigate the damage.”

An important part of any solution to bias in AI is diversifying the rooms in which decisions are made about this technology. “A lot of this is born out of not having diverse workforces and folks who can tell you what the problem is,” Williams said. “If you don’t have people from marginalized communities in the room, you’re going to miss things… and you’ll see that reflected in the stock price.”

Dire consequences

Bias in the algorithms tech companies build has an impact that reaches far beyond Silicon Valley. “I worry about it from the standpoint of the criminal justice system,” Williams said. “You have issues where people are using predictive analytics for where to place officers, who to arrest, where to arrest them, and sentencing guidelines. As a mother of black children, that is scary to me.”

Where there is hope

As Parker pointed out, AI also has immense potential to be a tool for good. “Human decision-making is rife with bias. AI can be a tool to detect it and mitigate it,” she said. “And as companies adopt these technologies, we can use them to reach communities that are underserved.”
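
Parker’s point about AI as a detection tool has concrete counterparts in practice. One widely used first check, sketched below in Python with hypothetical data (this is a general auditing technique, not one the panel described), is the “four-fifths rule” from U.S. employment guidelines: compare favorable-outcome rates across groups and flag ratios below 0.8:

```python
import numpy as np

# Illustrative only: hypothetical model decisions and group labels.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)               # 0/1 protected attribute
approved = rng.random(1000) < (0.5 + 0.2 * group)   # biased decisions

def disparate_impact_ratio(decisions, groups):
    """Ratio of favorable-outcome rates between the two groups.
    Under the four-fifths rule, values below 0.8 are a common
    red flag for adverse impact."""
    rate_0 = decisions[groups == 0].mean()
    rate_1 = decisions[groups == 1].mean()
    return min(rate_0, rate_1) / max(rate_0, rate_1)

ratio = disparate_impact_ratio(approved, group)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact: review the model's decisions.")
```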

Time will tell whether tech companies make honest efforts to eliminate bias in their products, but the fact that the largest tech conference of the year hosted a panel discussion on the topic is one indication that people are, at the very least, paying attention.