With AI and justice for all

Drone equipped with a camera flying in the air.

Look at today’s technology headlines and odds are you’ll see something about artificial intelligence. Whether it’s a new startup or announcements from established companies, practically everyone is working on AI. From smart assistants to connected devices, image and face recognition, algorithms, and robots, we seemingly applaud every innovation. However, recent events suggest there’s a line some companies won’t cross, at least in the United States, and that line is drawn at government use of AI.

Employees at the companies making AI don’t pause when their products, warts and all, are used by private companies for any number of questionable purposes. But when government agencies opt to use these products, things come to a halt. That’s when employees revolt and vocalize their opinions, taking a stand against seeing AI weaponized.

Transparency matters

Rekognition is Amazon’s software that powers image analysis within applications. The service seemed innocuous when it launched in 2016, but the company later added several features, including real-time facial recognition and “improved” face detection. Then came revelations this spring and summer about Rekognition’s use by police. The Sheriff’s Office of Washington County (Oregon) had been piloting Rekognition for the past year “to reduce the identification time of reported suspects.” Amazon had also signed a deal with the city of Orlando.
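To ground what “reducing identification time” involves in practice, here is a minimal sketch of a face search against an indexed Rekognition collection using Amazon’s boto3 SDK for Python. The bucket, collection name, and similarity threshold are illustrative assumptions, not details from any police deployment.

    # Minimal sketch: match a probe photo against an indexed face collection
    # using Amazon Rekognition via the boto3 SDK. The bucket name, collection
    # ID, and similarity threshold below are illustrative assumptions.
    import boto3

    rekognition = boto3.client("rekognition", region_name="us-west-2")

    # Search a pre-built collection (e.g., previously indexed booking photos)
    # for faces resembling the one in a probe image stored in S3.
    response = rekognition.search_faces_by_image(
        CollectionId="example-face-collection",  # hypothetical collection
        Image={"S3Object": {"Bucket": "example-bucket", "Name": "probe.jpg"}},
        FaceMatchThreshold=80,  # only return matches of at least 80% similarity
        MaxFaces=5,
    )

    for match in response["FaceMatches"]:
        face = match["Face"]
        print(f"Face {face['FaceId']} matched at {match['Similarity']:.1f}% similarity")

A single call like this is mundane; the ACLU’s concern, as described below, is the same pipeline run continuously over live video feeds.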

Tensions rose in May when the American Civil Liberties Union (ACLU) of Northern California and dozens of civic activist organizations submitted an open letter to Amazon chief executive Jeff Bezos requesting that the company cease its dealings with law enforcement. The ACLU had obtained documents which, it claimed, proved Amazon sold Rekognition to police and used nondisclosure agreements to circumvent public disclosure.

“People should be free to walk down the street without being watched by the government,” the ACLU wrote in a blog post. “By automating mass surveillance, facial recognition systems like Rekognition threaten this freedom, posing a particular threat to communities already unjustly targeted in the current political climate. Once powerful surveillance systems like these are built and deployed, the harm will be extremely difficult to undo.”

Employees at Amazon demanded the contracts end. Their protest snowballed into a movement, spreading to Microsoft, Google, and Salesforce, where employees demonstrated against similar ongoing projects with police departments, Immigration and Customs Enforcement (ICE), and U.S. Customs and Border Protection. “The revolt is part of a growing political awakening among some tech employees about the uses of the products they build,” wrote Nitasha Tiku in Wired.

This time it’s different

Tech companies have been building AI-powered tools and devices for decades, so why the fevered uproar?

It’s not as if connected devices are without flaws: Amazon’s Alexa privacy debacle, smart TVs reportedly listening to you, Google’s “racist” algorithm within its photo app, and more. In those cases, dissent among employees barely registered compared with the recent uproar. Providing tools to government agencies isn’t new either, but producing software for consumers is one thing; producing it for law enforcement, where it could be used “against” the public, is another. And it becomes a hot-button issue when it touches civil liberties, activism, and immigration policies under the Trump administration.

Google and Clarifai are two companies known to be working with the Department of Defense as part of Project Maven. Google’s participation became a public controversy, and the company eventually opted not to renew its contract. Clarifai, meanwhile, avoided the spectacle of a public disagreement with its employees.

Clarifai CEO Matt Zeiler stated in a blog post that responsibility was a core part of the company’s values and that everyone on his team understood the nature of the work and had signed a nondisclosure agreement. But there was some pushback:

“Two employees decided they no longer wanted to be part of the initiative and were reassigned to other projects within Clarifai,” Zeiler wrote. “An important part of our culture is having employees who are actively engaged in the work that we do. We make sure they understand the projects they are asked to work on and regularly accommodate employee requests to switch or work on particular projects of interest.”

A lack of transparency, along with ethical and political concerns, is the likely catalyst for the recent dissension. The likes of Amazon, Google, Microsoft, and Salesforce are so large, with so many customers, that it’s difficult for employees to know everything that’s going on, including all the ways their technology is being used. In the aforementioned examples, transparency was key: executives failed to disclose internally what was happening with certain customers and to address employee fears about malicious uses of AI.

Brian Brackeen, the CEO of facial recognition software provider Kairos, believes the use of tech like facial recognition may infringe on our civil liberties and therefore shouldn’t be in the hands of the police:

“Facial recognition technologies, used in the identification of suspects, negatively affect people of color. To deny this fact would be a lie,” he opined in a TechCrunch article. “And clearly, facial recognition-powered government surveillance is an extraordinary invasion of the privacy of all citizens — and a slippery slope to losing control of our identities altogether.”

Avoiding China’s dystopia

China is an extreme example of what can happen when a country incorporates AI into its national security apparatus. The government has deployed cameras and facial recognition to begin tracking its 1.4 billion citizens. The country has laid out plans to become the “world’s primary AI innovation center” by 2030, which means investing up to $63 billion to grow core AI industries and establishing standards to boost efforts by Tencent and Alibaba. Experts suggest China is already on track to dominate the AI race, surpassing investments made in the U.S.

But China’s efforts are not without worry: the government is using AI to monitor citizens, correct their behavior through public shaming, and weaponize it for military ends (e.g., cyberattacks). One use case was described in a recent New York Times article: “Invasive mass-surveillance software has been set up in the west to track members of the Uighur Muslim minority and map their relations with friends and family.”

Size of financing received by AI firms by country between Q1 2012 and 2016. (Image credit: South China Morning Post)

In the U.S., we may be convinced that our democratically elected government will not use technology to violate civil rights or suppress liberty. But laws are often slow to adapt to technology, so developers must always question how their software will be used. While the European Union’s GDPR protects data privacy, no comparable law addresses the abuse of AI; tech companies operate on an honor system of sorts.

There are bona fide reasons for government agencies and law enforcement to use AI, such as improving crisis response in a disaster (natural or man-made) or bolstering security. And while there has historically been a solid friendship between Silicon Valley and government, AI brings a new dimension to the relationship. Tech firms are not part of the traditional military-industrial complex: their engineers seek to build products that solve problems, while their defense contractor counterparts build weapons to kill people. The pause by tech workers, in the face of opacity from their employers, should have been expected.

The lesson here is simple. Tech companies developing AI must remain transparent and vigilant about how their technologies are being used not only by private companies but also by the government.

Those Amazon, Google, Microsoft, and Salesforce workers who spoke out understand what’s at stake: the risk of turning countries into surveillance states.