If internet trolls are cybercriminals, can AI stop them?

Stanford University Professor Alex Stamos (L) and moderator Taylor Hatmaker speak onstage during Day 2 of TechCrunch Disrupt SF 2018 at Moscone Center on September 6, 2018 in San Francisco, California. (Photo by Steve Jennings/Getty Images for TechCrunch)

In May 2017, the WannaCry ransomware attack made international headlines. The breach, later linked to North Korea, used leaked NSA tools to target businesses running outdated Windows software. WannaCry wreaked havoc by encrypting user data and then demanding Bitcoin ransom payments. The attackers gave victims seven days to pay, threatening to delete the files of those who didn't comply.

Though a “kill switch” was ultimately discovered, the attack affected over 200,000 businesses in 150 countries. Estimates put the damage WannaCry caused in the hundreds of millions – and perhaps even billions – of dollars.

Despite the alarm and headlines it generated, the WannaCry attack was neither unique nor especially surprising. In today’s connected world we have almost become accustomed to these kinds of hostile acts. Yahoo. Equifax. Ashley Madison. The list goes on. Technology has catalyzed big changes in our conception of crime: while the word still attaches itself to physical infringements like theft and assault, “crime” now covers a broad range of clandestine activities, including so-called cybercrimes.

For those born into Generation Z or later, a “hacker” is like a “robber” or a “mugger,” but…well…online.

The problem with this conception, says former Yahoo and Facebook security chief Alex Stamos, is that not all criminal or threatening behavior online is a hack. Not every troubling act is the latest WannaCry. So, he implies, we need to reimagine the definition of cybercrime and, in turn, of cybersecurity.

Internet trolls can destroy human lives, but we never refer to them as cybercriminals.

Speaking at Disrupt SF, Stamos declared, “The vast majority of harm that is caused by technology does not have any kind of interesting technical component. It is the technically correct use of the products we use to cause harm.”

His point is a strong one. Stamos cites harms like child abuse, harassment, and the encouragement of suicide and suicidal ideation as some of the most vicious consequences of our connected lives. By now we’re all (regrettably) familiar with these problems, yet they still don’t fit within our understanding of online crime. Internet trolls can destroy human lives, but we never refer to them as cybercriminals, nor tackle them as such.

The taxonomy of tech-enabled crime extends beyond our current inadequate conceptions and categories.

Alex Stamos, Stanford University professor and Facebook’s former security chief, calls for an expanded definition of cybercrime. (Image credit: TechCrunch/screenshot)

Of course, building a more comprehensive and inclusive picture of cybercrime isn’t going to be a breeze. (Stamos suggested we might even want to use another term.) For instance, there are WannaCry-style hackers, who use a variety of tools and are often labeled by the color of their hats. These are perhaps the most straightforwardly illegal actors (except when they’re legal). Then there are those lurking in the seedy underworld of the dark web, with all of its illicit, criminal, and often downright harrowing activity. Yet although the dark web isn’t indexed by search engines and can be challenging to access, it is legal and – importantly – not all bad; you can even join a reading club.

Then there are technological platforms, like those Stamos referred to, that are abused in ways their creators could not have predicted. They are legal platforms being used in legal ways, but they are ultimately being weaponized – as with the proliferation of disinformation on social media. Inventing and sharing memes is, in principle, a harmless act, and yet we are learning that it can have real-world consequences. (For more on these classes of online problems, tune into Episode 1 of the All Turtles Podcast’s Unscaled series (time code 17:15), at the bottom of this article.)

Can AI help counter cruel and malicious acts on the internet?

At present, artificial intelligence is being deployed with some success against the first category of threats – the “known unknowns” – but we need to think about how it could tackle newer problems like inauthentic voices, as well as the entirely non-technical threats Stamos highlighted. In short, we must consider how (or whether) AI can help us counter cruel and malicious acts on the internet.

If it can, Stamos insists it must be cultivated by thoughtful humans. At the moment, he says, there isn’t enough research exploring these kinds of issues. That needs to change.

Everyone building online products — and cybersecurity firms in particular — must begin to consider things that fall outside of the parameters of traditional information security, and learn from the ways current technologies have already been abused. It will take both bits and brains. Only when we start to construct a holistic view of all types of vulnerabilities and attacks can we begin to build true defenses and properly protect our online – and offline – worlds.  