Artificial general intelligence is always 30 years away

The first Blade Runner movie was released in 1982 and offered a prediction of the world in 2019, one with flying cars and an AI who is indistinguishable from humans. The second Blade Runner came out in 2017 and once again contained a prediction of the world roughly 30 years out: flying cars and AIs who are indistinguishable from humans.

For Zia Chishti, CEO of Afiniti, a company that uses AI to monitor human behavior, the pattern of people making 30-year predictions about artificial general intelligence (AGI) is nothing new, and pundits like Ray Kurzweil and Elon Musk proclaiming an AI apocalypse by 2045 — roughly 30 years from now — might as well be making science fiction movies. “It’s not going to happen,” Chishti said. “They’re all wrong.”

Zia Chishti, CEO of Afiniti, said people have been predicting AGI 30 years from now for more than 30 years. Chishti was onstage at FDDAY in Paris on September 25, 2018. (Image credit: All Turtles)

Speaking at France Digitale Day in Paris, Chishti debunked the idea that exponential growth in computing power will lead to corresponding increases in the abilities of AI. “If you have 10,000 hamsters, you have 10,000 hamsters. You don’t have one hamster with the processing power of 10,000 hamsters.” There’s a hardware-software dichotomy that few people, except those working in the field, bother to investigate. Instead, Chishti said, “We fool ourselves into thinking that increases in processing power result in machines as sentient as human beings.”

While hardware has reliably kept pace with Moore’s law (see chart below), Chishti argued there have been no breakthroughs in AI software that would justify confident predictions about AGI. “Machine learning and deep learning are just buzzwords. The stuff has been around for 20 years. They’re just pattern-recognition tools.”

Gordon Moore’s original graph: ‘The Number of Components per Integrated Function’ (Image credit: Intel/Our World in Data)

While it’s easy to view predictions about AI as either “good” (full of benefits and solving global problems) or “bad” (dystopias with robot overlords), the reality of what good and bad mean in practice is quite different.

Bad AI is largely limited to corporations not fully understanding how to build products with AI ethically and efficiently, according to Chishti. Meanwhile, examples of good AI are those being implemented in a disciplined fashion for specific use cases with clear, measurable returns. One way to tell the difference? Chishti said, “The problem AI solves should be easily understood by a 12-year-old.”