Writing in the New York Times recently, venture capitalist Kai-Fu Lee signaled an important, oncoming change in the way we think about artificial intelligence. We are graduating, he cautioned, from an age of discovery and vision into a more practical era of implementation.
Lee is promoting his new book, titled A.I. Superpowers: China, Silicon Valley, and the New World Order, and he suggests that this transition from lab to launchpad may naturally privilege Chinese advantages—like data abundance and government investment—above the research capabilities and “freewheeling intellectual environment” of the U.S.
Though Lee—who primarily invests in Chinese tech companies—is keen to praise the country’s evolving attitudes toward technology, this conversation need not deteriorate into talk of an “arms race”—indeed, it shouldn’t. This is about anticipating change and translating it into meaningful action to accommodate and cultivate new AI products within our towns, cities, workplaces, hospitals, and homes.
If Lee is correct, and AI development is about to shift away from the “purely digital world to the physical one,” every jurisdiction will need new and more advanced conversations focused on adapting infrastructure and institutions to more easily absorb the intelligent systems we already have.
This isn’t about new AI; it’s about optimizing and capitalizing on the work we’ve already done.
Lee gives an example of this adaptive thinking in his New York Times essay: “The Chinese government understands that… if we want autonomous cars to reduce accidents, we may need to embed sensors in our roads. If we want A.I.-powered diagnoses to spot cancer earlier, we may need hospital administrators to develop data-sharing agreements that protect privacy while also allowing research to be conducted.”
The focus is on policy and approach, not big sexy ideas and groundbreaking research.
Many would see China’s adaptive advantage here as obvious, given the pervasive nature of its government, but a recent report published by Oxford University’s Future of Humanity Institute seeks to debunk the assumption that China’s approach to AI is “defined by its monolithic nature.”
On the contrary, the report’s author Jeffrey Ding argues that AI development is much more dispersed: “While the central government plays an important guiding role, bureaucratic agencies, private companies, academic labs, and subnational governments are all pursuing their own interests to stake out their claims to China’s AI dream.”
There is also the popular conception that AI development of any sort in China is—and will continue to be—unhampered by “obstructive” conversations about ethics, privacy, and safety concerns. Again, Ding contends that this is simply untrue, claiming that “substantive discussions about AI safety and ethics are emerging in China.”
Ding cites the Chinese government’s intentions as stated in the State Council’s AI plan, writing: “The document stated that by 2025, China will have initially established AI laws and regulations, ethical norms, and beginnings of AI security assessment and control capabilities; and by 2030, China will have constructed more comprehensive AI laws and regulations, as well as an ethical norms and policy system.”
Those who think that China’s advantage in this new era of implementation arises solely from the enforced will of big government, or from an unambiguously cavalier attitude to ethics, are making a sweeping, simplistic dismissal of what’s really going on.
Lee concludes by saying that countries like China and the U.S.—and no doubt others—have much to learn from one another, and that they should make attempts to do so. He reflects, “Chinese researchers, start-ups and AI companies should let their imaginations run a little wilder, placing long-term bets that give them a chance of breaking new ground rather than playing catch-up… And American policymakers could move away from a hands-off stance toward AI, looking instead to actively adapt the nation’s physical structures and public institutions to better mesh with new technology.”
This suggests a new way forward. Developers, governments and the public could do a lot worse than to observe the methods of others and organize for a new approach to making the AI products of our future. If we want AI to solve big problems and improve lives globally, then cooperation is needed, not an AI arms race between superpowers.