
The AI Moment of Truth for Chinese Censorship

For years, China has assumed that it will have a structural advantage in the global AI race by dint of its abundance of data and limited privacy protections. But now that the field is embracing large language models that benefit from the free flow of ideas, the country's leadership is faced with a dilemma.

NEW HAVEN – In his now-classic 2018 book, AI Superpowers, Kai-Fu Lee threw down the gauntlet, arguing that China poses a growing technological threat to the United States. When Lee gave a guest lecture to my “Next China” class at Yale in late 2019, my students were enthralled by his provocative case: America was about to lose its first-mover advantage in discovery (the scientific expertise behind AI algorithms) to China’s advantage in implementation (big-data-driven applications).

Alas, Lee left out a key development: the rise of large language models and generative artificial intelligence. While he did allude to a more generic form of general-purpose technology, which he traced back to the Industrial Revolution, he didn’t come close to capturing the ChatGPT frenzy that has now engulfed the AI debate. Lee’s arguments, while making vague references to “deep learning” and neural networks, hinged far more on AI’s potential to replace human-performed tasks than on the possibilities for an “artificial general intelligence” that approaches human thinking. This is hardly a trivial consideration when it comes to China’s future as an AI superpower.

That’s because Chinese censorship inserts a big “if” into that future. In a recent essay, Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher – whose 2021 book hinted at the potential of general-purpose AI – make a strong case for believing we are now on the cusp of a ChatGPT-enabled intellectual revolution. Not only do they address the moral and philosophical challenges posed by generative large language models; they also raise important practical questions about implementation that bear directly on the scale of the body of knowledge embedded in the language being processed.
