
How to Think About AI Policy

As policymakers and regulators around the world grapple with recent developments in artificial intelligence, they should look to the European Union for a basic model of how to balance freedom and safety. The key is to focus not on the technology but on the risks likely to accompany its various uses.

BRUSSELS – In Poznan, some 300 kilometers (186 miles) west of Warsaw, a team of tech researchers, engineers, and child caregivers is working on a small revolution. Their joint project, “Insension,” uses facial recognition powered by artificial intelligence to help children with profound intellectual and multiple disabilities interact with others and with their surroundings, becoming more connected with the world. It is a testament to the power of this quickly advancing technology.

Thousands of kilometers away, in the streets of Beijing, AI-powered facial recognition is used by government officials to track citizens’ daily movements and keep the entire population under close surveillance. It is the same technology, but the result is fundamentally different. These two examples encapsulate the broader AI challenge: the underlying technology is neither good nor bad in itself; everything depends on how it is used.

AI’s dual nature informed how we chose to design the European Artificial Intelligence Act, a regulation focused on the uses of AI rather than on the technology itself. Our approach boils down to a simple principle: the riskier the AI application, the stronger the obligations for those who develop it.