The Big Picture brings together a range of PS commentaries to give readers a comprehensive understanding of topics in the news – and the deeper issues driving the news.
Who Controls AI?
The challenge of regulating generative artificial intelligence has been a topic of fierce debate since the technology captured the world’s attention late last year. But when it comes to ensuring that generative AI advances the common good, the question of who controls it is just as important as how it is controlled.
For MIT’s Daron Acemoglu, Simon Johnson, and Austin Lentsch, workers themselves must play a central role in compelling their employers not to pursue mindless automation, but rather to use AI to augment human creativity, boost productivity, and ultimately drive widely shared prosperity. The ongoing Writers Guild of America strike will be an important litmus test: if striking Hollywood screenwriters fail, “other knowledge workers will stand even less of a chance of shaping the future of work and technology.”
Maria Eitel, Nike’s founding Vice President of Corporate Responsibility, places the onus for limiting the dangers of AI squarely on the companies developing and applying the technology. “After all,” she points out, “regulators and governments don’t fully understand how AI-based products work or the risks they create; only companies do.” But firms must be required to devise and commit to “credible action plans” to guide responsible innovation.
Princeton University’s Anne-Marie Slaughter and Ethos Capital’s Fadi Chehadé, for their part, call on leading “scientists, technologists, philosophers, ethicists, and humanitarians from every continent” to “come together to secure broad agreement on a framework for governing AI that can win support at the local, national, and global levels.” Institutions will be vital, but the type of institution needed will depend on the specific function in question, from ensuring AI’s safe development and use to spreading beneficial technologies.
According to Princeton’s Harold James, however, we can control only so much. Technology has “never stopped simply because some people wanted it to.” As it changes “how collective activity is conceptualized” – particularly through the “automation of war” – AI will also cause power to move “away from the people” and reshape “our understanding of legitimate authority.” Whatever we do, “AI changes everything.”