
AI Has Not Yet Destroyed Democracy

Given that generative AI models capable of rendering life-like “deepfakes” are now accessible to everyone, it is understandable that many would worry about the implications for elections and democratic discourse. But during the “super election year” of 2024, the worst predictions were not borne out.

LONDON – With almost half the world’s population going to the polls, 2024 was dubbed a super election year, leading many experts to warn of a coming flood of political disinformation. After all, generative artificial intelligence has made it possible for anyone, anywhere, to produce lifelike “deepfake” images and video. Never have anti-democratic bad actors had such powerful tools for undermining free and fair elections.

Yet while AI-augmented disinformation has clearly proliferated online, it did not have a substantial destabilizing impact on democracy in 2024. The reason is not entirely clear. Perhaps social-media users have become more discerning, while fact-checkers and digital platforms have done a better job of curtailing the spread of falsehoods – with Elon Musk’s X (formerly Twitter) being an obvious exception.

To be sure, in the US presidential election, each side accused the other of trying to suppress free speech and undermine democracy. According to the leading US fact-checking site Politifact, both campaigns issued misleading or false statements, though the overwhelming majority came from Donald Trump. Nonetheless, the worst predictions about AI disrupting the democratic process were not borne out. More broadly, the results of the year's elections around the world were a mixed bag, but liberal and pluralist parties and candidates generally exceeded expectations.

In our book Spin Dictators, Daniel Treisman and I point out that most people worldwide (or at least the majority of respondents to the World Values Survey and other similar polls) favor democracy over any alternative model of governance. That is why political leaders tend to cater to this preference by holding elections and allowing some independent media. While elections in many countries are neither free nor fair, the fact that even non-democratic leaders choose to hold them demonstrates the popularity of voting. By the same token, a strong performance for pro-democratic forces should be considered the norm, not the exception.

But hasn’t digital media corroded democratic discourse? In 2019, the New York University social psychologist Jonathan Haidt and the essayist Tobias Rose-Stockwell published an influential article titled “The Dark Psychology of Social Networks,” which warned that the leading social-media platforms’ ad-based business model was promoting attention-grabbing content. Since truth can seem mundane compared to sensationalist falsehood, ad-based platforms have a propensity to fuel political disinformation and polarization. Meanwhile, many other scholars have linked this model to the rise of false information on social media in the 2010s, and to its use by non-democratic actors.

But technology companies have taken some steps to address this problem. To mitigate the reputational costs of being leading disseminators of disinformation, most social-media platforms established “trust and safety” departments, invested in content moderation, and engaged in self-regulation. They trained algorithms to identify misinformation (material that is simply inaccurate) and disinformation (deliberately inaccurate material that is meant to mislead), and referred flagged posts to certified human fact-checkers.


Randomized controlled trials from 2020 indicate that these measures may have been effective. While a 2018 study found that Facebook usage led to political polarization and reduced well-being, similar studies in 2020 found no such effects or only minor ones.

Other research has examined the way people process and share false news. When evaluating partisan messages publicly, people are more likely to support their party’s stance, possibly to signal loyalty or to influence others. But when asked privately and offered an incentive to assess a message’s truthfulness, partisans are more likely to identify the facts correctly and to refrain from sharing such posts.

While social-media platforms have indeed spread misinformation – some of it genuinely persuasive – the public launch of generative AI platforms raised new and serious concerns. AI can now produce highly convincing audio and video forgeries that are all but impossible to distinguish from real footage.

Given that this technology is accessible to everyone, it is understandable that many would worry about the implications for elections. Yet so far, the dog has barked, not bitten. While Russia and other hostile state and private actors pursued various disinformation and election-interference strategies in the US and elsewhere, there is no substantial evidence that generative AI or deepfakes played a pivotal role in determining any outcomes.

This may be because political operatives have yet to master the technology’s use, or because we have yet to study its impact thoroughly. But another possibility is that the experience of the 2010s taught social-media users to be more wary of what they encounter online. We certainly need more research but, in the meantime, we can be a little less fearful of what AI will mean for public discourse and democratic governance.
