More than 2 billion people are eligible to participate in major global elections that will occur throughout 2024. With anti-democratic movements deepening their grip, the stakes could not be higher.
Trust in democratic institutions, and in facts themselves, faced headwinds long before the public gained access to new generative AI tools. While the underlying technology is not entirely new, OpenAI’s public launch of ChatGPT in November 2022 unleashed a mix of euphoria and hand-wringing from a public coming to terms with such capable tools.
There is no firm consensus on whether generative AI threats in the civic context represent a difference in degree or a difference in kind. Some suggest the wide availability of fast-evolving AI tools simply exacerbates familiar misinformation challenges from familiar bad actors. But others cite the exponential rate of technological improvement and the promise of ever-greater speed, scale, and sophistication as reason enough to expect, and to work to counter, a dramatic erosion of trust across democratic institutions, including elections.
Our research at Aspen Digital yielded nine risk factors:
1
Siloed Expertise
In our conversations, we found that elections officials are not up to date on AI capabilities and are unlikely to know where they can turn for help. The AI labs and some tech companies, meanwhile, are not attuned to the challenges elections officials face. “There’s very little understanding about how democracy works,” said an expert who engages regularly with AI labs and tech companies. Dots are not being connected among AI experts, mis- and disinformation specialists, elections officials, and policymakers. This is certainly true in the US, and we expect even greater disparities globally.
2
Public Readiness
Experts doubt the public will be resilient in the face of AI tools, which some expect to “flood the zone” with believable falsehoods during crises. Even if the public only infrequently encounters AI-generated content, a surge in press coverage of AI capabilities might be enough to trigger reactions that affect civic behavior, including an overall erosion of public trust. As a result, people may revert to sources they already trust regardless of veracity, or reject factuality in general, which lets bad actors dismiss even authentic content as fake, a phenomenon known as the “liar’s dividend.” These outcomes do not require personal exposure to fake content; they may occur simply because the public knows that content could be fake.
3
Inadequate Platform Readiness
Over the last two years, major platforms have cut staff across integrity operations and offered less transparency to media and researchers. Experts told us generative AI is likely to further strain these already-taxed resources. The capacity to generate large volumes of content at speed may overwhelm fact-checking efforts, even as the ability to produce unlimited variations of the same underlying claim may evade integrity tools built to prioritize the virality of a particular post rather than the underlying claim.
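To see why post-level tooling struggles here, consider a minimal sketch of a hypothetical fact-checking pipeline (the claim text, hashing scheme, and overlap measure are all invented for illustration): exact fingerprinting catches verbatim reposts of a debunked claim, while a trivial paraphrase of the same claim slips past, suggesting why claim-level signals such as semantic similarity matter.

```python
import hashlib
import re

# Hypothetical debunked post logged by a fact-checking team (invented text).
DEBUNKED = "Polling station at Oak Elementary is closed today."

def post_fingerprint(text: str) -> str:
    """Post-level check: hash of normalized text, so only verbatim copies match."""
    return hashlib.sha256(text.lower().strip().encode()).hexdigest()

KNOWN_BAD = {post_fingerprint(DEBUNKED)}

def word_overlap(a: str, b: str) -> float:
    """Crude claim-level signal: Jaccard overlap of the two posts' word sets."""
    wa = set(re.findall(r"\w+", a.lower()))
    wb = set(re.findall(r"\w+", b.lower()))
    return len(wa & wb) / len(wa | wb)

variants = [
    "Polling station at Oak Elementary is closed today.",             # verbatim repost
    "Heads up: the Oak Elementary polling station is closed today!",  # paraphrase
]

for v in variants:
    exact = post_fingerprint(v) in KNOWN_BAD
    print(f"exact match: {exact}, claim overlap: {word_overlap(v, DEBUNKED):.2f}")
# The verbatim copy is flagged; the paraphrase is not, despite high word overlap.
```

An AI system that cheaply emits thousands of such paraphrases forces integrity teams from the cheap first check toward the far costlier second.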
4
Slow Moving Regulation
The EU recently enacted regulations that hold platforms accountable for “harmful content” under threat of financial penalty, and it is moving quickly to create an “AI Act” that could have broad implications worldwide. The AI Act, along with a joint US-EU effort to create a transatlantic AI Code of Conduct, is still under consideration, and neither would be adopted in time to affect the 2024 election cycle. In the US, many efforts are underway at the local and state levels, but federal policy is not expected before the November elections.
5
Increasing Quality of AI-Generated Media
As AI-generated content has increased in quality, visual instinct alone has become increasingly unreliable. Consequently, policymakers and others have shifted their mitigation efforts toward overt labeling of AI-generated content and digital signatures, so-called “watermarking” technologies that are still in their infancy.
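As a rough, hypothetical sketch of the digital-signature idea (this mirrors no specific standard; the key, fields, and scheme below are invented, and real provenance systems use public-key cryptography and standardized manifests), a generator could attach a signed record to content, and a verifier could later confirm the content is unaltered:

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the content generator; a sketch only,
# not a production scheme.
SIGNING_KEY = b"demo-key-not-for-production"

def make_manifest(content: bytes, generator: str) -> dict:
    """Attach a signed provenance record to a piece of content."""
    payload = {
        "generator": generator,
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that the manifest is authentic and still matches the content."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    body = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and claimed["sha256"] == hashlib.sha256(content).hexdigest()
    )

image = b"...raw image bytes..."
manifest = make_manifest(image, generator="example-model")
print(verify_manifest(image, manifest))            # True: provenance intact
print(verify_manifest(image + b"edit", manifest))  # False: content was altered
```

One known limitation, consistent with the “infancy” caveat above: metadata-style signatures can simply be stripped from a file, so the absence of a manifest proves nothing about a piece of content.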
6
Scaled Distribution at High-Speed
Until recently, substantial resources were needed to draft convincing misinformation or to effectively alter audio/visual content; technical expertise and language requirements prevented some bad actors from creating and distributing large volumes of content. AI dramatically lowers these barriers, allowing people to generate high-quality content that, for example, restates the same false claim in many different ways or depicts a fake event from multiple camera angles.
7
Message Targeting & Hyperlocal Misinformation
Generative AI may supercharge targeting capabilities by allowing creators to dramatically scale so-called “A/B testing,” producing so many variations of content that targeting models grow increasingly robust as users engage with particular messages. Some believe these capabilities will result in such granular targeting that messages will essentially be honed to particular psychological profiles, what some have called “superhuman persuasion.”
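As a concrete sketch of that scaled A/B-testing loop (the message variants, engagement rates, and parameters below are all invented for illustration), a simple epsilon-greedy selector shifts distribution toward whichever variant an audience engages with most; with AI-generated variants, the same mechanism could be pointed at ever-narrower audience segments:

```python
import random

# Hypothetical message variants a generative model might mass-produce,
# with unknown per-segment engagement rates that we simulate here.
VARIANTS = ["variant A", "variant B", "variant C"]
TRUE_ENGAGEMENT = {"variant A": 0.02, "variant B": 0.05, "variant C": 0.11}

def simulate_click(variant: str) -> bool:
    """Stand-in for a real user interaction with the shown message."""
    return random.random() < TRUE_ENGAGEMENT[variant]

def epsilon_greedy(trials: int = 5000, epsilon: float = 0.1) -> dict:
    """Occasionally explore variants at random; otherwise exploit the current best."""
    shows = {v: 0 for v in VARIANTS}
    clicks = {v: 0 for v in VARIANTS}
    for _ in range(trials):
        if random.random() < epsilon:
            v = random.choice(VARIANTS)  # explore
        else:
            # exploit: pick the variant with the best observed click rate
            v = max(VARIANTS, key=lambda x: clicks[x] / shows[x] if shows[x] else 0.0)
        shows[v] += 1
        clicks[v] += simulate_click(v)
    return shows

print(epsilon_greedy())  # traffic concentrates on the highest-engagement variant
```

The loop itself is the same one used to tune benign marketing copy; the concern experts raise is what happens when the variant pool is effectively unlimited and the audience segments are psychological profiles.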
AI may also generate compelling content that appears credible simply because it references highly localized information (“hyperlocal misinformation”), such as the name of the school where a precinct is located or the names of streets and neighborhoods. Some are concerned that people will use AI to create hyperlocal misinformation about conditions at critical polling locations or about safety in certain places on Election Day. The risk is particularly acute given improvements across language groups.
8
Automated Harassment
Bad actors may create harassing content targeting elections administrators, activists, journalists, and other civic leaders, as well as civic topics, for a number of reasons: to intimidate, to suppress the algorithmic distribution of a post by adding large volumes of toxic comments, or to sway opinion during a crisis by appropriating particular hashtags.
9
Cybersecurity of Elections Infrastructure
Experts we spoke with raised concerns that generative AI is a boon for social engineering scams, including phishing attacks; AI-enabled audio impersonation, for example, could spoof official communications from superiors to poll workers. AI capabilities are also expected to enhance malware as fast-evolving code generation and analysis features are increasingly integrated into AI tools.
These developments underscore the urgent need for coordination, prioritization, and accountability across all sectors with stakes in a shared democratic future. The coming months will require policymakers, tech companies, and civil society to take responsible action in the face of evolving social and technological shifts in a critical election year.