In this high-stakes election year, it’s never been more important that voters have unimpeded access to reliable and truthful information about the candidates, the issues and the voting process. Amid an already fraught information ecosystem, artificial intelligence (AI) adds an additional alarming layer of risk.
That’s why Aspen Digital’s AI Elections Advisory Council created three risk checklists focused on areas where AI tools make it easier for bad actors to discourage and disinform voters: hyperlocal voter suppression, language-based influence operations, and deepfaked public figures.
The Advisory Council is a non-partisan group of civil society and technology leaders taking steps to build democratic resilience in the face of AI. The Council is chaired by Alondra Nelson, Klon Kitchen, and Kim Wyman. It is part of our ongoing AI Elections Initiative, which works to secure US elections in November and beyond.
“Understanding the threat is the first line of defense for our elections. But we all have a role in responding. We already place a huge burden on local election officials and volunteers to defend our democracy. This guidance spells out the role voters, the news media, local leaders — and most importantly, the tech companies themselves — can play in the weeks, days, hours, and minutes leading up to the polls closing. We can limit AI risks and keep our elections fair and safe.”
— Alondra Nelson
“State and local election officials are the backbone of our system and AI challenges won’t stand in the way of free, fair, and trustworthy elections. These checklists equip election officials and others with the information they need before November.”
— Kim Wyman
“In the noisy space of AI and elections, we’ve zeroed in on key areas where AI misuse adds unique risk. The actions we outline will help communities counter bad actors who want to mislead and discourage voters.”
— Klon Kitchen
To help ground public discussion and leadership action in the facts underlying specific AI concerns this election year, these new AI Election Risk Checklists center on:
- Hyperlocal Voter Suppression: For years, bad actors have attempted to impede voting by spreading false information about when, where, and how to vote. AI tools can generate convincing content quickly, including personalized details and interactive exchanges that add credibility to false information. These tools make text message campaigns, interactive robocalls, and fake local news websites cheaper to run at scale.
- Language-Based Influence Operations: Artificial intelligence makes it easier to create content in any language, using automated translation tools. While these tools can be beneficial in many settings, in the wrong hands they can make spreading falsehoods easier, faster, and harder to detect.
- Deepfaked Public Figures: As artificial intelligence improves, it has become easier to create convincing images, audio, and video depicting public figures saying or doing something that they did not.
These risks are not inevitable. The checklists detail steps that election administrators, social media and messaging platforms, AI labs and companies, news media, advocates, and civil society groups should take to mitigate the harms that AI could pose to our democracy.
As Americans prepare to vote this November, we must not let fears of artificial intelligence deter us from participating in our democracy. Bad actors might try to keep us home this election, but they can only succeed if we let them. At Aspen Digital, we’re committed to partnering with those dedicated to election resilience to keep our democracy strong and to help ensure that every American has the information and access they need to cast their vote.
The AI Elections Advisory Council consists of:
- Alexandra Reeve Givens (Center for Democracy & Tech. – Pres. & CEO)
- Alexandra Sanderford (Anthropic – Head of Policy and Enforcement)
- Arjun Gupta (TeleSoft – Managing Partner)
- Becky Waite (OpenAI – Global Elections)
- Ben Scott (Reset – Executive Director)
- Brad Carson (Americans for Responsible Innovation – President)
- Brian Hooks (Stand Together – Chairman & CEO)
- Chris Krebs (SentinelOne & CISA – Fmr. Director)
- Claire Wardle (Brown – Professor)
- Damon Hewitt (Lawyers’ Committee for Civil Rights Under Law – Pres. & ED)
- Danielle K. Citron (UVA Law – Professor)
- Dave Willner (Consultant & OpenAI – Fmr. Head of Trust & Safety)
- David Becker (Elections Innovation Center – ED & Founder)
- David Vorhaus (Google – Director, Global Elections Integrity)
- Gary Marcus (Social Scientist)
- Ginny Badanes (Microsoft – Democracy Forward, Director)
- Irene Solaiman (Hugging Face – Head of Global Policy)
- Jane Harman (Fmr. U.S. Rep. for CA-36 & Wilson Center – President)
- Jennifer Morrell (Election Group – Chief Executive Officer)
- Joe Amditis (Center for Cooperative Media – Assistant Director)
- Justin Erlich (TikTok – Global Head of Issue Policy)
- Kelly Born (Packard – Democracy, Rights, and Governance Director)
- Larry Norden (Brennan Center – Senior Director)
- Maya Wiley (Leadership Conference on Civil & Human Rights – Pres. & CEO)
- Michele Jawando (Omidyar Network – SVP)
- Nate Persily (Stanford Law – Professor)
- Neil Chilson (Center for Growth & Opportunity – Sr. Research Fellow)
- Nick Penniman (Issue One – Founder & CEO)
- Raffi Krikorian (Emerson Collective – CTO)
- Rebecca Finlay (Partnership on AI – CEO)
- Sam Gregory (WITNESS – Executive Director)
- Thomas Rid (Johns Hopkins – Professor of Strategic Studies)
- Vilas S. Dhar (McGovern Foundation – President)