Aspen Digital

April A.I. Elections Newsletter

Curated resources and readings, just for you

April 23, 2024

Tom Latkowski

Program Associate

Josh Lawson

Director, A.I. & Democracy

Welcome to the second edition of Aspen Digital’s AI Elections Newsletter, a curated collection of news, resources, and updates from the AI Elections Initiative.

Know others who would like to receive these monthly emails or follow our work? They can sign up and learn more about our work at aielections.aspendigital.org.

We launched the AI Elections Advisory Council, composed of top leaders from across the tech industry and civil society. Co-chairs Alondra Nelson, Klon Kitchen, and Kim Wyman are leading the effort to provide a constructive space for leaders and experts to discuss critical AI issues affecting election resilience.

We cohosted a public forum on AI’s Impact on Global Elections with the Institute of Global Politics at Columbia University’s School of International and Public Affairs. Speakers voiced concern over diminished social trust and discussed how bad actors are leveraging new technologies to contaminate online information and interactions. 

Speakers included former Secretary of State Hillary Clinton; former Secretary of Homeland Security and Co-Founder and Executive Chairman of Chertoff Group Michael Chertoff; Co-Founder of Schmidt Futures and former CEO & Chairman of Google Eric Schmidt; Vice President of Global Affairs at OpenAI Anna Makanju; CEO of Jigsaw Yasmin Green; Nobel Peace Prize-Winning Journalist Maria Ressa; Vice-President for Values and Transparency, European Commission Věra Jourová; and others.

Watch (C-SPAN or Aspen Digital) and read (Aspen Institute, WSJ, CNET, and DigiDay) about the gathering.

We teamed up with Archewell Foundation, Future US, and Issue One for a gathering of screenwriters and other creatives around messaging efforts to help prepare the public for AI incidents ahead of the election. We expect PSAs and other public campaigns to benefit over the coming months from the many ideas generated at the event.

Aspen Digital and Reid Hoffman will host a day of AI-focused briefings and critical conversations on What Democracy Needs from Tech (April 25 in Los Altos, CA). Participants include technology professionals whose products will shape the information environment ahead of November. This is an invite-only, closed-door event, but please reach out if interested in attending.

A sampling of confirmed speakers includes: Reid Hoffman; Dario Amodei, CEO of Anthropic; Secretaries of State Scott Schwab (R-KS) and Cisco Aguilar (D-NV); Cait Conley, Senior Advisor at CISA; Klon Kitchen, Managing Director at Beacon Global Strategies; Ginny Badanes, GM of Democracy Forward at Microsoft; and Professors Claire Wardle (Brown), Nathaniel Persily (Stanford), and Olivier Sylvain (Fordham & Columbia). 

Election disinformation takes a big leap with AI being used to deceive worldwide | AP News

AI deepfakes have had impacts in elections across Asia and Europe, raising concerns among experts.

– Ali Swenson and Kelvin Chan

Deepfakes, distrust and disinformation: Welcome to the AI election | Politico

Political experts are increasingly concerned at the ease with which fake images of political candidates can be created.

– Mark Scott

‘Disinformation on steroids’: is the US prepared for AI’s influence on the election? | US news | The Guardian

Amid an unsettled regulatory and tech landscape, experts worry that the US is unprepared for disinformation in 2024.

– Rachel Leingang

Clemson researchers have unearthed mock local news across the US linked to Russia | NYT

– Steven Lee Myers

Opinion: AI doesn’t have all the answers — especially this election season | LA Times

In an investigation, researchers found that chatbots answered election-related questions incorrectly about 50% of the time.

– Alondra Nelson and Julia Angwin

Trump supporters target black voters with faked AI images | BBC

AI-generated images of former President Trump spending time with black voters have recently circulated on social media.

– Marianna Spring

OpenAI, Meta, and other tech giants sign effort to fight AI election interference | Reuters

Twenty tech companies signed an agreement in February in an effort to mitigate the impacts of generative AI on global elections.

– Sheila Dang and Katie Paul

The Shortlist: Seven Ways Platforms Can Prepare for the U.S. 2024 Election | Protect Democracy

Protect Democracy published recommendations for what tech companies can do to mitigate the impact of AI on elections in 2024.

Generative AI Risk Factors on 2024 Elections | Munich Security Conference

Aspen Digital’s research identified key risk factors that will impact elections in 2024.

– Aspen Digital’s Vivian Schiller and Josh Lawson

The AI That Could Heal a Divided Internet | TIME

Using large language models, online platforms can rank content in different ways, giving sites the option to prioritize virtues like nuance and compassion over anger and disgust.

– Billy Perrigo

AI is changing how elections are fought, from deepfake endorsements to chatbot campaigners | ABC News

In countries like India, Pakistan, and Indonesia, deepfakes and other uses of AI have increasingly become the norm in elections.

– James Purtill

Deepfake democracy: Behind the AI trickery shaping India’s 2024 election | Al Jazeera English

Election deepfakes have become increasingly common in India, in advance of this year’s elections.

– Yashraj Sharma

Countering Disinformation Effectively | Carnegie Endowment

The Carnegie Endowment released recommendations for countering disinformation.

– Jon Bateman and Dean Jackson

Democratic operative admits to commissioning fake Biden robocall that used AI | NBC News

A Democratic political operative admitted to using AI to create a fake robocall of Joe Biden encouraging his supporters not to vote.

– Alex Seitz-Wald

NewsGuard launches suite of AI anti-misinfo tools | Semafor

In March, the fact-checking outlet NewsGuard launched a new anti-misinformation hub.

– Max Tani

Opinion: AI is turbocharging disinformation attacks on voters, especially in communities of color | LA Times

Immigrants and people of color are at high risk from AI-enabled disinformation.

– Bill Wong and Mindy Romero

Disinformation has a powerful impact on voting intentions | The Parliament Magazine

EU advocates are concerned about disinformation in the context of their June parliamentary elections.

– Ana Fota

If you’re looking for more detail you can read our full notes below, or of course read the articles themselves. Feel free to send us suggestions at tom.latkowski@aspeninstitute.org.

Full Notes

  • Experts argue that US law and regulations are unprepared for the impact of AI on our elections
  • Beyond just deepfakes, experts are concerned that the so-called “grandparent scam,” in which AI technology is used to mimic someone’s voice, could be deployed against a much wider set of targets
  • Also concerning is the “liar’s dividend”: the notion that because of fears of AI, people will stop believing true things they see or hear, allowing bad actors to avoid accountability
  • In February, 20 major tech companies signed an accord at the Munich Security Conference, committing to create shared standards for watermarking AI-generated content and invest in public awareness campaigns
  • In an investigation led by Alondra Nelson and Julia Angwin, researchers found that when asked election-related questions, chatbots answered incorrectly 50% of the time
  • Despite the fact that Google said its Gemini model would not respond to election-related questions, the investigators found that it did, and often gave false information
  • Aspen’s Vivian Schiller & Josh Lawson outline various risks at the intersection of AI and elections this cycle
  • Some supporters of former President Donald Trump have shared AI-generated images of Trump with black voters in an effort to convince them to support Republicans
  • Several false images have been widely shared on social media
  • The nonprofit group Protect Democracy announced its recommendations for mitigating the negative impacts of AI and technology on the 2024 elections
  • (1) Platforms should invest in policy, legal, and trust/safety teams, and maintain these teams through inauguration day
  • (2) Platforms should prominently offer accurate voting information
  • (3) Platforms should enact usage rate limits to reduce spam risks
  • (4) Social media platforms should limit the distribution of new accounts and suspicious accounts in the runup to election day
  • (5) Messaging platforms should ban “coordinated inauthentic behavior”
  • (6) Generative AI platforms should disclose content authenticity
  • (7) Generative AI platforms should prohibit their models from being used to disrupt elections, including through spreading falsehoods or intimidating officials
  • Jigsaw, a Google subsidiary, has used LLMs to create a set of tools that rank content based on characteristics like nuance, evidence-based reasoning, and compassion, rather than just on engagement.
  • Jigsaw plans to make these new tools available for free to developers, in the hope that they’ll be popular with smaller websites
  • Some people have raised concern that LLMs could bring biases to the ranking process, potentially further stigmatizing marginalized groups
  • In a paper published by Jigsaw, users preferred comments sorted by the new classification algorithms over comments sorted by engagement
  • Elections have already happened this year in Pakistan and Indonesia, and campaigning is ongoing in India, with numerous examples of AI usage
  • In Indonesia, candidates are using a tool called “Pemilu.AI” as a data analytics tool to help them understand the voting public. Some candidates have used the tool as a partial replacement for campaign consultants
  • Another Indonesian candidate — an ex-special forces commander who has faced allegations of human rights abuses — is using AI-generated imagery to present himself as more friendly via a cartoon avatar
  • In Pakistan, former Prime Minister Imran Khan is serving a prison sentence, but his party has released AI-generated speeches to mobilize his supporters
  • In India, campaigners are using a deepfake of Muthuvel Karunanidhi, a politician who died in 2018. In the deepfake videos, Karunanidhi praises his son’s political career. Momium, the firm behind the deepfakes, has also used their technology to translate political speeches into multiple languages
  • On the morning of last year’s legislative elections, an Indian opposition party posted a deepfake video online showing a state political leader encouraging his supporters to vote for their party
  • Many other Indian campaigns are deploying similar tactics, using deepfake videos to boost their chances of winning
  • Some Indian politicians have used deepfake technology to generate videos of themselves speaking in multiple languages
  • India has existing laws against defamation, but some experts have argued that they are insufficient to handle the onslaught of deepfakes
  • Researchers at the Carnegie Endowment put out a report on the most effective approaches to counter disinformation
  • The authors noted that while there is not a “silver bullet” to solving the disinformation problem, policymakers can help address the problem by supporting local journalism, prioritizing fact-checking and media literacy, labeling false social media content, changing social media algorithms, and investing in counter-messaging strategies
  • NBC News reported that Steve Kramer, a Democratic political operative, was the creator of the fake New Hampshire robocall which purported to be a recording of Joe Biden telling his supporters not to vote
  • Kramer admitted that he was behind the call, stating that his goal was to encourage regulators to take action on the impact of AI in elections
  • In March, the fact-checking outlet NewsGuard launched a new effort to minimize the uses of AI to harm elections
  • NewsGuard is investing in “fingerprinting” – identifying and labeling misinformation, information which can be fed into AI models to help them avoid sharing misinformation further
  • NewsGuard’s 2024 Elections Misinformation Tracking Center will be a hub for this work
  • After AI deepfakes in Asian and European elections (including in Moldova, Slovakia, and Bangladesh), experts are increasingly concerned about the global impact of AI-enabled disinformation
  • FBI Director Christopher Wray echoed this sentiment, stating that generative AI would help “foreign adversaries to engage in malign influence”
  • AI-enabled disinformation could be a particularly big problem for voters of color
  • Immigrants and communities of color are at particular risk for disinformation because they are more likely to face language barriers. With new AI tools, generating disinformation in foreign languages can be automated in ways that were previously impossible
  • Advocates are worried about the impact of AI-enabled disinformation in the runup to the June European Parliament elections
  • Generative AI makes it easier and faster to generate disinformation, leaving democracies vulnerable
  • Tech platforms, governments, and news organizations all have an important role to play in countering disinformation