Aspen Digital

Inaugural A.I. Elections Newsletter

Curated resources and readings, just for you

February 22, 2024

Tom Latkowski

Program Associate

Josh Lawson

Director, A.I. & Democracy

Welcome to the inaugural AI Elections Newsletter, a curated collection of timely resources and readings. Plus, you’ll get updates on Aspen Digital’s AI Elections Initiative.

Know others who would like to receive these monthly emails? Invite them to sign up here.

On the sidelines of the Knight INFORMED Conference in Miami, we convened 100 participants from civil society groups, academia, and foundations to discuss how civil society can help mitigate the impacts of AI on the 2024 elections.

The session focused civil society conversation on the risks we find most actionable, and it has supported subsequent coordination on key election-resilience work.

We presented to the National Association of Secretaries of State Winter Conference in Washington, DC on how AI will impact election administration. Around 250 attendees learned how to mitigate the risks posed by AI.

We published a Q&A with our team on Preparing for the AI Election Impact.

Aspen Digital and the Institute of Global Politics at Columbia’s School of International and Public Affairs (SIPA) will co-host the Conference on AI & Global National Elections (March 28).

Confirmed speakers include former Secretary of State Hillary Clinton, Vice President of the European Commission Věra Jourová, CEO of Rappler and journalist Maria Ressa, FEC Chair Dara Lindenbaum, former US Secretary of Homeland Security Michael Chertoff, and VP of Global Affairs at OpenAI Anna Makanju.

Along with Reid Hoffman, we will host a day of briefings and focused conversations on What Democracy Needs from Tech (April 25). Attendees will include tech companies, technologists, and investors, together with elections officials and security experts.

It’s invite-only, so please reach out.

Artificial Intelligence’s Threat to Democracy | Foreign Affairs

Generative AI could supercharge election risks, like disinformation and cyber threats, that could affect voter registration, vote casting, and reporting of votes.

– Jen Easterly, Scott Schwab, and Cait Conley

Biden officials confront limits of federal response in exercise preparing for 2024 election threats | CNN Politics

The Biden administration is preparing its response to viral disinformation, violence at election sites, and other possible threats to the 2024 election.

– Sean Lyngaas

AI-generated voices in robocalls can deceive voters. The FCC just made them illegal | AP News

Featuring Aspen Digital’s Josh Lawson

The FCC has banned robocalls that contained AI-generated voices.

– Ali Swenson

AI could disrupt the 2024 US presidential election. What’s Congress doing about it? | CNN Business

Congressional experts are skeptical that Congress will pass any AI legislation before November.

– Brian Fung

AI-powered misinformation is the world’s biggest short-term threat, Davos report says | AP News

The World Economic Forum found that “false and misleading information” is the “top immediate risk to the world economy” in a new report.

– Kelvin Chan

‘Open season’ for AI to impact 2024 election | CNN Politics

A fake Joe Biden robocall went out to New Hampshire voters this month, raising concerns of the impact AI will have in 2024.

– Sara Fischer

Labeling AI-Generated Images on Facebook, Instagram and Threads | Meta

Meta announced new steps to identify and label AI-generated images on its platforms.

– Nick Clegg

Inside the battle to label digital content as AI-generated media spreads | Axios

Google joined the Coalition for Content Provenance and Authenticity (C2PA), alongside Microsoft, Meta, and Adobe.

– Ryan Heath

It’s So Easy To Make AI Politicians That We Made Biden Legalize Weed | HuffPost

HuffPost created a deepfake of President Biden to demonstrate the technology.

– Matt Shuham

Video shows fabricated results for Indonesians voting overseas in 2024 presidential election | AFP Indonesia

An AI-generated video claimed to show the results of ballots cast by Indonesians voting overseas in the country’s presidential election.

Taiwan’s Early Warning for the Future of Tech | Council on Foreign Relations

Though Chinese disinformation in Taiwan dates back to at least 2020, it “increased by 40 percent since last year” in the runup to Taiwan’s January 13 presidential election.

– Moira Whelan

Voice cloning tech to power 2024 political ads as disinformation concerns grow | VentureBeat

Startups are entering the political ad market, helping candidates tailor specific versions of ads to different groups of voters.

– Sharon Goldman

What AI Will Do to Elections | Foreign Policy

Experts worry that many tech platforms are less prepared for the 2024 election cycle than they were in previous years, right as generative AI becomes a major issue.

– Rishi Iyengar

If you’re looking for more detail, you can read our full notes below, or of course read the articles themselves. Feel free to send us suggestions at tom.latkowski@aspeninstitute.org.

Full Notes

  • Generative AI could supercharge election risks, like disinformation and cyber threats, that could affect voter registration, vote casting, and reporting of votes
  • County and state officials need support to respond; call to action for government, voting equipment companies, AI companies, media, and voters
  • Deepfakes: Audio deepfakes were already used in the 2023 elections in Slovakia and Argentina
  • Cyberattacks: AI will make cyberattacks easier because of data aggregation (e.g., a DDoS attack, where AI-enabled bots flood an election office’s website, blocking real people from getting information about voting)
  • Harassment: AI could make possible the large scale harassment of election workers
  • Disinformation: Targeted lies giving false information about voting or about voter fraud
  • But election workers are good at their jobs and will adapt. E.g., “there is no evidence that any voting system lost any votes—or was compromised in any other way—in any national election” since 2017, when current security efforts ramped up. Efforts include stronger “digital and physical controls,” more measures to “detect malicious activity,” mandates that “vendors take certain security precautions,” and shifting to “.gov” websites
  • The article lists various measures election officials can take to strengthen protections (e.g., “tabletop exercises” to prepare for sudden changes)
  • Companies also have a role to play in protecting our elections – in particular, helping validate legitimate content
  • Federal officials held tabletop exercises in December to plan their response to threats to the 2024 election, including foreign disinformation operations and violence at polling places
  • Officials discussed whether publicly discussing disinformation would amplify it, ultimately favoring a softer response, letting state officials lead
  • Chinese President Xi recently told President Biden that China would not interfere in the US elections, though American officials remain deeply concerned
  • In a unanimous ruling, the FCC has outlawed robocalls containing AI-generated voices
  • The ruling allows call recipients and state attorneys general to file lawsuits and empowers the FCC to levy fines against perpetrators (up to $23,000 per call)
  • The ruling comes in the wake of a robocall in New Hampshire containing AI-generated audio intended to sound like President Biden
  • Congressional experts are pessimistic about the chances of passing any AI legislation before this year’s elections
  • Majority Leader Schumer may prioritize a bill related to the impact of AI on elections, but aides are uncertain whether it can pass before November
  • Senator Schumer has also expressed interest in US innovation, national security, and transparency
  • One group of Senators proposed legislation to ban deepfakes used to influence US elections

  • The World Economic Forum wrote that “false and misleading information” is the “top immediate risk to the world economy” in a new report

  • In 2016, Russia had to spend a lot of money on their disinformation efforts. But today, it’s much cheaper.
  • AI enables disinformation that is personalized, targeted, and done at-scale.
  • Legitimate outlets are cutting their staff and social media companies are cutting their trust and safety teams – this is a perfect storm.
  • The “liar’s dividend” is a huge concern – bad actors can deny that real things happened by claiming they came from AI.
  • OpenAI published a statement detailing its plans to help users access accurate information about the 2024 elections
  • DALL·E: This included steps to help people identify whether an image was created using DALL·E
  • ChatGPT: OpenAI announced a plan to let ChatGPT access and link to current reporting, and will direct voting-related questions to CanIVote.org
  • Political uses: OpenAI doesn’t allow people to use its tools to build political campaigning applications or to deter people from voting
  • Meta will begin labeling AI-generated images posted to Facebook, Instagram, or Threads when they are detectable using industry standards
  • “Photorealistic” AI images created using Meta AI include both “visible markers” and “invisible watermarks”
  • Meta notes that it’s possible for some AI-generated content to avoid detection and says they’re continuing to invest in research on mitigations
  • Google became the latest member of the Coalition for Content Provenance and Authenticity (C2PA), alongside Microsoft, Meta, and Adobe
  • The C2PA aims to attach “content credentials” to images to verify whether something was created by generative AI
  • Deepfake audio technology has become widely accessible, raising concerns regarding political use cases.
  • The Huffington Post created deepfakes of President Joe Biden and former President Donald Trump to demonstrate the technology.
  • Josh Lawson of Aspen Digital recommended that voters turn to trusted sources of information to verify content.
  • Prior to Indonesia’s February presidential election, an AI-generated video circulated that ostensibly showed the results of votes cast by Indonesians overseas
  • The commissioner of KPU, Indonesia’s elections body, said in a statement that the video was not true
Disrupting Elections
  • China has used information operations to influence Taiwanese elections since at least 2020
  • Disinformation has “increased by 40 percent since last year”
  • Recently, Chinese disinformation has focused more on local concerns and non-mainstream platforms
  • In August, Meta took down “more than seven thousand accounts… linked to a Chinese influence operation”
Disrupting Internet
  • China wants to transition internet governance away from the current “multistakeholder system” to a “system of cyber sovereignty,” allowing them to box out Taiwan
  • Taiwan is an observer member of the Freedom Online Coalition
  • In 2023, China was suspected of severing undersea cables providing internet service to a Taiwanese island
Taiwanese Civic Tech
  • Taiwan has a strong, active civic tech community; great opportunity for tech companies to partner
  • An AI startup called “Instreamatic” says that it’s expanding into the political advertising world
  • Instreamatic requires confirmation that advertisers have permission to use someone’s voice
  • In addition to generating brand-new content, this will also make it easier for candidates to record one version of an ad and target slightly different variants to different audiences
  • X (formerly Twitter) has dramatically reduced its trust and safety team, which is responsible for limiting disinformation about elections
  • Experts worry that many tech platforms are less prepared for the 2024 election cycle than they were in previous years
  • This problem is worse outside the West, where tech companies more often lack the cultural context to effectively enforce their policies
  • Artificial intelligence exacerbates this problem, though some companies do have policies regarding disclosure of AI-generated content when used for politics
  • AI could also have a role in solutions. YouTube, TikTok, and Meta have all reported using AI to flag and take down content