Aspen Digital

July A.I. Elections Newsletter

Curated resources and readings, just for you

July 30, 2024

Tom Latkowski

Program Associate

Welcome to the third edition of Aspen Digital’s AI Elections Newsletter, a curated collection of news, resources, and updates from our AI Elections Initiative.

Know others who would like to receive these monthly emails or follow our work? They can sign up and learn more at aielections.aspendigital.org.

Our AI Elections Advisory Council published three risk checklists on the key threats AI could pose to the 2024 US elections: hyperlocal voter suppression, language-based influence operations, and deepfaked public figures. The checklists recommend mitigations for election administrators, social media and messaging platforms, AI labs and companies, as well as news media, advocates, and civil society.

In August, we’ll host “Algorithmic Suppression: AI, Elections, & the Future State of Social Justice” in Washington D.C. This intimate gathering will bring together leaders from civil society, technology, and academia to discuss the AI-driven risks and challenges facing marginalized and underrepresented communities during the 2024 election cycle and beyond.

On September 4th, we’ll host a day-long series of briefings and discussions for editors, reporters, and columnists to help inform their coverage of the 2024 election, including the impacts of AI, changes to certification procedures, election security, and more.

Google’s and Microsoft’s AI Chatbots Refuse to Say Who Won the 2020 US Election | WIRED

The chatbots Gemini and Copilot won’t answer when asked who won the 2020 presidential election.

– David Gilbert

Tests find AI tools readily create election lies from the voices of well-known political leaders | AP News

Researchers were able to create deepfaked audio of political leaders using publicly available AI tools.

– Ali Swenson

FCC will consider rules for AI-generated political ads on TV and radio, but can’t touch streaming | AP News

The FCC is considering a new proposal to regulate the use of AI in political ads, though it is unable to regulate ads on streaming platforms.

– Ali Swenson

EXCLUSIVE: Sens. Lee, Lummis Introduce Bill As They Accuse FCC Of Meddling In Elections | MSN

Two Republican Senators introduced a bill to block the FCC’s proposed regulations of AI in political campaigns, accusing the FCC of interfering in the election process.

– Henry Rodgers

Political consultant behind AI-generated Biden robocalls faces $6 million fine, criminal charges | AP News

The FCC proposed a $6 million fine for the creator of the deepfake recording of President Biden, who also faces criminal charges of voter suppression, as well as a $2 million fine for the company accused of transmitting the calls.

– Holly Ramer and Ali Swenson

Five Myths About How AI Will Affect 2024 Elections | Tech Policy Press

Amid hype and confusion regarding the impacts of generative AI, the most credible threats come from expanding existing risks, requiring a wide array of mitigations.

– Irene Solaiman

Deepfakes Are Evolving. This Company Wants to Catch Them All | WIRED

With a rise in deepfake scams targeting many sectors, there is increasing investment in deepfake detection technology, which could play a role in elections as well.

– Will Knight

Ticked off: TikTok approves EU elections disinformation ads for publication in Ireland | Global Witness

To test platform content policies, researchers attempted to run ads with false information about how to vote in Europe, finding that all of the ads were approved to run on TikTok and some were approved to run on YouTube.

Europe’s elections test a landmark social media law | Washington Post

Though the European Union’s Digital Services Act (DSA) was written before the generative AI era began, it has already had some success at stemming voter suppression efforts.

– Cat Zakrzewski

A Small Army Combating a Flood of Deepfakes in India’s Election | The New York Times

Deepfakes were commonplace in India’s election, ranging from comedic content to genuine attempts at deception.

– Alex Travelli

WhatsApp Channels, used by millions, has no clear election rules | POLITICO

Despite rolling out public accounts, WhatsApp lacks the moderation that other Meta platforms like Facebook and Instagram have in place.

– Rebecca Kern

ChatGPT gave incorrect answers to questions about how to vote in battleground states | CBS News

When asked how to vote in several battleground states, ChatGPT repeatedly gave incorrect information, including about mail ballot deadlines and early voting rules.

– Haley Ott and Emmet Lyons

An AI mayor? OpenAI shuts down tools for AI political candidates | CNN Business

Citing their policy against using their technology for political campaigning, OpenAI shut down access to their tools for two “AI political candidates.”

– Samantha Murphy Kelly

If you’re looking for more detail, you can read our full notes below, or of course read the articles themselves. Feel free to send us suggestions at tom.latkowski@aspeninstitute.org.

Full Notes

Google’s and Microsoft’s AI Chatbots Refuse to Say Who Won the 2020 US Election | WIRED

  • When asked by researchers, Microsoft’s Copilot chatbot and Google’s Gemini chatbot wouldn’t say who won the 2020 US presidential election
  • Other chatbots, including OpenAI’s ChatGPT, Anthropic’s Claude, and Meta’s LLaMa, correctly answered that Joe Biden won the election
  • Representatives of Google and Microsoft reported that they are continuing to work on election-related issues, and that their chatbots refused to answer “out of an abundance of caution”
Tests find AI tools readily create election lies from the voices of well-known political leaders | AP News

  • In a test of several prominent AI voice tools, researchers found that they could create fake audio recordings of prominent US and EU politicians
  • One of the fake statements included a threat of violence at polling places and an encouragement for voters to stay home, raising fears that fake audio recordings of this sort could reduce turnout on election day
  • One audio platform acknowledged the issue, and said they are continuing to build safety measures into their product
FCC will consider rules for AI-generated political ads on TV and radio, but can’t touch streaming | AP News

  • The Federal Communications Commission (FCC) is considering a new rule to require disclosure when political ads use artificial intelligence, though the rule would apply only to television and radio advertising
  • Commissioners are continuing to discuss the details of the proposal, including the definition of “AI-generated content” and whether any disclosure would have to take place on air
EXCLUSIVE: Sens. Lee, Lummis Introduce Bill As They Accuse FCC Of Meddling In Elections | MSN

  • Two Republican Senators introduced a bill to stop the FCC from putting in place its proposed rules regarding the use of AI by political campaigns
  • The Senators argued that the FCC’s proposal would be “a clear overstep of their regulatory authority”
Political consultant behind AI-generated Biden robocalls faces $6 million fine, criminal charges | AP News

  • The political consultant who generated a fake recording of President Biden encouraging New Hampshire primary voters to stay home faces a $6 million fine and up to 7 years in prison
  • The charges come in the wake of the FCC’s February declaration that the use of AI voice-cloning tools in robocalls is banned under existing law
Five Myths About How AI Will Affect 2024 Elections | Tech Policy Press

  • When it comes to election risk, AI primarily exacerbates existing problems like voter suppression and disinformation
  • Many mitigations will be needed, including not just technical changes but efforts to address distribution channels and public susceptibility
Deepfakes Are Evolving. This Company Wants to Catch Them All | WIRED

  • As more companies are targeted by scams using deepfakes and AI-generated content, new firms are attempting to build effective detection tools for use in the private sector.
  • These efforts come in the wake of a 2022 warning from the FBI, which noted the risk of scammers using deepfakes to pose as job hunters or employees.
Ticked off: TikTok approves EU elections disinformation ads for publication in Ireland | Global Witness

  • To test their election policies, researchers submitted advertisements containing false information about voting to TikTok, YouTube, and X in advance of the EU elections. TikTok and YouTube approved some or all of the advertisements (though the researchers pulled them down before they were seen by the public).
  • Researchers noted that the ads explicitly violated the companies’ content policies.
Europe’s elections test a landmark social media law | Washington Post

  • Though many of Europe’s AI regulations have yet to go into effect, the 2022 Digital Services Act (DSA) did take effect in advance of the 2024 European Union elections.
  • Among other things, the DSA was intended to regulate deceptive political advertising. There was at least one example of an ad campaign, approved by TikTok, encouraging European voters to vote by text (which is not an actual voting method; the ad campaign was intended to suppress voters).
  • In advance of the elections, EU regulators ran “stress tests” on several major platforms, testing their responses to theoretical scenarios such as viral AI-generated content or “manipulated information that resulted in incitement to violence”
A Small Army Combating a Flood of Deepfakes in India’s Election | The New York Times

  • In India’s election, some activists responded to waves of deepfakes by launching “vigilante fact-checking outfits,” intended to call out AI-generated content online
  • Multiple parties and candidates used deepfakes in India’s election, including generating fake videos of their opponents, themselves, and deceased politicians
WhatsApp Channels, used by millions, has no clear election rules | POLITICO

  • Unlike other Meta platforms such as Facebook and Instagram, WhatsApp Channels lacks any explicit policies relating to election disinformation
  • Meta argues that its existing community guidelines are sufficient to handle voter suppression and other election-related issues, but many experts argue that this is insufficient, with one calling it a “loophole”
  • Other messaging platforms including Discord and Messenger (which is also owned by Meta) do have election disinformation policies in place
ChatGPT gave incorrect answers to questions about how to vote in battleground states | CBS News

  • When reporters asked OpenAI’s ChatGPT questions about how to vote in several key states, it repeatedly gave incorrect information, including about when mail ballots must be received, how to vote absentee, and early voting rules
  • Other chatbots such as Microsoft’s Copilot and Google’s Gemini refused to answer election-related questions at all, which the companies said was a decision made “out of an abundance of caution”
An AI mayor? OpenAI shuts down tools for AI political candidates | CNN Business

  • OpenAI shut down the accounts behind “AI candidates” running for office in Cheyenne, Wyoming, and a parliamentary district in the United Kingdom, citing violations of its use policies
  • Experts argue that these candidacies are merely gimmicks, and emphasize that chatbots do not approach the human intelligence required to serve in public office