Category: Misinformation

Potential Source of Harm: Political - Election Interference

Updated July 26, 2024

 

Nature of Harm

Election interference involves the dissemination of AI-generated content (e.g., text, images, video, or audio) intended to alter the behavior of voters in democratic elections. It can involve content that is fabricated or misleading, as well as content whose source is misattributed.

 

Communication of false information in elections is a problem that has existed for centuries, and in the modern era false information has been spread at scale even without the use of AI. However, the availability of LLM-enabled content generation significantly increases the ability of election attackers to generate misleading content at scale, including persuasive fabricated images, videos, and recordings of candidates. Reports on this issue have included:

 

Despite this apparent risk, Nick Clegg (Meta's President of Global Affairs and former UK Deputy Prime Minister) stated in May 2024 that misleading political content in 2024 was not reaching the elevated levels that some had predicted. Similarly, a June 2024 article in Nature, "Misunderstanding the harms of online misinformation", suggests that exposure to "false and inflammatory content" is mostly limited to a relatively small segment of the population that actively seeks it out.

 

Techniques similar to those used for election interference can also be used to alter public opinion for various other political purposes. This is identified separately in our harms register, and we plan to add a separate page on it.

 

For a bit of educational fun, you can take this Misinformation Susceptibility Test from the University of Cambridge.

 

Regulatory and Governance Solutions

Election interference is a very difficult problem to address, because most democratic countries place fairly limited restrictions on access to mass media, including by malicious actors. Regulatory and governance approaches to date have generally taken a few main forms:

 

However, these measures are unlikely to have significant effects on malicious actors seeking to influence elections.

 

As with other types of AI misinformation threats, it is likely to be crucial that populations are well educated about the risks of misinformation, and therefore less likely to trust inaccurate content. For example:

 

Technical Solutions

Technical solutions to content-based election interference are challenging, not least because AI-generated content (especially text) can be very difficult to identify. However, there are some useful corporate initiatives:

  • In February 2024, a group of 20 leading technology companies (including Microsoft, Meta, Google, Amazon, IBM, Adobe, Arm, OpenAI, Anthropic, Stability AI, Snap, TikTok and X) announced an agreement to combat election-related misinformation.
  • Microsoft has announced a set of technical "tools and tactics" for dealing with election interference.
  • Former Google CEO Eric Schmidt has proposed a 6-point plan for fighting election misinformation.
  • Meta in February 2024 announced a team focused on addressing disinformation and other AI-related harms in connection with the June 2024 European Parliament elections.
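One building block that several of these initiatives rely on is content provenance: attaching a verifiable tag to a piece of content so that later alterations can be detected. The sketch below is purely illustrative and uses a shared secret for simplicity; real provenance schemes (such as the C2PA "Content Credentials" standard used by Adobe and Microsoft) rely on asymmetric signatures and signed metadata manifests, and all names here are hypothetical.

```python
import hashlib
import hmac

# Hypothetical signing key held by the content-generating platform.
# Real systems use asymmetric keys and certificate chains, not a shared secret.
SECRET_KEY = b"platform-signing-key"

def sign_content(content: bytes) -> str:
    """Attach a provenance tag: an HMAC-SHA256 over the content bytes."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check that the content has not been altered since it was signed."""
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

original = b"Candidate X gave this speech on 2024-06-01."
tag = sign_content(original)

print(verify_content(original, tag))                        # unaltered: True
print(verify_content(b"Candidate X resigned today.", tag))  # tampered: False
```

The design point this illustrates is that provenance does not try to detect AI-generated content after the fact (which is hard, as noted above); instead it lets honest publishers prove that their content is authentic, so untagged or tampered content can be treated with greater suspicion.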

 

Start-ups have developed tools that assist in identifying misinformation, e.g.:

  • Blackbird AI provides solutions to "identify and measure the manipulation of public and social perception".
  • Tremau provides tools that assist Internet platforms in moderating restricted content, including content subject to the EU Digital Services Act.

 

Government and Private Entities

Governments and election bodies around the world are taking a variety of steps to address election interference, as are a large number of private entities (including NGOs and political groups).