Saihub.info
Category: Misinformation
Potential Source of Harm: Political - Election Interference
Updated November 18, 2024
Nature of Harm
Election interference involves the dissemination of AI-generated content (e.g. text, images, video or audio) that alters the behavior of voters in democratic elections. It can involve fabricated or misleading content, as well as content whose source is misidentified.
Communication of false information in elections is a problem that has existed for centuries, and in the modern era false information has been spread at scale without the use of AI. However, the availability of LLM-based and other generative AI content-generation tools significantly increases the ability of malicious actors to produce misleading content at scale, including persuasive fabricated images, videos and audio recordings of candidates. Reports on this issue have included:
Despite this apparent risk, there is some evidence that harms are not significantly materializing in actual elections, e.g.:
A June 2024 article in Nature, 'Misunderstanding the harms of online misinformation', suggests that exposure to "false and inflammatory content" is mostly limited to a relatively small segment of the population that actively seeks it out.
Techniques similar to those used for election interference can also be used to alter public opinion for various other political purposes. This is identified separately in our harms register, and we plan to add a separate page on it.
For a bit of educational fun, you can take this Misinformation Susceptibility Test from the University of Cambridge.
Regulatory and Governance Solutions
Election interference is a very difficult problem to address, because most democratic countries place fairly limited restrictions on access to mass media, including by malicious actors. Regulatory and governance approaches to date have generally taken a few main forms:
election rules adopted in Brazil in February 2024
letters from FCC Chairwoman Jessica Rosenworcel to telecoms companies on efforts to deal with AI-generated political robocalls (June 2024) and the companies' responses (July 2024)
Implications of Artificial Intelligence Technologies on Protecting Consumers from Unwanted Robocalls and Robotexts (notice of proposed rulemaking) (August 2024)
those emerging in various US states, such as California legislation on robocalls and election-related deepfakes.
However, these measures are unlikely to have significant effects on malicious actors seeking to influence elections.
As with other types of AI misinformation threat, it is likely to be crucial that populations are well educated about the risks of misinformation, and therefore less likely to trust inaccurate content. For example:
Technical Solutions
Technical solutions to content-based election interference are challenging, not least because AI-generated content (especially text) can be very difficult to identify reliably. However, there are some useful corporate initiatives:
Political content policy (using technology such as digital watermarking -- see the Technical Solutions page, and the illustrative sketch after this list)
How we’re approaching the 2024 U.S. elections (Dec. 2023)
Meta
in February 2024, announced a team focused on addressing disinformation and other AI-related harms in connection with the June 2024 European Parliament elections
in August 2024 shut down its popular misinformation-tracking tool CrowdTangle and replaced it with a new (and apparently less functional) tool, Content Library.
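How watermark-based identification works varies by provider, and none of the companies above publish their detectors. Purely as a minimal sketch of the general idea behind statistical watermarking of LLM text (generation is biased toward a keyed "green list" of tokens, and a detector tests whether a text contains suspiciously many of them), the self-contained Python below is illustrative only; the GAMMA parameter, the is_green hash test and the sample sentence are assumptions, not any vendor's actual scheme.

import hashlib
from math import sqrt

# Assumption: a real scheme is tied to the generating model's tokenizer and
# vocabulary; this toy version works on whitespace-separated words.
GAMMA = 0.5  # assumed fraction of the vocabulary placed on the "green list" at each step

def is_green(prev_token: str, token: str) -> bool:
    # Toy stand-in for the keyed green-list test: hash the (previous token, token)
    # pair and check whether it falls within the green fraction.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 256.0 < GAMMA

def watermark_z_score(tokens: list[str]) -> float:
    # Under the null hypothesis (unwatermarked text) each token is green with
    # probability GAMMA, so a large z-score is evidence that the text was
    # produced by a model applying this watermark.
    n = len(tokens) - 1
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return (hits - GAMMA * n) / sqrt(GAMMA * (1 - GAMMA) * n)

if __name__ == "__main__":
    sample = "officials confirmed that mail ballots are counted after signature checks".split()
    print(f"watermark z-score: {watermark_z_score(sample):.2f}")

A detector of this kind only works for text produced by a model that actually applied the watermark, which is one reason identification of AI-generated text remains difficult in general.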
Start-ups have developed tools that assist in identifying misinformation, e.g.:
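The specific tools are not listed here, but one common building block in this space is matching a newly seen claim against statements that fact-checkers have already rated. The sketch below is a hypothetical illustration of that idea using TF-IDF similarity (it assumes scikit-learn is installed); the FACT_CHECKED entries, ratings and threshold are invented for the example and do not describe any real product.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical mini-database of claims already rated by fact-checkers.
FACT_CHECKED = [
    ("Ballots postmarked after election day are always counted", "false"),
    ("Voting machines in several states were connected to the internet", "unsupported"),
    ("Some states allow voters to register on election day", "true"),
]

def match_claim(claim: str, threshold: float = 0.3):
    # Vectorise the known claims plus the new claim, then return the most
    # similar known claim and its rating if the similarity is high enough.
    corpus = [text for text, _ in FACT_CHECKED] + [claim]
    tfidf = TfidfVectorizer().fit_transform(corpus)
    scores = cosine_similarity(tfidf[-1], tfidf[:-1]).ravel()
    best = int(scores.argmax())
    if scores[best] >= threshold:
        text, rating = FACT_CHECKED[best]
        return text, rating, float(scores[best])
    return None

print(match_claim("Machines used for voting were connected to the internet"))

Real systems typically rely on semantic embeddings and much larger fact-check databases, but the matching logic is structurally similar.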
Government and Private Entities
Governments and election bodies around the world are taking a variety of steps to address election interference, as are a large number of private entities (including NGOs and political groups).
We helped organize a consultation on 'AI and the Electoral Process' at St George's House, Windsor Castle in September/October 2024, at which some of these issues were discussed. The report of the consultation is forthcoming.