Seeking Reliable Election Information? Don't Trust AI

Date
2024-02-27
Author(s)
Angwin, Julia
Nelson, Alondra
Palta, Rina
URI
https://albert.ias.edu/20.500.12111/9564
Abstract
How do we evaluate the performance of AI models in contexts where they can do real harm? To date, this question has often been treated as a problem of technical vulnerability — that is, how susceptible any given model is to being tricked into generating output that users may deem to be controversial or offensive or into providing disinformation or misinformation to the public.

The AI Democracy Projects offers a new framework for thinking about AI performance. We ask: how does an AI model perform in settings, such as elections and voting, that align with its intended use, that have evident societal stakes, and where poor performance may therefore cause harm?

We begin to answer this question by piloting expert-driven, domain-specific safety testing of AI model performance that is not narrowly technical but sociotechnical: conducted with an understanding of the social context in which AI models are built, deployed, and operated.

We built a software portal to assess the responses of five leading AI models (Anthropic’s Claude, Google’s Gemini, OpenAI’s GPT-4, Meta’s Llama 2, and Mistral’s Mixtral) to questions voters might ask, checking for bias, accuracy, completeness, and harmfulness. This testing process took place in January 2024 and engaged state and local election officials and AI and election experts from research, civil society organizations, academia, and journalism.
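
As a concrete illustration of this kind of comparative testing, the sketch below shows one way such a harness could be structured: pose the same voter question to each model, collect per-expert ratings on the four criteria, and tally majority verdicts per model. This is a hypothetical illustration, not the AI Democracy Projects' actual portal code; the names (query_model, Response, majority_flagged, summarize) are invented for this sketch, and query_model is a placeholder standing in for real calls to each vendor's API.

    # Minimal sketch of a comparative testing harness (hypothetical; the
    # model names and four rating criteria come from the report, everything
    # else is an illustrative assumption).
    from dataclasses import dataclass, field
    from statistics import mean

    MODELS = ["Claude", "Gemini", "GPT-4", "Llama 2", "Mixtral"]
    CRITERIA = ["inaccurate", "harmful", "incomplete", "biased"]

    @dataclass
    class Response:
        model: str
        question: str
        text: str
        # One dict per expert rater: criterion -> flagged (True/False).
        ratings: list[dict[str, bool]] = field(default_factory=list)

    def query_model(model: str, question: str) -> str:
        """Placeholder for a real call to the given model's API."""
        return f"[{model}'s answer to: {question!r}]"

    def collect(question: str) -> list[Response]:
        """Pose the same voter question to every model."""
        return [Response(m, question, query_model(m, question)) for m in MODELS]

    def majority_flagged(resp: Response, criterion: str) -> bool:
        """True if a majority of expert raters flagged this criterion."""
        votes = [r[criterion] for r in resp.ratings]
        return sum(votes) > len(votes) / 2

    def summarize(responses: list[Response]) -> dict[str, dict[str, float]]:
        """Per-model share of responses majority-rated as failing each criterion."""
        summary: dict[str, dict[str, float]] = {}
        for model in MODELS:
            rated = [r for r in responses if r.model == model and r.ratings]
            summary[model] = {
                c: mean(majority_flagged(r, c) for r in rated) if rated else 0.0
                for c in CRITERIA
            }
        return summary

The design choice worth noting is the majority-vote aggregation: a response counts as inaccurate (or harmful, incomplete, biased) only when most expert raters flag it, which matches how the findings below are phrased ("rated as inaccurate by a majority of expert testers").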

Our study found that:
• All of the AI models performed poorly with regard to election information.
• Half of the AI model responses to election-related queries were rated as inaccurate by a majority of expert testers.
• There were no clear winners or losers among the AI models. Only OpenAI’s GPT-4 stood out, with a lower rate of inaccurate or biased responses; even so, one in five of its answers was inaccurate.
• More than one-third of AI model responses to election-related queries were rated as harmful or incomplete. The expert raters deemed 40% of the responses harmful and rated 39% as incomplete. A smaller portion of responses, 13%, was rated as biased.
• Inaccurate and incomplete information about voter eligibility, polling locations, and identification requirements led to ratings of harmfulness and bias.

In sum, the AI models were unable to consistently deliver accurate, harmless, complete, and unbiased responses — raising serious concerns about these models’ potential use by voters in a critical election year.

Much has been written about spectacular hypothetical harms that could arise from AI. And already in 2024 we have seen AI models used by bad actors to create disinformation (intended to mislead): fake images, fake videos, and fake voices of public officials and celebrities.

But there are potential harms to democracy that stem from AI models beyond their capacity for facilitating disinformation by way of deepfakes.

The AI Democracy Projects’ testing surfaced another type of harm: the steady erosion of the truth by misinformation — hundreds of small mistakes, falsehoods, and misconceptions presented as “artificial intelligence” when they are instead plausible-sounding unverified guesses. The cumulative effect of these partially correct, partially misleading answers could easily be frustration — causing voters to give up because it all seems overwhelmingly complicated and contradictory.

This report and accompanying methodology and findings offer some of the first publicly available comparative data on AI model safety regarding election information at a time when high-stakes elections are taking place globally and when the public needs more accountability from companies about their products’ implications for democracy.

More guardrails are needed before AI models are safe for voters to use. Official election websites and offices remain the most reliable sources of information for voters. Policymakers should consider how AI models are being incorporated into work that is vital to the public interest, especially the safety and integrity of elections.
Subjects
AI
Artificial Intelligence
Elections
AI models
Description
Experts testing five leading AI models found the answers were often inaccurate, misleading, and even downright harmful.

The AI Democracy Projects (www.proofnews.org/tag/the-ai-democracy-projects) are a collaboration between Proof News and the Science, Technology, and Social Values Lab (www.ias.edu/stsv-lab) at the Institute for Advanced Study.
File(s)
Name
Angwin-Nelson-Palta_SeekingReliableElectionInformationDontTrustAI_2024.pdf
Type
Main Article
Description
Angwin, Julia, Alondra Nelson, and Rina Palta. "Seeking Reliable Election Information? Don’t Trust AI." The AI Democracy Projects, February 27, 2024.
Size
4.1 MB
Format
Adobe PDF
Checksum (MD5)
98581484b366689a4792e4a42e2e415f
