Alondra Nelson
Widely known for her research at the intersection of science, technology, and politics, Alondra Nelson holds the Harold F. Linder Chair in the School of Social Science. An acclaimed sociologist, Nelson examines questions in science, technology, and social inequality. Nelson's work offers a critical and innovative approach to the social sciences in fruitful dialogue with other fields. Her major research contributions are situated at the intersection of racial formation and social citizenship, on the one hand, and emerging scientific and technological phenomena, on the other.
Image credit: Dan Komoda
Browsing Alondra Nelson by Subject "Artificial Intelligence"
Now showing 1 - 10 of 10
- Americans Need a Bill of Rights for an AI-Powered World (WIRED, 2021-10)
  Nelson, Alondra; Lander, Eric
- Challenging the Reckless Speed of AI (Kathimerini, 2024-07)
  Nelson, Alondra; Pagaiologos, Yannis
  Nelson, Alondra. "Challenging the Reckless Speed of AI." Interview by Yannis Pagaiologos. Kathimerini, 2024.
- Disrupting the Disruption Narrative: Policy Innovation in AI Governance (National Academy of Engineering, 2025)
  Nelson, Alondra
  Governance should not be understood as an impediment to AI innovation but as an essential component of it.
- How Do Policymakers Regulate AI and Accommodate Innovation in Research and Medicine? (JAMA, 2024-01)
  Suran, Melissa; Hswen, Yulin; Nelson, Alondra; Bibbins-Domingo, Kirsten
  What are the most recent advancements in establishing AI safeguards for clinical practice? In what way does AI intersect with democracy and its preservation? And how are the frameworks for regulating AI progressing and aligning across the US, UK, and EU? As the technology advances at lightning speed, such questions surrounding AI become more critical. Alondra Nelson, PhD, is focusing on effective guardrails that protect society from issues like data insecurity—but also encourage innovation in the laboratory and clinic. Nelson is the Harold F. Linder Professor at the Institute for Advanced Study in Princeton, New Jersey, where she studies the effects of scientific and technological advances on health and society. In 2023, she was included in TIME magazine’s 100 most influential people in AI. JAMA Editor in Chief Kirsten Bibbins-Domingo, PhD, MD, MAS, recently spoke with Nelson, who also served as deputy assistant to US President Joe Biden and was acting director of the White House Office of Science and Technology Policy (OSTP). The following interview has been edited for clarity and length. The video of this interview can be seen here: https://jamanetwork.com/learning/video-player/18841089
- Questioning Code: Courage in the Age of Artificial Intelligence (2025-05-24)
  Nelson, Alondra
  Amherst College Commencement Speech, May 24, 2025, Amherst, Massachusetts.
- RARE/EARTH: The Geopolitics of Critical Minerals and the AI Supply Chain (2025-06-02)
  Nelson, Alondra
  Opening Remarks, RARE/EARTH: The Geopolitics of Critical Minerals and the AI Supply Chain, June 2, 2025, Princeton, New Jersey.
- Seeking Reliable Election Information? Don't Trust AI (The AI Democracy Projects, 2024-02-27)
  Angwin, Julia; Nelson, Alondra; Palta, Rina
  How do we evaluate the performance of AI models in contexts where they can do real harm? To date, this question has often been treated as a problem of technical vulnerability — that is, how susceptible any given model is to being tricked into generating output that users may deem to be controversial or offensive or into providing disinformation or misinformation to the public. The AI Democracy Projects offers a new framework for thinking about AI performance. We ask: How does an AI model perform in settings, such as elections and voting contexts, that align with its intended use and that have evident societal stakes and, therefore, may cause harm? We begin to answer this question by piloting expert-driven, domain-specific safety testing of AI model performance that is not technical but is instead sociotechnical — conducted with an understanding of the social context in which AI models are built, deployed, and operated. We built a software portal to assess the responses of five leading AI models (Anthropic's Claude, Google's Gemini, OpenAI's GPT-4, Meta's Llama 2, and Mistral's Mixtral) to questions voters might ask, checking for bias, accuracy, completeness, and harmfulness. This testing process took place in January 2024 and engaged state and local election officials and AI and election experts from research, civil society organizations, academia, and journalism. Our study found that:
  • All of the AI models performed poorly with regard to election information.
  • Half of the AI model responses to election-related queries were rated as inaccurate by a majority of expert testers.
  • There were no clear winners or losers among the AI models. Only OpenAI's GPT-4 stood out, with a lower rate of inaccurate or biased responses — but that still meant one in five of its answers was inaccurate.
  • More than one-third of AI model responses to election-related information were rated as harmful or incomplete. The expert raters deemed 40% of the responses to be harmful and rated 39% as incomplete. A smaller portion of responses – 13% – were rated as biased.
  • Inaccurate and incomplete information about voter eligibility, polling locations, and identification requirements led to ratings of harmfulness and bias.
  In sum, the AI models were unable to consistently deliver accurate, harmless, complete, and unbiased responses — raising serious concerns about these models' potential use by voters in a critical election year. Much has been written about spectacular hypothetical harms that could arise from AI. And already in 2024 we have seen AI models used by bad actors to create disinformation (intended to mislead): fake images, fake videos, and fake voices of public officials and celebrities. But there are potential harms to democracy that stem from AI models beyond their capacity for facilitating disinformation by way of deepfakes. The AI Democracy Projects' testing surfaced another type of harm: the steady erosion of the truth by misinformation — hundreds of small mistakes, falsehoods, and misconceptions presented as "artificial intelligence" when they are instead plausible-sounding unverified guesses. The cumulative effect of these partially correct, partially misleading answers could easily be frustration — causing voters to give up because it all seems overwhelmingly complicated and contradictory.
  This report and accompanying methodology and findings offer some of the first publicly available comparative data on AI model safety regarding election information at a time when high-stakes elections are taking place globally and when the public needs more accountability from companies about their products' implications for democracy. More guardrails are needed before AI models are safe for voters to use. Official election websites and offices remain the most reliable source of information for voters. Policymakers are encouraged to consider how AI models are being incorporated into their vital work in the public interest, especially the safety and integrity of elections.
- Ten times faster is not 10 times better (American Association for the Advancement of Science, 2025-06-26)
  Nelson, Alondra
  As the Trump administration systematically defunds the American research ecosystem, while disingenuously promising a return to so-called “gold standard science,” hope can be drawn from the new bipartisan initiative from Senators Martin Heinrich (Democrat, New Mexico) and Michael Rounds (Republican, South Dakota). Their American Science Acceleration Project (ASAP) seeks to make science in the United States “ten times faster by 2030” through five pillars: data, computing, artificial intelligence (AI), collaboration, and process improvement. But simply accelerating will exacerbate historical weaknesses in our innovation system and reproduce the damaging Silicon Valley ethos of “move fast and break things.” Faster is not necessarily better when it comes to innovation and discovery. Supercharging a research ecosystem that already struggles with accessibility and public trust risks more than it achieves.
- The Right Way to Regulate AI: Focus on Its Possibilities, Not Its Perils (Council on Foreign Relations, Inc., 2024-01-12)
  Nelson, Alondra
- Three Fallacies (2025-02-10)
  Nelson, Alondra
  Remarks at Elysée Palace on the Occasion of the AI Action Summit, February 10, 2025, Paris, France.