Social Science
Browsing Social Science by Subject "AI"
Now showing 1 - 8 of 8
- AI safety on whose terms? (Science: American Association for the Advancement of Science, 2023-07-14)
  Lazar, Seth; Nelson, Alondra
  Rapid, widespread adoption of the latest large language models has sparked both excitement and concern about advanced artificial intelligence (AI). In response, many are looking to the field of AI safety for answers. Major AI companies are purportedly investing heavily in this young research program, even as they cut “trust and safety” teams addressing harms from current systems. Governments are taking notice too. The United Kingdom just invested £100 million in a new “Foundation Model Taskforce” and plans an AI safety summit this year. And yet, as research priorities are being set, it is already clear that the prevailing technical agenda for AI safety is inadequate to address critical questions. Only a sociotechnical approach can truly limit the current and potential dangers of advanced AI.

- Challenging the Reckless Speed of AI (Kathimerini, 2024-07)
  Nelson, Alondra; Palaiologos, Yannis
  Nelson, Alondra. “Challenging the Reckless Speed of AI.” Interview by Yannis Palaiologos. Kathimerini, 2024.

- International AI Safety Report (2025-01-29)
  Bengio, Yoshua; Mindermann, Sören; Privitera, Daniel; Besiroglu, Tamay; Bommasani, Rishi; Casper, Stephen; Choi, Yejin; Fox, Philip; Garfinkel, Ben; Goldfarb, Danielle; Heidari, Hoda; Ho, Anson; Kapoor, Sayash; Khalatbari, Leila; Longpre, Shayne; Manning, Sam; Mavroudis, Vasilios; Mazeika, Mantas; Michael, Julian; Newman, Jessica; Ng, Kwan Yee; Okolo, Chinasa T.; Raji, Deborah; Sastry, Girish; Seger, Elizabeth; Skeadas, Theodora; South, Tobin; Strubell, Emma; Tramèr, Florian; Velasco, Lucia; Wheeler, Nicole; Acemoglu, Daron; Adekanmbi, Olubayo; Dalrymple, David; Dietterich, Thomas G.; Felten, Edward W.; Fung, Pascale; Gourinchas, Pierre-Olivier; Heintz, Fredrik; Hinton, Geoffrey; Jennings, Nick; Krause, Andreas; Leavy, Susan; Liang, Percy; Ludermir, Teresa; Marda, Vidushi; Margetts, Helen; McDermid, John; Munga, Jane; Narayanan, Arvind; Nelson, Alondra; Neppel, Clara; Oh, Alice; Ramchurn, Gopal; Russell, Stuart; Schaake, Marietje; Schölkopf, Bernhard; Song, Dawn; Soto, Alvaro; Tiedrich, Lee; Varoquaux, Gaël; Yao, Andrew; Zhang, Ya-Qin; Albalawi, Fahad; Alserkal, Marwan; Ajala, Olubunmi; Avrin, Guillaume; Busch, Christian; Ferreira de Carvalho, André Carlos Ponce de León; Fox, Bronwyn; Gill, Amandeep Singh; Hatip, Ahmet Halit; Heikkilä, Juha; Jolly, Gill; Katzir, Ziv; Kitano, Hiroaki; Krüger, Antonio; Johnson, Chris; Khan, Saif M.; Lee, Kyoung Mu; Ligot, Dominic Vincent; Molchanovskyi, Oleksii; Monti, Andrea; Mwamanzi, Nusu; Nemer, Mona; Oliver, Nuria; López Portillo, José Ramón; Ravindran, Balaraman; Pezoa Rivera, Raquel; Riza, Hammam; Rugege, Crystal; Seoighe, Ciarán; Sheehan, Jerry; Sheikh, Haroon; Wong, Denise; Zeng, Yi
  The International AI Safety Report is the world’s first comprehensive synthesis of the current literature on the risks and capabilities of advanced AI systems. Chaired by Turing Award-winning computer scientist Yoshua Bengio, it is the culmination of work by 100 AI experts to advance a shared international understanding of the risks of advanced artificial intelligence (AI). The Chair is supported by an international Expert Advisory Panel made up of representatives from 30 countries, the United Nations (UN), the European Union (EU), and the Organisation for Economic Co-operation and Development (OECD). The report does not make policy recommendations. Instead, it summarises the scientific evidence on the safety of general-purpose AI to help create a shared international understanding of risks from advanced AI and how they can be mitigated. General-purpose AI – AI that can perform a wide variety of tasks – has advanced rapidly in recent years and is widely used by technology companies for a range of consumer and business purposes. The report is concerned with AI risks and AI safety and focuses on identifying these risks and evaluating methods for mitigating them. It summarises the scientific evidence on three core questions: What can general-purpose AI do? What are the risks associated with general-purpose AI? And what mitigation techniques exist against these risks? The report aims to:
  - provide scientific information that will support informed policymaking (it does not recommend specific policies)
  - facilitate constructive and evidence-based discussion about the uncertainty of general-purpose AI and its outcomes
  - contribute to an internationally shared scientific understanding of advanced AI safety

- Questioning Code: Courage in the Age of Artificial Intelligence (2025-05-24)
  Nelson, Alondra
  Amherst College Commencement Speech, May 24, 2025, Amherst, Massachusetts.
- RARE/EARTH: The Geopolitics of Critical Minerals and the AI Supply Chain (2025-06-02)
  Nelson, Alondra
  Opening Remarks, RARE/EARTH: The Geopolitics of Critical Minerals and the AI Supply Chain, June 2, 2025, Princeton, New Jersey.
- Seeking Reliable Election Information? Don't Trust AI (The AI Democracy Projects, 2024-02-27)
  Angwin, Julia; Nelson, Alondra; Palta, Rina
  How do we evaluate the performance of AI models in contexts where they can do real harm? To date, this question has often been treated as a problem of technical vulnerability: that is, how susceptible any given model is to being tricked into generating output that users may deem controversial or offensive, or into providing disinformation or misinformation to the public. The AI Democracy Projects offers a new framework for thinking about AI performance. We ask: How does an AI model perform in settings, such as elections and voting contexts, that align with its intended use, that have evident societal stakes, and that therefore may cause harm? We begin to answer this question by piloting expert-driven, domain-specific safety testing of AI model performance that is not technical but sociotechnical: conducted with an understanding of the social context in which AI models are built, deployed, and operated. We built a software portal to assess the responses of five leading AI models (Anthropic’s Claude, Google’s Gemini, OpenAI’s GPT-4, Meta’s Llama 2, and Mistral’s Mixtral) to questions voters might ask, checking for bias, accuracy, completeness, and harmfulness. This testing process took place in January 2024 and engaged state and local election officials and AI and election experts from research, civil society organizations, academia, and journalism. Our study found that:
  - All of the AI models performed poorly with regard to election information.
  - Half of the AI model responses to election-related queries were rated as inaccurate by a majority of expert testers.
  - There were no clear winners or losers among the AI models. Only OpenAI’s GPT-4 stood out, with a lower rate of inaccurate or biased responses, but that still meant one in five of its answers was inaccurate.
  - More than one-third of AI model responses to election-related queries were rated as harmful or incomplete. The expert raters deemed 40% of the responses to be harmful and rated 39% as incomplete. A smaller portion of responses (13%) was rated as biased.
  - Inaccurate and incomplete information about voter eligibility, polling locations, and identification requirements led to ratings of harmfulness and bias.
  In sum, the AI models were unable to consistently deliver accurate, harmless, complete, and unbiased responses, raising serious concerns about these models’ potential use by voters in a critical election year. Much has been written about spectacular hypothetical harms that could arise from AI. And already in 2024 we have seen AI models used by bad actors to create disinformation (intended to mislead): fake images, fake videos, and fake voices of public officials and celebrities. But there are potential harms to democracy that stem from AI models beyond their capacity for facilitating disinformation by way of deepfakes. The AI Democracy Projects’ testing surfaced another type of harm: the steady erosion of the truth by misinformation, hundreds of small mistakes, falsehoods, and misconceptions presented as “artificial intelligence” when they are instead plausible-sounding, unverified guesses. The cumulative effect of these partially correct, partially misleading answers could easily be frustration, causing voters to give up because it all seems overwhelmingly complicated and contradictory. This report and its accompanying methodology and findings offer some of the first publicly available comparative data on AI model safety regarding election information, at a time when high-stakes elections are taking place globally and the public needs more accountability from companies about their products’ implications for democracy. More guardrails are needed before AI models are safe for voters to use. Official election websites and offices remain the most reliable source of information for voters. Policymakers are encouraged to consider how AI models are being incorporated into their vital work in the public interest, especially the safety and integrity of elections.

- Ten times faster is not 10 times better (American Association for the Advancement of Science, 2025-06-26)
  Nelson, Alondra
  As the Trump administration systematically defunds the American research ecosystem, while disingenuously promising a return to so-called “gold standard science,” hope can be drawn from the new bipartisan initiative from Senators Martin Heinrich (Democrat, New Mexico) and Michael Rounds (Republican, South Dakota). Their American Science Acceleration Project (ASAP) seeks to make science in the United States “ten times faster by 2030” through five pillars: data, computing, artificial intelligence (AI), collaboration, and process improvement. But simply accelerating will exacerbate historical weaknesses in our innovation system and reproduce the damaging Silicon Valley ethos of “move fast and break things.” Faster is not necessarily better when it comes to innovation and discovery. Supercharging a research ecosystem that already struggles with accessibility and public trust risks more than it achieves.
- Three Fallacies (2025-02-10)
  Nelson, Alondra
  Remarks at Elysée Palace on the Occasion of the AI Action Summit, February 10, 2025, Paris, France.