Social Science
Browsing Social Science by Subject "AI"
Now showing 1 - 2 of 2
- AI safety on whose terms? (Science: American Association for the Advancement of Science, 2023-07-14)
  Lazar, Seth; Nelson, Alondra
  Rapid, widespread adoption of the latest large language models has sparked both excitement and concern about advanced artificial intelligence (AI). In response, many are looking to the field of AI safety for answers. Major AI companies are purportedly investing heavily in this young research program, even as they cut “trust and safety” teams addressing harms from current systems. Governments are taking notice too. The United Kingdom just invested £100 million in a new “Foundation Model Taskforce” and plans an AI safety summit this year. And yet, as research priorities are being set, it is already clear that the prevailing technical agenda for AI safety is inadequate to address critical questions. Only a sociotechnical approach can truly limit the current and potential dangers of advanced AI.

- International AI Safety Report (2025-01-29)
  Bengio, Yoshua; Mindermann, Sören; Privitera, Daniel; Besiroglu, Tamay; Bommasani, Rishi; Casper, Stephen; Choi, Yejin; Fox, Philip; Garfinkel, Ben; Goldfarb, Danielle; Heidari, Hoda; Ho, Anson; Kapoor, Sayash; Khalatbari, Leila; Longpre, Shayne; Manning, Sam; Mavroudis, Vasilios; Mazeika, Mantas; Michael, Julian; Newman, Jessica; Ng, Kwan Yee; Okolo, Chinasa T.; Raji, Deborah; Sastry, Girish; Seger, Elizabeth; Skeadas, Theodora; South, Tobin; Strubell, Emma; Tramèr, Florian; Velasco, Lucia; Wheeler, Nicole; Acemoglu, Daron; Adekanmbi, Olubayo; Dalrymple, David; Dietterich, Thomas G.; Felten, Edward W.; Fung, Pascale; Gourinchas, Pierre-Olivier; Heintz, Fredrik; Hinton, Geoffrey; Jennings, Nick; Krause, Andreas; Leavy, Susan; Liang, Percy; Ludermir, Teresa; Marda, Vidushi; Margetts, Helen; McDermid, John; Munga, Jane; Narayanan, Arvind; Nelson, Alondra; Neppel, Clara; Oh, Alice; Ramchurn, Gopal; Russell, Stuart; Schaake, Marietje; Schölkopf, Bernhard; Song, Dawn; Soto, Alvaro; Tiedrich, Lee; Varoquaux, Gaël; Yao, Andrew; Zhang, Ya-Qin; Albalawi, Fahad; Alserkal, Marwan; Ajala, Olubunmi; Avrin, Guillaume; Busch, Christian; Ferreira de Carvalho, André Carlos Ponce de León; Fox, Bronwyn; Gill, Amandeep Singh; Hatip, Ahmet Halit; Heikkilä, Juha; Jolly, Gill; Katzir, Ziv; Kitano, Hiroaki; Krüger, Antonio; Johnson, Chris; Khan, Saif M.; Lee, Kyoung Mu; Ligot, Dominic Vincent; Molchanovskyi, Oleksii; Monti, Andrea; Mwamanzi, Nusu; Nemer, Mona; Oliver, Nuria; López Portillo, José Ramón; Ravindran, Balaraman; Pezoa Rivera, Raquel; Riza, Hammam; Rugege, Crystal; Seoighe, Ciarán; Sheehan, Jerry; Sheikh, Haroon; Wong, Denise; Zeng, Yi
  The International AI Safety Report is the world’s first comprehensive synthesis of the current literature on the risks and capabilities of advanced AI systems. Chaired by Turing Award-winning computer scientist Yoshua Bengio, it is the culmination of work by 100 AI experts to advance a shared international understanding of the risks of advanced artificial intelligence (AI). The Chair is supported by an international Expert Advisory Panel made up of representatives from 30 countries, the United Nations (UN), the European Union (EU), and the Organization for Economic Cooperation and Development (OECD). The report does not make policy recommendations. Instead, it summarises the scientific evidence on the safety of general-purpose AI to help create a shared international understanding of the risks from advanced AI and how they can be mitigated. General-purpose AI – AI that can perform a wide variety of tasks – has advanced rapidly in recent years and is widely used by technology companies for a range of consumer and business purposes. The report is concerned with AI risks and AI safety, focusing on identifying these risks and evaluating methods for mitigating them. It summarises the scientific evidence on three core questions – What can general-purpose AI do? What risks are associated with general-purpose AI? What mitigation techniques exist against these risks? – and aims to:
  - provide scientific information that will support informed policymaking (it does not recommend specific policies)
  - facilitate constructive and evidence-based discussion about the uncertainty of general-purpose AI and its outcomes
  - contribute to an internationally shared scientific understanding of advanced AI safety