International AI Safety Report
Date
2025-01-29
Author(s)
Bengio, Yoshua
Mindermann, Sören
Privitera, Daniel
Besiroglu, Tamay
Bommasani, Rishi
Casper, Stephen
Choi, Yejin
Fox, Philip
Garfinkel, Ben
Goldfarb, Danielle
Heidari, Hoda
Ho, Anson
Kapoor, Sayash
Khalatbari, Leila
Longpre, Shayne
Manning, Sam
Mavroudis, Vasilios
Mazeika, Mantas
Michael, Julian
Newman, Jessica
Ng, Kwan Yee
Okolo, Chinasa T.
Raji, Deborah
Sastry, Girish
Seger, Elizabeth
Skeadas, Theodora
South, Tobin
Strubell, Emma
Tramèr, Florian
Velasco, Lucia
Wheeler, Nicole
Acemoglu, Daron
Adekanmbi, Olubayo
Dalrymple, David
Dietterich, Thomas G.
Felten, Edward W.
Fung, Pascale
Gourinchas, Pierre-Olivier
Heintz, Fredrik
Hinton, Geoffrey
Jennings, Nick
Krause, Andreas
Leavy, Susan
Liang, Percy
Ludermir, Teresa
Marda, Vidushi
Margetts, Helen
McDermid, John
Munga, Jane
Narayanan, Arvind
Nelson, Alondra
Neppel, Clara
Oh, Alice
Ramchurn, Gopal
Russell, Stuart
Schaake, Marietje
Schölkopf, Bernhard
Song, Dawn
Soto, Alvaro
Tiedrich, Lee
Varoquaux, Gaël
Yao, Andrew
Zhang, Ya-Qin
Albalawi, Fahad
Alserkal, Marwan
Ajala, Olubunmi
Avrin, Guillaume
Busch, Christian
Ferreira de Carvalho, André Carlos Ponce de León
Fox, Bronwyn
Gill, Amandeep Singh
Hatip, Ahmet Halit
Heikkilä, Juha
Jolly, Gill
Katzir, Ziv
Kitano, Hiroaki
Krüger, Antonio
Johnson, Chris
Khan, Saif M.
Lee, Kyoung Mu
Ligot, Dominic Vincent
Molchanovskyi, Oleksii
Monti, Andrea
Mwamanzi, Nusu
Nemer, Mona
Oliver, Nuria
López Portillo, José Ramón
Ravindran, Balaraman
Pezoa Rivera, Raquel
Riza, Hammam
Rugege, Crystal
Seoighe, Ciarán
Sheehan, Jerry
Sheikh, Haroon
Wong, Denise
Zeng, Yi
Abstract
The International AI Safety Report is the world’s first comprehensive synthesis of the current literature on the risks and capabilities of advanced AI systems. Chaired by Turing Award-winning computer scientist Yoshua Bengio, it is the culmination of work by 100 AI experts to advance a shared international understanding of the risks of advanced Artificial Intelligence (AI).
The Chair is supported by an international Expert Advisory Panel made up of representatives from 30 countries, the United Nations (UN), European Union (EU), and Organization for Economic Cooperation and Development (OECD).
The report does not make policy recommendations. Instead it summarises the scientific evidence on the safety of general-purpose AI to help create a shared international understanding of risks from advanced AI and how they can be mitigated. General-purpose AI – or AI that can perform a wide variety of tasks – is a type of AI that has advanced rapidly in recent years and is widely used by technology companies for a range of consumer and business purposes.
The report is concerned with AI risks and AI safety and focuses on identifying these risks and evaluating methods for mitigating them. It summarises the scientific evidence on 3 core questions – what can general-purpose AI do, what risks are associated with general-purpose AI, and what mitigation techniques exist against these risks – and aims to:
- provide scientific information that will support informed policymaking – it does not recommend specific policies
- facilitate constructive and evidence-based discussion about the uncertainty of general-purpose AI and its outcomes
- contribute to an internationally shared scientific understanding of advanced AI safety
Subjects
Description
The report was written by a diverse group of academics, guided by world-leading experts in AI. There was no industry or government influence over the content. The secretariat organised a thorough review, which included valuable input from global civil society and industry leaders. The Chair and writers considered all feedback and incorporated it where appropriate.
The report will be presented at the AI Action Summit in Paris in February 2025. An interim version of this report was published in May 2024 and presented at the AI Seoul Summit.
UK Department for Science, Innovation and Technology. (2025). International AI Safety Report. Research series no. DSIT 2025/001C. Crown Copyright.
File(s)
Name
International_AI_Safety_Report_2025.pdf
Type
Main Article
Description
UK Department for Science, Innovation and Technology. (2025). International AI Safety Report. Research series no. DSIT 2025/001C. Crown Copyright.
Size
4.46 MB
Format
Adobe PDF
Checksum (MD5)
7352138f84f94f0c87572bb9e626d661