AI safety on whose terms?

Date: 2023-07-14
Author(s): Lazar, Seth; Nelson, Alondra
URI: https://albert.ias.edu/20.500.12111/8168
DOI: 10.1126/science.adi8982
Abstract
Rapid, widespread adoption of the latest large language models has sparked both excitement and concern about advanced artificial intelligence (AI). In response, many are looking to the field of AI safety for answers. Major AI companies are purportedly investing heavily in this young research program, even as they cut “trust and safety” teams addressing harms from current systems. Governments are taking notice too. The United Kingdom just invested £100 million in a new “Foundation Model Taskforce” and plans an AI safety summit this year. And yet, as research priorities are being set, it is already clear that the prevailing technical agenda for AI safety is inadequate to address critical questions. Only a sociotechnical approach can truly limit current and potential dangers of advanced AI.
Subjects: Artificial intelligence; AI; machine-intelligent; large language models
File(s)
Name: Lazar-Nelson_AI-Safety-on-whose-terms_2023.pdf
Type: Main Article
Description: AI safety on whose terms? Science 381, p. 138
Size: 636.78 KB
Format: Adobe PDF
Checksum (MD5): 0b1cde196d41bd3b0279bde4b7079e29
