
AI Tech Execs Put AI on Par With Nukes for Extinction Risk

Sam Altman, Geoffrey Hinton Say Abating Risk of Extinction Must Be Global Priority

Artificial intelligence poses a global risk of extinction tantamount to nuclear war and pandemics, says a who's who of AI executives in an open letter that evokes danger without suggesting how to mitigate it.


Among the signatories of the open letter published by the Center for AI Safety are Sam Altman, CEO of ChatGPT maker OpenAI, and Geoffrey Hinton, a computer scientist known as the godfather of AI. Other signatories are from Google DeepMind and Microsoft.

The letter is succinct: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

The November 2022 debut of generative AI for the general public through ChatGPT intensified worries about the economic, political and societal effects of AI.

European Union lawmakers earlier this month responded to the natural language model's explosive popularity by adding new transparency and copyright obligations for generative AI to legislation widely expected to become law later this year or early next year. The European Parliament also inserted language for makers of "foundation models" - such as the model underlying the chatbot - that would obligate them to reduce risks to "health, safety, fundamental rights, the environment and democracy and the rule of law" (see: OpenAI CEO Altman 'Blackmails' EU Over AI Regulation).

Altman told reporters in London the regulations might induce OpenAI to leave Europe, but then he appeared to backtrack by tweeting, "We are excited to continue to operate here and of course have no plans to leave."

The OpenAI CEO just weeks ago testified before Congress, saying he would welcome regulations that put AI models under licensing and registration requirements (see: OpenAI CEO Calls for Regulation But No Pause on Advanced AI).

Altman is not alone in holding an apparently ambiguous position on regulation. Former Google CEO Eric Schmidt, a vocal proponent of U.S. development of AI capabilities and not a signer of the open letter, reportedly warned a London conference audience earlier this month that governments should ensure that AI is not "misused by evil people."

AI could pose an existential risk to humanity, he said, adding that "existential risk is defined as many, many, many, many people harmed or killed."

Schmidt also this month told NBC's "Meet the Press" that he's concerned about "premature regulation."

"What I'd much rather do is have an agreement among the key players that we will not have a race to the bottom," he said.


About the Author

David Perera


Editorial Director, News, ISMG

Perera is editorial director for news at Information Security Media Group. He previously covered privacy and data security for outlets including MLex and Politico.
