Artificial Intelligence Experts Warn That Minimizing ‘Risk of Extinction From AI’ Should Be a Global Priority

More than 300 scientists, engineers, and other notable figures signed off on a statement released by the Center for AI Safety.


Artificial intelligence experts are warning the public about the risks the technology poses to humanity.

More than 300 top researchers, scientists, engineers, and business leaders signed off on a statement Tuesday detailing the "risk of extinction" posed by AI. The list of notable signatories includes OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, Anthropic CEO Dario Amodei, and Geoffrey Hinton, the professor known as the “godfather of AI,” among hundreds of other individuals.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read the release shared on the San Francisco-based nonprofit Center for AI Safety's website.

The AI experts added that the statement is meant to "create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously."

In a conversation with The New York Times, Dan Hendrycks, executive director of the Center for AI Safety, maintained that Tuesday's statement is merely a first step in addressing AI risk.

“There’s a very common misconception, even in the AI community, that there only are a handful of doomers,” Hendrycks shared. “But, in fact, many people privately would express concerns about these things.”
