Top artificial intelligence (AI) executives, including OpenAI CEO Sam Altman, on Tuesday joined experts and professors in warning of the "risk of extinction from AI", which they urged policymakers to treat on par with the risks posed by pandemics and nuclear war.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," more than 350 signatories wrote in a letter published by the nonprofit Center for AI Safety (CAIS).

As well as Altman, they included the CEOs of AI firms DeepMind and Anthropic, and executives from Microsoft and Google. Also among them were Geoffrey Hinton and Yoshua Bengio – two of the three so-called "godfathers of AI" who received the 2018 Turing Award for their work on deep learning – and professors from institutions ranging from Harvard to China's Tsinghua University.

A statement from CAIS singled out Meta, where the third godfather of AI, Yann LeCun, works, for not signing the letter. "We asked many Meta employees to sign," said CAIS director Dan Hendrycks. Meta did not immediately respond to requests for comment.

The letter coincided with the U.S.-EU Trade and Technology Council meeting in Sweden, where politicians are expected to discuss regulating AI.
Elon Musk and a group of AI experts and industry executives were the first to cite such potential risks to society, in April. "We've extended an invitation (to Musk), and hopefully he'll sign it this week," Hendrycks said.

Recent developments in AI have created tools that supporters say can be used in applications from medical diagnostics to writing legal briefs, but they have also sparked fears that the technology could enable privacy violations, power misinformation campaigns, and give rise to "smart machines" that think for themselves.

The warning comes two months after the nonprofit Future of Life Institute (FLI) issued a similar open letter, signed by Musk and hundreds of others, demanding an urgent pause in advanced AI research and citing risks to humanity. "Our letter mainstreamed pausing; this mainstreams extinction," said FLI president Max Tegmark, who also signed the more recent letter. "Now a constructive open conversation can finally start."

AI pioneer Hinton earlier told Reuters that AI could pose a "more urgent" threat to humanity than climate change.

Last week Altman referred to the EU's draft AI rules – the first effort to create a dedicated regulation for AI – as over-regulation and threatened to leave Europe. He reversed his stance within days after criticism from politicians. Altman has become the face of AI since his ChatGPT chatbot took the world by storm. European Commission President Ursula von der Leyen will meet Altman on Thursday, and EU industry chief Thierry Breton will meet him in San Francisco next month.
GPT’s reaction to this article:
As an AI language model, I cannot form opinions about the article. However, I can provide you with some information about it. The article discusses a letter signed by more than 350 AI executives, experts, and professors – including the CEOs of AI firms DeepMind and Anthropic, and executives from Microsoft and Google – raising concerns about the risk of extinction from AI. They urge policymakers to consider this risk a global priority alongside other societal-scale risks such as pandemics and nuclear war. The letter coincided with the U.S.-EU Trade and Technology Council meeting in Sweden, where politicians are expected to discuss regulating AI. The warning comes two months after the nonprofit Future of Life Institute (FLI) issued a similar open letter, signed by Elon Musk and hundreds of others, demanding an urgent pause in advanced AI research and citing risks to humanity.