Long before Elon Musk and Apple co-founder Steve Wozniak signed a letter warning that artificial intelligence poses “profound risks” to humanity, British theoretical physicist Stephen Hawking had been sounding the alarm on the rapidly evolving technology.
“The development of full artificial intelligence could spell the end of the human race,” Hawking told the BBC in an interview in 2014.
Hawking, who lived with amyotrophic lateral sclerosis (ALS) for more than 55 years, died in 2018 at the age of 76. Though he was critical of AI, he also relied on a very basic form of the technology to communicate because of his disease, which weakens muscles and left him dependent on a wheelchair.
Hawking lost the ability to speak in 1985 and relied on various ways to communicate, including a speech-generating device run by Intel, which allowed him to use facial movements to select words or letters that were then synthesized into speech.
Hawking’s comment to the BBC in 2014 that AI could “spell the end of the human race” was in response to a question about potentially revamping the voice technology he relied on. He told the BBC that very basic forms of AI had already proven powerful, but he warned that creating systems that rival or surpass human intelligence could be disastrous for the human race.
“It would take off on its own and re-design itself at an ever-increasing rate,” he said.
Stephen Hawking as he hosts a press conference to announce Breakthrough Starshot, a new space exploration initiative, at One World Observatory on April 12, 2016, in New York City. Bryan Bedder
“Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded,” Hawking added.
Months after his death, Hawking’s final book, “Brief Answers to the Big Questions,” hit the market, offering readers his answers to questions he was frequently asked. The book laid out Hawking’s argument against the existence of God, his belief that humans will one day live in space, and his fears over genetic engineering and global warming.
Artificial intelligence also took a top spot on his list of “big questions”; he argued that computers are “likely to overtake humans in intelligence” within 100 years.
Elon Musk attends The 2022 Met Gala Celebrating “In America: An Anthology of Fashion” at The Metropolitan Museum of Art on May 2, 2022 in New York City. Getty Images for The Met Museum
“We may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails,” he wrote.
He argued that computers need to be trained to align with human goals, adding that failing to take the risks associated with AI seriously could be “our worst mistake ever.”
“It’s tempting to dismiss the notion of highly intelligent machines as mere science fiction, but this would be a mistake — and potentially our worst mistake ever.”
Hawking’s remarks echo concerns raised this year by tech executive Elon Musk and Apple co-founder Steve Wozniak in a letter released in March. The two tech leaders, along with thousands of other experts, called for a pause of at least six months on building AI systems more powerful than OpenAI’s GPT-4 chatbot.
Professor Hawking attends the gala screening of “Hawking” on the opening night of the Cambridge Film Festival held at Emmanuel College on Sept. 19, 2013 in Cambridge, Cambridgeshire. Getty Images
“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs,” reads the letter, published by the nonprofit Future of Life Institute.
OpenAI’s ChatGPT became the fastest-growing consumer application, reaching 100 million monthly active users in January as people across the world rushed to use the chatbot, which simulates human-like conversations based on the prompts it is given. The lab released the latest iteration of the platform, GPT-4, in March.
Despite the calls to pause research at AI labs working on technology that would surpass GPT-4, the system’s release served as a watershed moment that reverberated across the tech industry and spurred companies to race to build their own AI systems.
Google is working to overhaul its search engine and even create a new one that relies on AI; Microsoft has rolled out the “new Bing” search engine described as users’ “AI-powered copilot for the web”; and Musk said he will launch a rival AI system that he described as “maximum truth-seeking.”
Hawking advised in the year prior to his death that the world needed to “learn how to prepare for, and avoid, the potential risks” associated with AI, arguing that the systems “could be the worst event in the history of our civilization.” He did note, however, that the future is still unknown and AI could prove beneficial for humanity if trained correctly.
“Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So, we cannot know if we will be infinitely helped by AI, or ignored by it and sidelined, or conceivably destroyed by it,” Hawking said during a speech at the Web Summit technology conference in Portugal in 2017.