“I’m sorry, Dave, I’m afraid I can’t do that.”
With this line, the renegade computer HAL 9000 refused a direct order from Dave Bowman, mission commander aboard the Discovery One spacecraft bound for Jupiter, in Stanley Kubrick’s 1968 film “2001: A Space Odyssey.” In 1984, James Cameron’s “The Terminator” introduced us all to Skynet, a defense computer network that initiated a global nuclear war to exterminate humanity.
Cinematic portrayals like these have made the risks of artificial intelligence familiar to all of us: computers that can think like humans and pose a danger to humanity. But that’s simply science fiction, right?
We all use computer-assisted artificial intelligence every day. The autocorrect feature in text messaging is one example; Apple’s Siri and Amazon’s Alexa are others.
The endoscopy unit in my office has an artificial intelligence feature that assists us in detecting colon polyps. When you order something on Amazon and ads pop up on social media for similar products? That’s AI, too. So is the facial-recognition technology on your phone.
In 2017, Russian President Vladimir Putin said the nation that controls AI “will become the ruler of the world.” Putin has become a worldwide villain following the Russian invasion of Ukraine, but the dictator was not wrong about AI.
Artificial intelligence impacts all of us daily. A search on the Apple app store reveals hundreds of AI-driven apps in areas as disparate as art, photography, video production and music composition.
Progress in AI was relatively slow until 2012, when advances in neural networks revolutionized the entire industry. A neural network is a mathematical system that learns to find statistical patterns in enormous amounts of data.
In 2018, another technological leap occurred when Google, Microsoft and OpenAI began building vast neural networks trained on large volumes of text from the internet. These large language models opened the door for the next step in AI evolution: generative AI, in which the systems learned to write prose and poetry and to hold conversations that seemed almost human.
The AI chatbot ChatGPT, launched by developer OpenAI in collaboration with Microsoft last November, has made headlines for its ability to mimic human conversation and writing. ChatGPT can write business pitches, compose music and poetry, simulate an entire chat room, and write essays. It can compose grocery lists, suggest travel ideas and describe art in detail. AI chatbots can also “read” long articles and summarize them – a sort of computerized SparkNotes.
A newer version of OpenAI’s chatbot platform, GPT-4, recently scored in the 88th percentile on the Law School Admission Test. Naturally, there are concerns about the capabilities of such an advanced system. Italy recently banned ChatGPT over privacy concerns.
ChatGPT is available for free on the OpenAI website. I have an account there and asked ChatGPT to compose a poem about Savannah, a haiku about enduring love and a paragraph about pirates in the style of Ernest Hemingway. ChatGPT composed each of these in seconds. All were passable, if not spectacular.
As a writer, this frightens me.
What might be even more frightening is the ability of AI programs to create realistic-looking disinformation. AI programs can generate convincing digital images, such as one of Pope Francis in a puffy Balenciaga jacket or of Donald Trump marching through the streets of New York this month in front of a crowd of flag-waving supporters. Both were AI-generated fakes.
In an era in which social media is filled with all sorts of misleading information, the ability of realistic AI-generated fakes to be propagated in the media should arouse concern in all of us.
Real concerns could arise if artificial intelligence becomes self-aware, or sentient. This is the danger graphically illustrated in “The Terminator,” where Skynet decided that human beings were no longer necessary and exterminated them. Some researchers claim that this sort of thing is already happening. Last year, Google sidelined an engineer who claimed that its LaMDA AI software was sentient.
Most researchers in the area do not believe that AI models have achieved sentience, at least not yet. But the progress in these areas has been very rapid, and it’s likely only a matter of time before that occurs.
This is especially concerning considering the increasing automation of the world’s military. The U.S. Navy estimates that up to 60% of its carrier air fleet will be composed of AI-driven unmanned aerial vehicles in the next decade. The first deployment of such a vehicle is slated for 2026.
The Navy has already taken delivery of a full-sized autonomous warship, the USNS Apalachicola, which can remain at sea for up to 30 days without a human crew.
The advent of the internet in the early 1990s revolutionized commerce, communication and the dissemination of information. Today, with 63% of the globe connected via the web, most of us cannot imagine the world without it.
Artificial intelligence holds similar promise, but greater peril. How we utilize AI will define the trajectory of humanity’s next generation, and beyond.
Mark Murphy, a Savannah-based author and physician, is a longtime contributor to the Savannah Morning News, where this column first published.