‘If this technology goes wrong, it can go quite wrong,’ Sam Altman told the Senate judiciary committee this week
Text Thom Waite
It might seem like a contradiction, but Sam Altman – CEO of OpenAI, the developer of one of the most powerful publicly available AI systems, GPT-4 – has supported increased regulation of the technology for some time. Now, he’s joined other industry leaders in calling for new guardrails before the US Congress. “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” Altman told the Senate judiciary committee on Tuesday (May 16). Many of these models have, of course, been introduced by OpenAI itself, which is responsible for the widely used chatbot ChatGPT, as well as the pioneering image generator DALL-E 2. “For a very new technology we need a new framework,” he added, arguing that existing frameworks are insufficient to ensure that AI doesn’t have disastrous consequences for humanity. As one of several experts to call for a new regulatory agency for AI, he suggested that the government could establish a set of independent safety standards and tests that models would have to pass before deployment, as well as licensing requirements for developers.
At this point, the potential downsides of new AI tools (from harmful deepfakes, to weaponised disinformation, to impersonation fraud and job displacement) are widely understood. During the hearing, politicians also drew comparisons to the disruptive impact of social media on society, which makes sense – like AI, social media was a new technology that raised a range of social and ethical problems that regulators were totally unprepared for. Hopefully, by learning from the past, they’re less likely to repeat the same mistakes. Undoubtedly, regulation will play a big role in controlling artificial intelligence as it progresses toward “godlike” levels of knowledge. Vocal critics of OpenAI, however, have suggested that Altman’s calls for government intervention will help the company maintain its established lead while strangling competitors (including open source efforts). Co-founder Elon Musk, who resigned from the company’s board in 2018 and later signed an open letter calling for a pause on developing new AI systems, also recently criticised the company for transitioning to a for-profit business model.
GPT’s reaction to this article:
As an AI language model, I am not capable of having opinions or beliefs. However, I can provide an analysis of the article. The article discusses the CEO of OpenAI, Sam Altman, joining other industry leaders in calling for increased regulation of AI technology. Altman argues that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models. He proposes the establishment of a new regulatory agency for AI, which would set independent safety standards and tests for AI models and licensing requirements for developers. The article acknowledges the potential downsides of new AI tools and the need for regulation to control AI as it progresses toward “godlike” levels of knowledge. However, some critics suggest that Altman’s calls for government intervention will help OpenAI maintain its established lead while strangling competitors.