As lawmakers from both sides of the aisle appear eager to begin regulating artificial intelligence, experts told the Daily Caller News Foundation that any serious reforms face significant obstacles to implementation.
OpenAI CEO Sam Altman testified at a Senate Judiciary Committee hearing on Tuesday, calling for the regulation of artificial intelligence through licensing and testing requirements or the creation of a new agency, proposals to which many lawmakers appeared receptive. However, actually passing legislation to regulate AI, or creating an agency to oversee it as subcommittee members also proposed, presents numerous legal, political and practical problems, experts told the DCNF.
“I think it’s premature to call for this sort of massive regulation of what is at least right now still kind of a novelty,” Zach Graves, head of policy at the Lincoln Network, told the DCNF.
If not a new agency, then regulation of artificial intelligence would require a “greater kind of authorization of powers for an existing [agency],” Graves said. “And in the US, those are going to run into some real constitutional tests” such as freedom of speech.
Since artificial intelligence chatbots can only generate speech using code programmed by humans, the text they produce could be considered a form of human speech, wrote AI-focused attorney John Frank Weaver in 2018.
Graves also said he seriously doubts that lawmakers will promptly advance any substantial AI regulations. “I just don’t think people are very clear about how this is going to work and I don’t think they really have a clear political path to doing anything,” he said. (RELATED: Federal Agencies Pledge To ‘Vigorously Enforce’ Laws Against Discriminatory AI Technologies In Joint Statement)
At the Tuesday hearing, lawmakers recommended rules requiring companies to reveal the inner workings of their AI models and the datasets they use. They also suggested antitrust measures to prevent companies like Microsoft and Google from monopolizing the emerging AI industry.

Proponents of regulating artificial intelligence argue that such measures are necessary because of the potential damage the technology can cause. “I do think that there should be an agency that is helping us make sure that some of these systems are safe, that they’re not harming us, that it is actually beneficial,” AI ethicist Timnit Gebru said in a 60 Minutes interview in March. “There should be some sort of oversight. I don’t see any reason why this one industry is being treated so differently from everything else.”
Joel Thayer, president of the Digital Progress Institute, told the DCNF he supports some form of regulation, but also has some concerns.
“I agree in principle with Sam Altman’s comments that we need a comprehensive strategy on how best to deal with AI, and we must do so by leveraging all the tools in our toolkit,” Thayer said.
“A worthy initial strategy would be two-fold, stronger antitrust enforcement to decrease the centralization of those markets and, thus, allowing for new market entrants in AI (such as OpenAI),” Thayer said. “The other would be more transparency on how they are developing their AI systems—most importantly with whom they are partnering.”
However, over-regulating AI could present considerable danger to U.S. national security, especially concerning China, Thayer told the DCNF.
“The China threat is preeminent, and we must put our focus there first,” he said.
Overly strict regulations will hamper the United States’ potential to advance its AI technologies, James Czerniawski, senior policy analyst at Americans for Prosperity, told the DCNF. “Anything that we do to slow and impede our progress on AI, it’s just allowing China to close the gap that exists between the United States and China,” he said.
Proposals that reduce the speed of AI development in the U.S. would be undesirable because “when you are the first … to go and crack through any of this stuff, that gives you an immense amount of power,” Czerniawski said.
However, OpenAI CEO Altman told lawmakers that government oversight is crucial because of the potential dangers of future AI models.
“We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” Altman said in his opening statement.
“For example, the U.S. government might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities,” Altman said. Other ways AI companies can work with the government that he shared were “ensuring that the most powerful AI models adhere to a set of safety requirements, facilitating processes to develop and update safety measures, and examining opportunities for global coordination.”
OpenAI did not respond to the DCNF’s request for comment.
All content created by the Daily Caller News Foundation, an independent and nonpartisan newswire service, is available without charge to any legitimate news publisher that can provide a large audience. All republished articles must include our logo, our reporter’s byline and their DCNF affiliation. For any questions about our guidelines or partnering with us, please contact firstname.lastname@example.org.