Congress shouldn't rush into regulating AI
Sam Altman's proposal to license foundation models isn't ready for prime time.
When Silicon Valley executives testify before Congress, they normally get raked over the coals. But OpenAI CEO Sam Altman’s Tuesday appearance before a Senate Judiciary subcommittee went differently.
Senators asked Altman probing questions and listened respectfully to his answers. Afterward, the subcommittee’s chairman, Sen. Richard Blumenthal (D-CT), praised Altman.
“Sam Altman is night and day compared to other CEOs,” Blumenthal told reporters after the hearing. “Not just in the words and rhetoric, but in actual actions, and his willingness to participate and commit to specific action.”
The centerpiece of Altman’s testimony was a call for a new licensing regime for powerful AI models.
“I would form a new agency that licenses any effort above a certain scale of capabilities, and could take that license away and ensure compliance with safety standards,” Altman said. He added these standards should be focused on “dangerous capabilities” such as the ability to “self-replicate and self-exfiltrate into the wild.”
Altman’s proposal would represent a dramatic expansion of federal power over the AI sector. And as far as I can tell, there’s been little work done to flesh out what such a system might look like.
“I'm a little perplexed by what he's proposing,” University of Colorado legal scholar Margot Kaminski told me on Wednesday. “It doesn't map onto all the laws that are out there.”
Intuitively, it makes sense that a radically new technology like AI might need a new kind of regulatory framework. The problem is that Altman and others who think like him haven’t explained how such a licensing scheme would work. And given how fast the technology is changing, there’s a big risk of getting it wrong.
Two approaches to AI regulation
Another witness at Tuesday’s hearing was IBM executive Christina Montgomery, who defended a more conventional approach to regulating AI. IBM calls it “precision regulation,” and it focuses on overseeing the use of AI in high-stakes domains like criminal justice, hiring, and medicine, where you’d want to be especially careful about removing humans from any decision-making.
Kaminski told me that the European Union is currently developing a new AI Act that takes an approach consistent with IBM’s recommendations. It would classify AI applications by their level of risk and subject higher-risk applications to stricter regulation. Some of the highest-risk applications of AI—for example, tracking people in real time using biometric identifiers—would be banned outright.
The EU published an updated draft of the proposal last week. The new draft caused a stir because it proposed regulating providers of so-called foundation models—powerful machine-learning models like GPT-4 with a wide range of potential uses. Before a European company could build a product on top of a foundation model, the model’s creator would need to provide EU regulators with detailed information about how the model was trained, what it could do, and how potential risks were being mitigated.
Critics warn that this could create a schism in the AI world, since US-based creators of foundation models might be unwilling or unable to comply with EU requirements. European companies could then be cut off from access to cutting-edge US models, which could hamper the development of Europe’s AI sector. Critics also warn that it could limit the development of open-source foundation models, since their sponsors might not have the resources necessary to comply with the EU’s red tape.
Still, the European proposal mainly focuses on regulating consumer-facing applications of AI. In contrast, Altman seems to be advocating for governments to create a licensing regime for foundation models themselves.
I suspect these competing proposals reflect the divergent philosophical approaches I wrote about recently. Altman’s proposal to directly regulate powerful language models reflects the singularist concern that sufficiently powerful AI models could become self-aware and wipe out the human race. In contrast, the IBM and EU proposals reflect a more physicalist approach: focusing on the harms that can occur when people apply AI to specific sectors of the economy.
Sam Altman’s worst fears
Near the start of Tuesday’s hearing, Blumenthal said his “biggest nightmare” about AI was “the effect on jobs.” He asked Altman to share his own biggest AI nightmare and then comment on whether he expected AI to cause large-scale job losses.
“Like with all technological revolutions, I expect there to be significant impact on jobs, but exactly what that impact looks like is very difficult to predict,” Altman said. “I believe that there will be far greater jobs on the other side of this and the jobs of today will get better.”
The panel’s third witness, psychologist and entrepreneur Gary Marcus, said he was also concerned about the potential for job losses. But he then pointed out that Altman didn’t actually reveal his biggest nightmare. So Blumenthal offered Altman another chance to respond.
“My worst fears are that we, the field, the technology, the industry, cause significant harm to the world,” Altman said. “It’s why we started the company [to avert that future]. I think if this technology goes wrong, it can go quite wrong.”
This is still pretty vague, but Altman’s past comments make it clear he’s worried that AI could threaten the survival of humanity. For example, in an interview earlier this year, Altman said that “the worst case is lights-out for all of us.”
Later in Tuesday’s hearing, Altman mentioned AIs designing “novel biological agents” as one of the threats regulators should guard against.
Like Altman, Marcus favors a licensing regime for new AI models. He called for “a safety review like we use with the FDA” to be conducted before a system like ChatGPT could be widely deployed.
Testing for AI safety won’t be easy
Chatbots can produce a wide range of outputs that people might consider unsafe, from bad medical advice to instructions for committing crimes to biased or bigoted statements. Deciding when a chatbot’s responses are harmful enough to justify keeping it off the market seems like a political minefield.
For example, several senators expressed concern that AI-generated misinformation could undermine democracy. Sen. Amy Klobuchar (D-MN) raised concerns about ChatGPT giving voters inaccurate information about how to vote on election day. Others worried that generative AI could produce “deepfake” images, audio, or video that deceive voters and influence how they vote.
But while almost every member of Congress probably agrees that disinformation is bad in the abstract, Republicans and Democrats are likely to disagree sharply about exactly how to define the concept. Moreover, Kaminski told me that contemporary First Amendment jurisprudence would make it difficult for governments in the US to limit AI-generated misinformation. For example, any law requiring a license to generate political speech using AI would likely be struck down as an unconstitutional prior restraint.
There’s also a major conceptual problem with using FDA-style testing to guard against dangerous, superintelligent AI. A basic premise of singularist thought is that such systems will be skilled at manipulating and deceiving humans. Such a system could presumably trick government regulators into approving it by pretending to be less capable and more benign than it really is.
Even if an AI model isn’t dangerous on its own, it could be a significant component of a dangerous system. In recent weeks, people have been experimenting with “agentic” AI systems like Auto-GPT and BabyAGI that effectively give large language models the ability to make plans and then carry them out autonomously. So far, these systems don’t work very well and don’t seem to pose a danger to anyone. But that could change as large language models get more sophisticated.
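For readers who want a concrete picture of what “agentic” means here, the sketch below is a minimal, hypothetical plan-and-act loop in Python. It is not Auto-GPT’s or BabyAGI’s actual code: the `llm()` helper is a stand-in for whatever model API such a system would call, and the tool use and task parsing that make these systems interesting are omitted.

```python
# A minimal, illustrative sketch of the plan-and-act loop behind "agentic"
# systems like Auto-GPT or BabyAGI. The llm() stub is hypothetical and stands
# in for a call to any large language model API; real systems also parse the
# model's reply into tool calls (web search, file writes, code execution).

def llm(prompt: str) -> str:
    # Stand-in for a real model call; returns a canned reply so the loop runs.
    return "Task complete. New sub-tasks: (none)"

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    tasks = [goal]          # start with the user's high-level goal
    log: list[str] = []
    for _ in range(max_steps):
        if not tasks:
            break           # nothing left to do
        task = tasks.pop(0)
        # Ask the model to carry out the current task and propose follow-ups.
        reply = llm(f"Overall goal: {goal}\nCurrent task: {task}\n"
                    "Complete the task, then list any new sub-tasks.")
        log.append(f"{task} -> {reply}")
        # A real agent would parse `reply` for new sub-tasks and append them
        # to `tasks`; with the canned stub above, the queue simply empties.
    return log

if __name__ == "__main__":
    for line in run_agent("Summarize today's AI policy news"):
        print(line)
```

The point of the sketch is simply that the language model sits inside an ordinary loop that keeps handing it new tasks; how capable the overall system is depends as much on that scaffolding and its tools as on the model itself.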
All of which is to say I’m not surprised Altman doesn’t have all the details of his licensing scheme worked out. Guarding against the worst-case consequences of AI seems like a legitimately difficult problem.
But if these details aren’t forthcoming, the result could be a big mismatch between what policymakers say they’re trying to accomplish and what they actually do.
Tuesday’s hearing made it clear that there’s a strong, bipartisan appetite in Congress for new AI regulations. That sense of urgency is driven by the belief that AI could pose a serious threat to our jobs, our democracy, and perhaps even our survival as a species.
Yet concrete regulatory proposals tend to focus on more pedestrian goals. For example, last October the Biden administration published a “Blueprint for an AI Bill of Rights” that included sections on privacy, nondiscrimination, and transparency. There’s also a section on “safe and effective systems” that focuses on ensuring that physical systems like self-driving cars don’t malfunction and hurt people.
These are all worthy concerns, but I don’t think they’re the concerns that keep Sam Altman up at night.
Proceed with caution
A recurring theme of Tuesday’s hearing was that Congress moved too slowly to regulate social media and shouldn’t make the same mistake with AI. I’m not sure I agree with this premise.
Today there’s a fairly broad consensus that social media has deepened partisan divisions and worsened mental health—especially for teenage girls. But it’s not obvious to me that Congress could have anticipated these problems 10 or 20 years ago. And even today, there’s no real consensus about how to solve them.
Right now, generative AI technology is changing so quickly that it’s difficult to predict what it will look like five or 10 years down the road. It’s harder to predict what social or economic problems AI is likely to cause, and still harder to anticipate what policy changes are likely to be helpful.
So it’s not obvious to me that Congress’s sense of urgency on this issue is justified. Enacting a licensing regime now could also cement the dominance of industry incumbents like Google and OpenAI by making it harder for startups to create foundation models of their own. It might make more sense to wait a year or two and see how AI technology evolves before passing a major bill to regulate AI.
In the meantime, I think the best thing Congress could do is to fund efforts to better understand the potential harms from AI. Earlier this month, the National Science Foundation announced the creation of seven new National Artificial Intelligence Research Institutes focused on issues like trustworthy AI and cybersecurity. Putting more money into initiatives like this could be money well spent.
I’d also love to see Congress create an agency to investigate cybersecurity vulnerabilities in real-world systems. It could work something like the National Transportation Safety Board, the federal agency that investigates plane crashes, train derailments, and the like. A new cybersecurity agency could investigate whether the operators of power plants, pipelines, military drones, and self-driving cars are taking appropriate precautions against hackers.
These precautions would make our systems more secure against attacks from humans as well as AIs. And they would also give us a margin of safety if Sam Altman’s nightmare is eventually realized.