Regulating AI won’t be easy
A new framework for regulating AI raises more questions than it answers.
Last week the Senate Judiciary Committee held a hearing on regulating AI. If you had watched the hearing hoping to gain insight into how Congress might regulate artificial intelligence, you would have mostly been disappointed.
Senators asked executives from Microsoft and Nvidia—plus a law professor from Boston University—if they were concerned about the potential harmful impact of AI on misinformation, jobs, and local journalism. All three witnesses said they were. But there was little concrete discussion of how Congress might address these concerns.
But toward the end of the hearing, the Senate’s youngest member, Jon Ossoff (D-GA), asked a fantastic question that made clear the dilemma Congress faces:
“As more and more models are trained and developed to higher levels of power and capability, there may be a proliferation of models,” Ossoff said. “Perhaps not the frontier models, perhaps not those at the bleeding edge that use the most compute of all, but powerful enough to have serious implications. So is the question which models are the most powerful at a moment in time? Or is there a threshold of capability or power that should define the scope of regulated technology?”
A leading proposal for regulating AI was put together by Sen. Richard Blumenthal (D-CT)—who chaired last week’s hearing—and his GOP counterpart Sen. Josh Hawley (R-MO). Blumenthal and Hawley haven’t published legislative text yet; all they’ve released so far is a press release and a one-page PDF. The latter document says companies will need to register with a new government agency before they can develop “sophisticated general-purpose AI models” (like GPT-4) as well as “models used in high-risk situations” (like facial recognition).
This is in line with the recommendations of witnesses at a previous hearing back in May. But of course “sophisticated,” “general-purpose,” and “high-risk” are not rigorously defined concepts. If Congress wants to turn the Blumenthal/Hawley framework into a new statute, it’s going to have to give those concepts specific definitions. And that’s not going to be easy.
Currently, the most powerful AI models are being produced by a small number of well-funded organizations, including OpenAI, Google, Anthropic, and Meta. If we could be sure that this would continue to be the case, then it might make sense to focus on regulating these companies.
But computing power is steadily getting cheaper, and there’s a thriving open source community that’s finding ways to squeeze more powerful models onto less powerful silicon. In a couple of years, we may be able to run models as sophisticated as GPT-4 on our laptops—maybe even our phones.
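To see why that’s plausible, here’s a rough back-of-envelope sketch of how much memory a model’s weights require at different levels of quantization. The parameter counts and the 64 GB laptop figure are illustrative assumptions on my part, not specs for any particular model:

```python
# Illustrative arithmetic only: how much memory do a model's weights need
# at different quantization levels? The parameter counts and laptop RAM
# figure below are assumptions for the example, not published specs.

def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate gigabytes needed just to hold the weights."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

LAPTOP_RAM_GB = 64  # a high-end laptop today (assumption)

for params in (7, 70, 400):      # hypothetical model sizes, in billions of parameters
    for bits in (16, 8, 4):      # full precision vs. common quantization levels
        gb = weight_memory_gb(params, bits)
        verdict = "fits" if gb <= LAPTOP_RAM_GB else "too big"
        print(f"{params:>3}B params at {bits:>2}-bit: ~{gb:5.0f} GB ({verdict})")
```

The point of the sketch is simply that going from 16-bit to 4-bit weights cuts memory requirements by roughly a factor of four, which is one of the main tricks the open source community is using to get large models running on consumer hardware.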
So as Ossoff noted, Congress faces a difficult choice. If it establishes fixed performance criteria for model licensing—for example, “anything more powerful than GPT-4”—it could wind up covering a bunch of small projects that don’t have the resources to comply with a complex licensing system. In practice, this would likely push open source projects overseas to countries with more permissive rules.
Alternatively, the threshold for regulation could rise over time as models get more powerful. That would allow regulators to keep their focus on companies building the most powerful models. But this would mean that the capabilities of unlicensed models would be improving rapidly, just a couple of years behind the licensed models. If your goal is to keep powerful models away from the bad guys, that might not be good enough.
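To make the tradeoff concrete, here’s a toy sketch of the two threshold designs. The training-compute numbers and the idea of indexing the bar to the current frontier are illustrative assumptions, not anything drawn from the Blumenthal/Hawley framework:

```python
# Toy illustration of two licensing-threshold designs. The training-compute
# numbers (in FLOPs) are made up for the example; nothing here reflects
# actual proposed statutory language.

FIXED_THRESHOLD_FLOPS = 1e25  # a fixed "anything more powerful than GPT-4"-style bar (assumed value)

def covered_fixed(training_flops: float) -> bool:
    """Fixed threshold: the bar never moves, so it sweeps in more models every year."""
    return training_flops >= FIXED_THRESHOLD_FLOPS

def covered_moving(training_flops: float, frontier_flops: float, ratio: float = 0.1) -> bool:
    """Moving threshold: only models within some factor of the current frontier are covered."""
    return training_flops >= frontier_flops * ratio

# Hypothetical landscape a few years from now: the frontier has advanced 100x.
frontier = 1e27
open_source_model = 2e25  # a community model roughly at today's frontier

print(covered_fixed(open_source_model))             # True: small projects get swept in
print(covered_moving(open_source_model, frontier))  # False: regulators stay focused on the frontier
```

Under the fixed rule, yesterday’s frontier model eventually becomes a regulated hobby project; under the moving rule, the unlicensed tier keeps getting more capable, trailing the licensed models by only a couple of years.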
I continue to believe that the best way to address threats from AI is to do a better job of locking down the physical world. If we’re worried about someone using AI to hack into critical infrastructure, we should invest resources in making critical infrastructure more secure. If we’re worried about someone using AI to design a deadly pathogen, we should regulate biology labs more strictly.
It’s probably impossible to stop advanced AI from coming into existence or to prevent the bad guys from getting access. So policymakers should assume the technology will eventually become widely available and focus on minimizing the damage it can do.
Will Gavin Newsom ban driverless trucks?
While Congress is stuck arguing about generalities, the California legislature recently passed legislation with a crystal clear purpose: ban driverless trucking. The bill, AB 316, is now sitting on the desk of Gov. Gavin Newsom. He hasn’t said whether he’ll sign it.
Newsom has traditionally been a strong advocate of self-driving technology. Under his watch, state agencies have largely resisted pressure from the city of San Francisco to curtail driverless taxis. But AB 316 was passed overwhelmingly by both houses of the California legislature and enjoys strong union support. So Newsom will be under a lot of pressure to sign it.
The short-term stakes here are low because companies need a permit from the California Department of Motor Vehicles to operate a driverless vehicle in the state. The DMV hasn’t issued any permits for driverless trucks, and no one expects it to do so in the near future. A lot of the testing of these technologies—with safety drivers—is already happening in other states, including Arizona and Texas.
I’m personally not very optimistic that any of the startups working on this technology will survive. A big reason Waymo and Cruise felt comfortable launching their driverless taxi services is that their vehicles had the option to come to a stop if they encountered a situation they didn’t understand. That is much harder to do with a 50,000-pound truck traveling 70 miles per hour down a freeway.
So the short-term stakes of AB 316 are low—I think we’re unlikely to see driverless trucks operating in California in the next five years with or without this legislation. But the longer-term stakes are significant.
If someone eventually develops safe driverless trucking technology, AB 316 will make it much more difficult to deploy it in California. That could lead to a future where California becomes something of an economic backwater. Perhaps driverless trucks will carry freight to the California border before a human driver hops into the cab and drives the truck on to its final destination.
Of course, if you’re a California truck driver, that probably sounds like a fine future, which is why the Teamsters have been a driving force behind AB 316.
How AI could disrupt society
It seems clear that AI is going to have significant social, political, and economic impacts. But it’s hard to predict what those changes will be. So people have been reaching for analogies to earlier information technologies.
During last week’s hearing, for example, Sen. Hawley repeatedly referenced the rise of social media, arguing that it has been a mistake for Congress to take a laissez-faire attitude toward technology giants over the last 20 years.
In a fascinating three-part series, the writer Sam Hammond draws an analogy to a much earlier information technology breakthrough: the printing press. The printing press democratized access to knowledge, eroding the power of established institutions and enabling new forms of political organization. That sparked the Protestant Reformation and led to a series of political upheavals that swept Europe in the 17th Century.
Hammond argues that something similar is going to happen with AI, and that it will happen over years, not the centuries it took with the printing press. AI will give individuals and small organizations capabilities that only large companies and governments possessed in the past, while eroding traditional forms of privacy. Hammond predicts people will respond to the resulting chaos by retreating into figurative walled gardens and literal gated communities, leading to a wealthier but more balkanized world.
I really enjoyed the first two parts of the essay, which explain how AI could erode the foundations of modern liberal democracies. But I did not find Hammond’s specific predictions in part 3 to be very plausible.
I think that like Internet evangelists a decade ago, a lot of AI enthusiasts are underestimating how much inertia there is in the economic and political systems of wealthy democracies. I have no doubt that certain sectors of the economy—including entertainment and media—will be transformed by AI. But I expect other sectors, such as housing and hospitality, to evolve quite slowly.
But either way, I don’t think we can predict the future accurately enough to be confident that any broad AI legislation will do more good than harm.
Suppose you took a time machine back to 2001 to warn George W. Bush that there was about to be a thing called social media that would worsen teenage depression, destabilize governments in the Middle East, and aid the election of a nativist demagogue in the US in 2016 (or, if you prefer, engage in large-scale censorship of right-leaning political speech).
Do you think Congress could have passed legislation that would have averted these outcomes? I don’t. Even with everything we know today, it’s hard to think of a regulatory framework that would have led to a better outcome.
The printing press and social media were both revolutionary because they removed artificial barriers to people communicating with one another in a decentralized way. No single 17th Century political pamphlet or 21st Century social media post has a big impact. Rather, it was the cumulative impact of thousands of pamphlets—and billions of tweets and Facebook posts—that brought down governments and changed public opinion.
And precisely because they shift power to millions of decentralized individuals, it’s hard to predict how they will affect the world—to say nothing of steering those impacts. Decentralized social and political movements are inherently difficult to control. And here in the US the First Amendment creates a strong presumption against even trying to control them.