28 Comments

How is Congress going to regulate something they do not understand? It'll be net neutrality all over again, and more than likely with a worse outcome. Plus, some of these CEOs are being disingenuous (I'm looking at you, OpenAI CEO). I believe his push for 'regulation' is to stymie competition. Let it grow naturally and then adjust accordingly. We have so many laws already; I'm sure one would apply to nefarious actors if it came to it.

I like the new format with short topics. Nice way to mix it up and serve stories like these that might not merit a whole article.

That's a good article. Information technologies are extremely difficult for governments to curb. Everyone can access the dark web with the Onion browser. If you type "the pirate bay" into Google, you still get a link to a leading portal for downloading pirated movies and software. Deepfakes are possible to create. If the US government couldn't deal with all these illegal activities, then how is it going to curb bad actors using open-source AI? As for "capabilities of unlicensed models would be improving rapidly, just a couple of years behind the licensed models": when comparing Stable Diffusion to Midjourney, or Llama 2 to GPT-4, it looks like they are more like a couple of months behind.

What area of modern life is easy to regulate? Banking? Air travel? Food supply? Commercial trade? Enforcement of civil rights? Public safety? We are way "behind the curve" on AI because we have lacked a well-functioning federal government for decades. The idea that "government is the problem" is the problem. We had better "roll up our sleeves" on AI soon. The risks are both Gargantuan and Frankensteinian.

Congress attempting to regulate AI is foolish. They can't regulate what anyone outside the USA is doing, so they will effectively just hobble the development of AI technology in the USA.

Congress would do better to direct their energies toward simplifying the tax code, better managing spending, and reducing our debt.

"If someone eventually develops safe driverless trucking technology, AB 316 will make it much more difficult to deploy it in California. That could lead to a future where California becomes something of an economic backwater. Perhaps driverless trucks will carry freight to the California border before a human driver hops into the cab and drives it on to the final destination."

------

Which will make products shipped by truck more expensive in CA, on top of everything else, from housing to cars to energy, that is already significantly more expensive than almost anywhere else in the USA.

Is this what our governor wants, to support a small voting bloc of truck drivers in CA? I suggest the answer should be no.

How do we govern any technology that can simply be moved to a different server outside of the governing body's jurisdiction? And then, of course, still be made available to the same user base over the Internet.

"Secure physical stuff" makes sense if you're worried about existential threats, but I imagine the more pressing need is that AI makes it much easier to do bad stuff people already do on the Internet today (fakes, spam, harassment, fraud, etc.).

It feels like we're back in that moment where record companies realized the Internet would bring about massive amounts of copyright infringement and settled on the DMCA as a way to address it. Questionable whether the DMCA has worked out, but it seems like the sort of solution senators are looking for, albeit for a much wider range of bad behavior.

The Blumenthal/Hawley proposition seems far removed from the reality of the open-source community and the democratization of AI technologies. It naively hinges on the regulation of 'sophisticated general-purpose AI models,' an ill-defined category that could stifle innovation and push small yet significant players off the playing field and toward countries with permissive regulations.

What many fail to comprehend is that the Internet contains its own rules of compliance. The internet was designed to survive the most vicious of attacks. Even so, governments may think they can regulate it with their do-good regulations, but those would have unintended consequences, as the most heavy-handed attempts, in places such as China, have discovered.

The internet was initially designed to survive a nuclear attack... Its design remains impervious to the most vicious hacks by the most advanced methods. For this, many thanks go to the contributors to the RFCs and to the design architects at CERN, AT&T, and Berkeley, whose FreeBSD and UNIX work produced the first software to run over the net, connecting universities across the country.

Sep 20, 2023·edited Sep 20, 2023

AB 316, the California bill regarding autonomous vehicles above a given weight, would establish certain reporting requirements for manufacturers operating on public roads. That seems like a good idea - I'd rather see much more public data come out than the bill requires, tbh. And it requires a test driver to be present in the vehicles - I'd welcome that, given we haven't yet gathered sufficient data to say what is safe. Besides, the test driver can deal with those out-of-distribution situations (construction zones, emergency vehicles) which autonomous vehicles apparently bomb. The bill also requires reporting to the CA Legislature committees in 2029 regarding performance, at which time the legislature can decide whether to change the requirements. That seems pretty reasonable.

Regulation of AI is going to be thorny, no question. I think laws regarding effect on humans & environment are going to be a better choice than ones based on model or data specifics.

I’m curious where this AI regs proposal will land on the spectrum between registration and licensing, with the former basically meaning you have to provide some info as you get started, and the latter meaning you have to prove you meet various criteria before getting permission to operate. Not sure if this spectrum makes sense, but it seems like a lot could hinge on what the plan really is.

Focusing on physical defense seems like a much, much harder problem (or, if you prefer, a much, much more expensive solution). If you want to secure society against AI-developed pathogens, your security is only as good as the worst-secured lab in the world. If you want to secure critical systems from software hacks, your security is only as good as its worst-secured component (see the SolarWinds hack).

And, of course, you have to deploy your universal security before AI develops to where the threat is feasible! With a laissez-faire approach, surely you can see that AI will win that race. That’s why heavy AI regulation is the only approach with any real chance of success. Frankly, I think we should shut it all down.
