28 Comments

How is Congress going to regulate something they do not understand? It will be net neutrality all over again, and more than likely with a worse outcome. Plus, some of these CEOs are being disingenuous. I am looking at you, OpenAI CEO: I believe his push for 'regulation' is to stymie competition. Let the technology grow naturally and then adjust accordingly; we have so many laws already, I'm sure one would apply to nefarious actors if it came to it.


Yes, this seems correct.

We should spend more time worried about the harm that regulations might do, especially since they seem on track to be so far from perfect.


I like the new format with short topics. Nice way to mix it up and serve stories like these that might not merit a whole article.


It's hard to comment on, though.

I would prefer to see each thought posted as a separate article, even if that generates 3 or 4 articles in a day. People can then focus on a specific article (or more if they have the time) and ignore the rest.

Substack writers seem to think that they have to turn out in-depth, long, analytic articles to attract subscribers. I don't believe this to be true.


That's a good article. Information technologies are extremely difficult for governments to curb. Everyone can access the dark web with the Onion browser. If you type "the pirate bay" into Google, you still get a link to a leading portal for downloading pirated movies and software. Deepfakes are possible to create. If the US government couldn't deal with all these illegal activities, how is it going to curb bad actors using open-source AI? "capabilities of unlicensed models would be improving rapidly, just a couple of years behind the licensed models." - when comparing Stable Diffusion to Midjourney, or Llama 2 to GPT-4, it looks like they are more like a couple of months behind.


They always will be, as they lack the keys, the understanding, and the basic knowledge of how the internet is constructed.


But don't believe that Tor/Onion is completely anonymous. Governments certainly own some nodes, and they may be able to trace users or reconstruct some content from intercepted traffic.

https://medium.com/coinmonks/can-tor-keep-you-anonymous-see-how-fbi-arrested-an-illegal-tor-user-ef8288f3480e
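For intuition, here's a toy sketch of the traffic-correlation idea (Python, with entirely invented timestamps): an observer watching both ends of a circuit doesn't need to break any encryption, just match the timing patterns of the flows.

```python
# Toy sketch of a timing-correlation attack on an anonymity network.
# All timestamps are invented. An observer who sees traffic entering the
# guard node and leaving the exit node can match flows by shape alone,
# without decrypting anything.
import numpy as np

def flow_signature(times, bin_width=1.0, duration=30.0):
    """Bucket packet arrival times into fixed-width bins (the flow's 'shape')."""
    counts, _ = np.histogram(times, bins=np.arange(0, duration + bin_width, bin_width))
    return counts

rng = np.random.default_rng(0)

# A user's packets as seen at the entry node, arriving in a few bursts...
bursts = rng.uniform(0, 25, size=8)
entry = np.sort(np.concatenate([b + rng.exponential(0.3, size=25) for b in bursts]))
# ...and the same flow at the exit node: slightly delayed and jittered, same shape.
exit_ = entry + 0.2 + rng.normal(0, 0.03, size=entry.size)
# An unrelated user's flow, for comparison.
other = np.sort(rng.uniform(0, 30, size=entry.size))

sig = flow_signature(entry)
print("same flow:     ", np.corrcoef(sig, flow_signature(exit_))[0, 1])   # near 1.0
print("different flow:", np.corrcoef(sig, flow_signature(other))[0, 1])   # near 0.0
```

The more relays an adversary can watch, the more circuits it can correlate this way.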


I see. Yet this article promises to teach you "how to stay truely anonymous while using TOR and surfing Darknet," so as I understand it, it claims anonymity is possible. Likewise, chatbot owners could make a deal with the FBI to secretly report suspicious requests. That's another reason for bad actors to just use open-source models.
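For what it's worth, that kind of reporting would be trivial for a hosted chatbot to wire in. Here's a minimal sketch (the keyword list and report function are hypothetical, not any vendor's actual API); the point is that the check lives on the operator's server, so it simply doesn't exist when someone runs an open-source model on their own hardware:

```python
# Minimal sketch of server-side request screening for a hosted chatbot.
# SUSPICIOUS_TERMS and report_to_authorities() are made up for illustration;
# a real system would use a trained classifier, but the architecture is the same.
SUSPICIOUS_TERMS = {"synthesize nerve agent", "enrich uranium"}  # illustrative only

def report_to_authorities(prompt: str, user_id: str) -> None:
    # Stand-in for an actual reporting channel.
    print(f"[flagged] user={user_id!r} prompt={prompt[:60]!r}")

def run_model(prompt: str) -> str:
    # Placeholder for the actual model call.
    return f"(model output for: {prompt})"

def handle_request(prompt: str, user_id: str) -> str:
    if any(term in prompt.lower() for term in SUSPICIOUS_TERMS):
        report_to_authorities(prompt, user_id)   # silent; the user never sees this
        return "I can't help with that."
    return run_model(prompt)                     # normal completion path

if __name__ == "__main__":
    print(handle_request("what's a good pasta recipe?", user_id="u123"))
```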


What area of modern life is easy to regulate? Banking? Air travel? Food supply? Commercial trade? Enforcement of civil rights? Public safety? We are way “behind the curve” on AI because we have lacked a well-functioning federal government for decades. The idea that “government is the problem” is the problem. We had better “roll up our sleeves” on AI soon. The risks are both Gargantuan and Frankensteinian.


Congress attempting to regulate AI is foolish. It can't regulate what anyone outside the USA is doing, so it will effectively just hobble the development of AI technology in the USA.

Congress would do better to direct its energies toward simplifying the tax code, better managing spending, and reducing our debt.


"If someone eventually develops safe driverless trucking technology, AB 316 will make it much more difficult to deploy it in California. That could lead to a future where California becomes something of an economic backwater. Perhaps driverless trucks will carry freight to the California border before a human driver hops into the cab and drives it on to the final destination."

------

Which will make products shipped by truck more expensive in CA, along with everything else, from housing to cars to energy, which is already significantly more expensive than almost anywhere else in the USA.

Is this what our governor wants, just to support a small voting bloc of truck drivers in CA? I suggest the answer should be no.


How do we govern any technology that can simply be moved to a server outside the governing body's jurisdiction, and then, of course, still be made available to the same user base over the Internet?


"Secure physical stuff" makes sense if you're worried about existential threats, but I imagine the more pressing need is that AI makes it much easier to do bad stuff people already do on the Internet today (fakes, spam, harassment, fraud, etc.).

It feels like we're back in that moment where record companies realized the Internet would bring about massive amounts of copyright infringement and settled on the DMCA as a way to address it. Questionable whether the DMCA has worked out, but it seems like the sort of solution senators are looking for, albeit for a much wider range of bad behavior.


The Blumenthal/Hawley proposition seems far removed from the reality of the open-source community and the democratization of AI technologies. It naively hinges on the regulation of 'sophisticated general-purpose AI models,' an ill-defined category, and could stifle innovation while pushing small yet significant players off the playing field and toward countries with permissive regulations.


What many fail to comprehend is that the Internet contains its own rules of compliance. It was designed to survive the most vicious of attacks. Governments may think they can regulate it, but their do-good regulations would have unintended consequences, as even the most heavily regulated attempts, in places such as China, have discovered.


The internet was initially designed to survive a nuclear attack... Its design remains impervious to the most vicious hacks by the most advanced methods. For this, many thanks to the contributors to the RFCs and the design architects at CERN, and to AT&T and Berkeley for UNIX and FreeBSD, which made the first software to run over the net, connecting universities across the country.


AB 316, the California bill regarding autonomous vehicles above a given weight, would establish certain reporting requirements for manufacturers operating on public roads. That seems like a good idea - I'd rather see much more public data come out than the bill requires, tbh. And it requires a test driver to be present in the vehicles - I'd welcome that, given we haven't yet gathered sufficient data to say what is safe. Besides, the test driver can deal with those out-of-distribution situations (construction zones, emergency vehicles) that autonomous vehicles apparently bomb. The bill also requires reporting to the CA Legislature committees in 2029 regarding performance, at which time the legislature can decide whether to change the requirements. That seems pretty reasonable.

Regulation of AI is going to be thorny, no question. I think laws regarding effect on humans & environment are going to be a better choice than ones based on model or data specifics.


You do know that driverless trucks are currently running in TX and other states?


TX has poor regulatory and public-information standards; lack of transparency is all too common. If the trucks are working well, then the companies should not fear safety information being made public. And 5 years is hardly an eternity.


I’m curious where this AI regulation proposal will land on the spectrum between registration and licensing, with the former being basically that you have to provide some info as you get started, and the latter saying you have to prove you meet various criteria before getting permission to operate. Not sure if this spectrum makes sense, but it seems like a lot could hinge on what the plan really is.

author

I agree. I think it will technically be licensing since these companies will need to comply with some minimal requirements to get the license, but I'm not convinced that this will be more than a box-checking exercise for them.


Focusing on physical defense seems like a much, much harder problem (or, if you prefer, a much, much more expensive solution). If you want to secure society against AI-developed pathogens, your security is only as good as the worst-secured lab in the world. If you want to secure critical systems from software hacks, your security is only as good as its worst-secured component (see the SolarWinds hack).
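The weakest-link arithmetic is stark. A toy calculation (the per-component probabilities are invented, purely for intuition):

```python
# Back-of-envelope weakest-link math: even if each individual lab or software
# component is quite secure, the chance that at least one gets breached grows
# quickly with the number of components. The 1%/year figure is invented.
def p_any_breach(p_each: float, n: int) -> float:
    """Probability at least one of n independent components is compromised."""
    return 1 - (1 - p_each) ** n

for n in (1, 10, 100, 1000):
    print(f"{n:>5} components @ 1%/yr each -> {p_any_breach(0.01, n):.1%} chance of a breach")
#     1 components @ 1%/yr each -> 1.0% chance of a breach
#    10 components @ 1%/yr each -> 9.6% chance of a breach
#   100 components @ 1%/yr each -> 63.4% chance of a breach
#  1000 components @ 1%/yr each -> 100.0% chance of a breach
```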

And, of course, you have to deploy your universal security before AI develops to where the threat is feasible! With a laissez-faire approach, surely you can see that AI will win that race. That’s why heavy AI regulation is the only approach with any real chance of success. Frankly, I think we should shut it all down.

author

I agree that if you believe AI is a truly existential threat, then the right solution is to just ban the development of new AI models outright—maybe even ban the creation of more powerful GPUs. I don't favor that and don't think it's going to happen but that's a coherent position.

But assuming that's not going to happen, I don't see a plausible strategy where we allow the development of powerful AI systems but prevent them from being used for evil. At a minimum, I think this would require a regulatory strategy far more onerous than the one Blumenthal and Hawley are considering, and it would need to be adopted by nations around the world so the technology doesn't migrate to more friendly countries.


I agree that AI regulations would have to be onerous, and eventually universal. But the kind of regulations needed to ensure physical security would be much worse - for a concrete example, how would you plan to regulate every Chinese bio lab?

Basically, I feel like you’ve engaged with the challenges of regulating AI but not with the challenges of physical security. I agree that the former is hard, but the latter is even harder. It seems important to try both.

author

I don't know, these seem like symmetrical situations to me. Regulating US biology labs doesn't help if you don't have similar regulations in China, but by the same token, regulating US biology AI won't help unless other countries have similar restrictions.

The difference I see is that billions of people have laptops that will likely be able to run powerful AI models at some point. Whereas most people do not have biology labs in their basement. So while neither technology is easy to regulate, biology labs seem like an easier lift.


Plus, if we didn't adopt any biosecurity regulations after COVID, I don't think there's much chance of the world suddenly implementing an effective global lab-surveillance regime now. A real-life, maybe-lab-engineered pathogen from our most powerful geopolitical adversary, if I may belabor the point, and we've done nothing. I wouldn't count on this as a fallback strategy for survival.


So, I used to think this way too: our best chance of weathering AI-based attacks was to harden human defenses. To the point where last year I left my business-software job of 11 years and signed on to do fraud prevention work at a bank. It's more fulfilling!

And here's what I've learned: serious security is impossible. Not difficult, actually impossible for any kind of normal business. Every new process we implement, every new feature we offer, opens up new security holes. But blocking every avenue would cost so much time and attention that we'd fall behind competitors in features, while annoying legitimate customers with security measures. We'd lose more money than we saved. So my team does our best, but typically we prioritize fixes only after fraudsters have already discovered and exploited a weakness.

Modern society just can't be secured the way you seem to be thinking. If AI gives more actors the ability to design deadly pathogens, or hack infrastructure, for fun or profit... well, it's going to happen. There are too many potential weak links (personnel, physical locations, the entire software supply chain) to possibly secure them all, even for a single risk category like private bio labs. Heck, even a single lab probably can't be secured forever against sufficiently dedicated attackers; everyone gets careless eventually. At best, you can make it inconvenient enough that attackers go find an easier target.

But you COULD restrict frontier model development. You can run a model on a laptop but you can't create one that way. Data centers and chip manufacturing are large operations that can be controlled, to avoid ever creating dangerous models in the first place.
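A back-of-envelope calculation makes the asymmetry concrete (the FLOP figures are rough order-of-magnitude estimates, not authoritative numbers):

```python
# Rough arithmetic behind "you can run a model on a laptop but can't create one":
# training a frontier model reportedly takes on the order of 1e25 FLOPs (a rough
# public estimate), while a high-end laptop sustains maybe 1e12 FLOP/s.
TRAINING_FLOPS = 1e25          # order-of-magnitude estimate for a frontier model
LAPTOP_FLOPS_PER_SEC = 1e12    # ~1 TFLOP/s sustained, generous for a laptop

seconds = TRAINING_FLOPS / LAPTOP_FLOPS_PER_SEC
years = seconds / (3600 * 24 * 365)
print(f"{years:,.0f} years")   # ~317,098 years: training stays a data-center problem
```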


Timothy Lee wrote another good post about why the risks of rogue AI are exaggerated. I also argue that bad state actors might be the problem, not misalignment: https://medium.com/@jan.matusiewicz/autonomous-agi-with-solved-alignment-problem-49e6561b8295
