31 Comments
Alexander Clinton:

Like so many California politicians, these AI safety saviors want to gain power and control the rest of us in order to aggrandize their own careers and satisfy their own kneejerk submission to fear. California is shooting itself in the foot with this legislation. Vote against this bill and anyone who supports it.

Malcolm Sharpe:

SB-1047 is so strange. I've never seen anyone who likes it who isn't a doomer.

Timothy B. Lee:

I honestly struggled to find supporters who would talk to me about it on the record. The only person who did is Zvi Mowshowitz. The AI safety groups supporting it are well-funded, but the bill does not have very broad support.

Garrison Lovely:

What do you make of this poll showing broad support? https://futureoflife.org/wp-content/uploads/2024/07/AIPI-SB1047-California-Poll-Topline.pdf

(EAs are afraid of journalists, but probably more because of the media response to SBF than anything more nefarious.)

Timothy B. Lee:

I think most people like the idea of AI safety in general, but that's different from an expert closely reading a bill and concluding it's a good idea. So, to be more precise, the bill may have broad popular support, but its support among knowledgeable elites seems pretty narrow.

Garrison Lovely:

Then what do you make of Hinton and Bengio's support? My impression of the opposition is that it's mostly coming from industry and academics with industry ties/multibillion dollar startups.

Tim Tyler:

Industry, academics and VCs seem to be the groups most likely to be screwed over by this regulation. Surely, it makes sense that they would be the ones objecting to it.

Michael Cohen:

I think people are much more likely to closely read the bill if they fear it will affect them personally and they might need to Do Something about it. So restricting to that set of people is likely to bias the sample pretty radically. When polling is done, pollsters do offer arguments both for and against the bill, which I think makes people's opinions pretty well-founded. There's also the polling result that I think it's something like +47 in support among tech workers.

Tim Tyler:

A poll from AIPI? Don't you think they might be biased?!? https://theaipi.org/

FeepingCreature:

I mean, if you like "bill that tries to curtail risk of widescale destruction from artificial intelligence", you're pretty much a doomer by definition, no? The question for non-doomers shouldn't be "will this bill save us", because you don't think you need saving in the first place, it's "will this bill harm us even if we don't need saving". And honestly, as a doomer, I don't understand how you think it does. Like... if you as a corporation want to release something that can cause mass casualties, but you think it could also do good, I *do* want you to default to not releasing it. Corporations should not take massive gambles with human lives! And if you don't think there's such a risk, why would you be worried?

Malcolm Sharpe:

> if you as a corporation want to release something that can cause mass casualties, but you think it could also do good, I *do* want you to default to not releasing it.

This is addressed by the article with the truck example. Trucks have been used for harmful acts that meet the SB-1047 threshold ($500m), for example in the 1995 Oklahoma City bombing and the 2016 Nice truck attack. Yet trucks are not banned.

More generally, if the benefits of a release exceed the harms of a release, then banning the release is net harmful. Let's say a model release enables $5 billion of benefits and a $1 billion harm event, for a net $4 billion of benefits. Yet the $1 billion harm crosses the $500m threshold, triggering a ban. In this example, the ban causes $4 billion of net harm.

> And if you don't think there's such a risk, why would you be worried?

Because LLM releases, especially weights releases, are on the whole net beneficial.

FeepingCreature:

I just don't think the truck comparison is any good. The big difference is that trucks are instances of a general kind of tool that has lots of other instances as well, such as "cars", "horses" and "walking". Generally speaking, if you can cause $500m of damage with a particular truck, you can cause it with any other truck, and usually also with a minivan, a car, or even a backpack. If there were a particular way that you could use *only* and *specifically* trucks to cause damage in excess of $500 million - which is, by the way, also the standard employed in SB 1047 - I would in fact not want the corporation working on them to release a detailed guide on how to build and deploy trucks for free on the internet. I would want that to be authorized by a higher instance, and at least make a credible attempt to see whether we can get the benefits without the drawbacks, such as limiting production of trucks and licensing drivers. (In this analogy, this is equivalent to closed-source AI.)

Malcolm Sharpe:

It's impossible to have a productive discussion this deep in hypotheticals. Give me a specific example of a harm that happened, in real life, as a result of a model release that would have been prevented by SB-1047.

FeepingCreature:

Nothing of the sort has happened yet. Hopefully (though I'm not optimistic) if we pass appropriate laws now, it never will.

But also, note if you think that such harms cannot occur, then SB-1047 should be a total non-factor to you. Who cares if you're liable for something that won't happen?

Malcolm Sharpe:

> Who cares if you're liable for something that won't happen?

SB-1047 does a lot more than just impose liability. As a random example, it imposes an auditing requirement.

Such requirements would certainly add friction to model releases such as Llama 3.1 405B. Since those model releases are beneficial, adding friction to them is bad.

Tj:

Can AI companies just implement borders (à la Pornhub blocking certain states) to avoid the CA law?

TR:

I don't think that would be achievable for open-weights models, though. Someone who downloads the Llama weights could use them to build a quasi-Llama that can be used in California, and Meta might still be liable for what users do with that model.

werdnagreb:

What sort of risks is this law trying to protect against? Is it primarily a super-human AI destroying humanity (which I personally believe is ridiculous), or is it trying to prevent bad actors from using it for cybercrime, fraud, and terrorism? If it's the latter, then larger models can be as useful in protecting against cybercrime as they can be used to commit it.

My major concern with this law is that if it is passed, it will further consolidate power in those few companies with enough resources to spend $100 million+ on training. Small companies and non-profits will be locked out and we will be stuck with powerful tech oligopolies.

Jelle Donders:

But you're not affected if it's below $100 million?

Scott:

It says in one of the image captions that you interviewed Lina Khan. Is there any place we can view/read that?

Timothy B. Lee:

I'm told a video of the event will be posted soon. My interview with Khan was fairly short: I asked four questions before the event was opened up to the audience.

Timothy B. Lee:

Here it is. https://www.youtube.com/watch?v=pieVtTrbDBs

Khan's speech starts around 56 minutes, and my questions start around 1 hour, 8 minutes.

Jelle Donders:

I wonder what you think about this analysis that comes to a quite different conclusion: https://thezvi.substack.com/p/guide-to-sb-1047

Timothy B. Lee:

I think that if you believe AI could pose an existential risk to human beings in the next few years, then SB 1047 is a sensible approach to mitigating those risks. I think the latest changes are an improvement.

Where Zvi and I disagree is that Zvi thinks AI does pose an existential risk and he therefore supports the bill. I am not worried about existential risks so the bill seems unnecessary to me.

Jelle Donders:

Makes sense, that indeed seems like the core disagreement.

I think confidently claiming that AI risk is either 100% or 0% isn't justified; there are many uncertainties and experts wildly disagree. However, taking AI risk seriously doesn't require such (over)confidence, but dismissing it does.

So why confidently claim it's 0% anyway, despite forecasting and expert predictions being far from 0?

Timothy B. Lee:

I absolutely think that AI will create new risks, since every major new technology does. I'm in favor of closely monitoring these risks and regulating when it makes sense to do so. But requiring model developers to certify that no one will cause harm with their models seems like requiring automakers to certify that no one will drive their cars recklessly. I think it makes sense to wait and see what concrete harms materialize and then pass regulations that address those (as, for example, many states are doing right now to address the misuse of deepfake technology).

Tim Tyler:

Re: "a proprietary tier of higher-performing models and an open-weight tier of much weaker models" - that sounds quite a bit like the current situation and we could plausibly get something similar in the future - even without regulation.

Timothy B. Lee:

Sure. I just think the gap could get much bigger in a world with SB 1047.
