Conversely, the risk of regulating finished systems, rather than models, is that we might be too late: released open-weight models are, as you note, not that hard to modify, replicate, and hide. Even closed-weight models can be leaked or stolen. If they're powerful enough to have dangerous abilities, the public gets screwed. Without regulation, tech companies have insufficient incentive to prevent this.
If automated intelligence will only ever be as dangerous as computer chips, then only regulating systems will be fine. If it might be as dangerous as nuclear weapons, then I suppose we'd want to regulate its components more like fissile material.
Sam Altman was on the latter side [checks notes] sixteen months ago. How time flies.
Yes, that is the argument. If I thought AI systems could be as dangerous as nuclear weapons I'd probably favor something much stronger than SB 1047. But I've spent a lot of time over the last 18 months studying the arguments for existential risk and have not been convinced. I explained some of the reasons in these articles:
https://www.understandingai.org/p/predictions-of-ai-doom-are-too-much
https://www.understandingai.org/p/why-im-not-afraid-of-superintelligent
Those posts are focused on risks from AGI. I don't think any company would intentionally release the weights of a model that was close to AGI. I expect the first AGI training runs to cost hundreds of billions or trillions of dollars; a model that expensive would be too valuable to give away, and it would be far beyond the $100M threshold of the CA bill. The bill could be more useful against catastrophic misuse of tool AI by malicious actors. But preventing that kind of misuse still requires regulating the models themselves: if North Korea fine-tunes its own versions of Llama-5, then creates millions of instances of them to hack infrastructure or develop bioweapons, none of the regulations on specific applications will matter.
typo: should be Wiener, not Weiner
Oh crap thank you.
But, this is different! Shoggoth will kill us all! This is the last chance to repent and PAUSE!!!
/s