Discussion about this post

Chris M.:

Conversely, the risk of regulating finished systems, rather than models, is that we might be too late: released open-weight models are, as you note, not that hard to modify, replicate, and hide. Even closed-weight models can be leaked or stolen. If they're powerful enough to have dangerous abilities, the public gets screwed. Without regulation, tech companies have insufficient incentive to prevent this.

If automated intelligence will only ever be as dangerous as computer chips, then regulating only finished systems will be fine. If it might be as dangerous as nuclear weapons, then I suppose we'd want to regulate its components more like fissile material.

Sam Altman was on the latter side [checks notes] sixteen months ago. How time flies.

Sid Kapur:

Typo: should be Wiener, not Weiner.
