Sam Altman's proposal to license foundation models isn't ready for prime time.
The EU AI Act actually isn't the worst legislation. Assuming the costs of developing different systems for different jurisdictions are very high, the AI Act seems likely to become something like the global standard, just as GDPR did for privacy. And yet, you're right: especially with foundational models, it's hard to predict exactly where harm might occur, and lawmakers will likely have to fine-tune the legislation later on. But this will happen against the background of existing and pretty solid tech regulation, especially around online safety and intermediary liability, so it's not as though there's no precedent for what such legislation could look like ...
I think the core issue with applying traditional regulatory approaches to AI is that we've never had to regulate a [black ball technology](https://nickbostrom.com/papers/vulnerable.pdf). Bioweapons come close, and could become black balls in the future, but they are currently too difficult and costly to develop, and there are no strong economic incentives favoring their use by rational actors.
I don't know how you can write an anti-regulation piece (Congress shouldn't rush into regulating AI) when you also write that an AI in charge (of weapons systems for example) might turn against us.
>I think it's totally plausible that if we put AI in charge of everything then we're in trouble if it decides to turn against us. But in my view the right response is to avoid putting AI in control of weapons and other key infrastructure, not to try to prevent the creation of hostile AI. Maybe we collectively won't have the foresight and discipline to avoid putting the AI in charge, but I think arguments over alignment and fast takeoffs are misguided.
Surely you think that Congress should regulate AI on weapons systems then?
I have been reading warnings about the dangers of generative AI since the deployment of DALL-E 2. And yet I haven't heard of any actual people hurt by DALL-E, Midjourney, ChatGPT, or Bard. That doesn't seem to stop the fearmongering. Oh, and as I argue in https://firstname.lastname@example.org/misplaced-concerns-real-risks-of-chatbots-75dcc730057 , bad actors could simply use open-source models and ignore any regulations.
If politicians got their way, technology would advance at a sluggish pace. It's worth remembering that they love control, so of course both parties are entertaining methods of exerting it.
I'm having trouble understanding the difference between "regulating providers of so-called foundation models" and "a licensing regime for foundation models themselves." The rest of that section makes sense, so long as you forget that right there in the middle you seem to acknowledge that the EU approach incorporates Altman's proposed approach (if I'm understanding correctly).
Great coverage. Such a great example of the need for momentum thinking. Here was our attempt: https://www.2buts.com/p/ai-regulation-audio