12 Comments

The EU AI Act actually isn't the worst legislation, and assuming that the costs of developing different systems for different jurisdictions are very high, the AI Act is likely to become something like the global standard, just as GDPR did for privacy. And yet you're right: especially with foundation models, it's hard to predict exactly where harm might occur, and lawmakers will likely have to fine-tune the legislation later on. But this will happen against the background of existing and pretty solid tech regulation, especially around online safety and intermediary liability, so it's not as though there's no precedent for what legislation could look like ...


I think the core issue with applying traditional regulatory approaches to AI is that we've never had to regulate a [black ball technology](https://nickbostrom.com/papers/vulnerable.pdf). Bioweapons come close, and could become black balls in the future, but they are currently too difficult and costly to develop, and there are no strong economic incentives favoring their use by rational actors.


I don't know how you can write an anti-regulation piece (Congress shouldn't rush into regulating AI) when you also write that an AI in charge (of weapons systems, for example) might turn against us.

>I think it's totally plausible that if we put AI in charge of everything then we're in trouble if it decides to turn against us. But in my view the right response is to avoid putting AI in control of weapons and other key infrastructure, not to try to prevent the creation of hostile AI. Maybe we collectively won't have the foresight and discipline to avoid putting the AI in charge, but I think arguments over alignment and fast takeoffs are misguided.

Surely you think that Congress should regulate AI on weapons systems then?

author

“Don’t rush” doesn’t mean don’t do it at all. There is time to do it in a thoughtful, well informed way.

May 20, 2023·edited May 20, 2023

I have been reading warnings about the dangers of generative AI since the deployment of DALL-E 2, and yet I haven't heard about any actual people hurt by DALL-E, Midjourney, ChatGPT, or Bard. That doesn't seem to stop the fearmongering. And, as I argue in https://medium.com/@jan.matusiewicz/misplaced-concerns-real-risks-of-chatbots-75dcc730057 , bad actors could just use open-source models and ignore any regulations.


If politicians got their way, technology would advance at a sluggish pace. It's worth remembering that they love control, so of course both parties are entertaining methods of control.


I'm having trouble understanding the difference between "regulating providers of so-called foundation models" and "a licensing regime for foundation models themselves." The rest of that section makes sense, except that right there in the middle you seem to acknowledge that the EU approach incorporates Altman's proposed approach (if I'm understanding correctly).

author

Hi, good question! I think there's a distinction to be drawn between regulation and licensing. Regulation requires foundation model publishers to take certain steps (like testing and disclosure) but then leaves them free to publish their model once they've complied with the requirements. Licensing requires a publisher to ask for permission from a regulatory agency and then wait for that permission before publishing. For an agency like the FDA, this process can take years and involve a ton of uncertainty, so it's a much bigger impediment to innovation.

I read the AI Act as imposing regulations on foundation models but not creating a licensing regime for them. I don't see anything about foundation model publishers having to request a license from a government agency or anything like that.

However, it's possible I'm misreading it. The EU legislative process is complicated, and it's possible I'm not looking in the right place. Or maybe this will wind up being a de facto licensing regime once the EU text is implemented by member states. Please let me know if I'm misunderstanding how the EU law works.

Also I wouldn't say I'm a big fan of the AI Act. It seems likely to be a big stumbling block to any effort to create an indigenous AI industry in Europe, and that doesn't seem great for the world. If Congress does wind up passing AI legislation, I hope it takes a lighter touch.


I am not quite sure what Altman referred to with "licensing." The AI Act demands that high-risk AI applications (and only those) be registered and adhere to certain standards to ensure that the system works as intended, including testing. As a business user, that's what I would expect my vendors to do anyway (think of all the recruiting AI that, on closer inspection, turned out to be snake oil). Therefore I also don't think the AI Act will be a major factor preventing Europe from developing "European" LLMs. There are other things, like financing, that I think are bigger problems to solve.


Thanks! That helps clarify. I certainly don't know any more than you (though I'm noticing a trend in which I make more confident pronouncements on AI topics than my knowledge warrants).


Great coverage, and a good example of the need for momentum thinking. Here's our attempt: https://www.2buts.com/p/ai-regulation-audio
