Discussion about this post

Dean Bubley

An unanswered question is whether superintelligent AI will be able to do all its evil work with existing energy supplies, or whether it will need to spend a decade battling the planning and licensing system to build an extra 100 GW of power and transmission, just like human datacentre developers.

Billy

I believe you’re underestimating some crucial technical factors that make AI risk more plausible than you suggest.

First, frontier models are already demonstrating emergent capabilities: behaviors that weren’t predictable from their training data. Scaling laws predict average loss, but they don’t predict sudden jumps in reasoning, planning, or autonomy. That unpredictability makes it hard to argue that the risks are manageable.

Second, current alignment methods don’t scale effectively. RLHF and fine-tuning are mainly surface-level controls; they don’t alter a model’s underlying goals or capabilities. We’ve already seen jailbreaks and deceptive responses, and as models grow more agentic, shallow guardrails could fail disastrously.

Third, capability externalization is accelerating: open-weight releases, APIs, and automated tool-use pipelines make it easy to assemble systems that function as autonomous agents, increasing misuse risks.

Finally, even if the probability of doom is low, strategic stability still matters. Geopolitical pressure to deploy rapidly erodes safety margins, much like a nuclear arms race. The point isn’t certainty of disaster; it’s that the inherent uncertainty is what makes AI risk dangerous. Dismissing it as “unconvincing” ignores the very unpredictability that makes the risk credible.

