9 Comments

I totally agree this is a core question in the Great AI Debates. I'd love to see a follow-up in which you posed it to a few serious researchers to see if there are more dimensions to it (Yoshua Bengio comes to mind, for example).


Thanks for the thorough analysis of this argument. I am afraid the only way to have a good World Simulator for AGI training is to have another AGI in the first place that can gather the data and build the Simulator :) But this leads to an interesting idea - can one LLM-like AI provide a simulation of the world for another AI, so that it can test various activities? It would be like people playing a role-playing game, and likewise limited by the accuracy of the game master's vision of the world. It could be useful but also dangerous (I explain why in https://medium.com/@jan.matusiewicz/autonomous-agi-with-solved-alignment-problem-49e6561b8295 "Simple-minded game master")


When it comes to simulating language, the answer right now is "probably." There are already experiments with using bigger LLMs to generate synthetic data to train smaller LLMs, with some promising results. And for a deterministic situation like a board game, we could almost certainly do the same thing (it would be interesting to see if the "game master" AI was able to learn from teaching the smaller AIs).
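The teacher-student setup described above can be sketched in a toy form, using single-pile Nim as the deterministic "board game." Everything here is illustrative: the perfect-oracle teacher, the function names, and the lookup-table "student" are stand-ins for real distillation, which trains a smaller model on teacher-generated text rather than tabulating labels.

```python
# Toy sketch: a "teacher" model generates synthetic labeled data,
# and a "student" learns from it. The teacher is a perfect oracle for
# single-pile Nim (take 1-3 stones; the position is losing for the
# player to move iff pile % 4 == 0). All names here are hypothetical.

import random

def teacher_label(pile: int) -> str:
    # Standard Nim theory for the take-1-2-3 rule.
    return "lose" if pile % 4 == 0 else "win"

def generate_synthetic_data(n: int, seed: int = 0):
    # The teacher labels randomly sampled positions - the analogue of a
    # big LLM generating synthetic training examples.
    rng = random.Random(seed)
    return [(p, teacher_label(p)) for p in (rng.randint(0, 50) for _ in range(n))]

def train_student(data):
    # "Training" = memorizing teacher labels; a stand-in for gradient descent.
    return dict(data)

def student_predict(model, pile: int) -> str:
    # Majority-class fallback for positions the student never saw.
    return model.get(pile, "win")

model = train_student(generate_synthetic_data(200))
agreement = sum(student_predict(model, p) == teacher_label(p) for p in model) / len(model)
print(f"student matches teacher on {agreement:.0%} of seen positions")  # → 100%
```

By construction the student can only ever match the teacher on positions it has seen, which is the comment's point: in a deterministic game the teacher's labels are ground truth, so this works, but a teacher with an imperfect world model passes its errors straight through to the student.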

A harder task would be simulating the real world - can an AI with imperfect information teach another AI to be better than itself at a real-world scenario? That's a fascinating question.


I don't think an AI with imperfect information can teach another AI to have better knowledge. It may, however, let it practice acting in different scenarios and think through the implications. That would serve a function similar to one of the proposed effects of human dreams (https://www.verywellmind.com/why-do-we-dream-top-dream-theories-2795931). The question, however, is: if a scenario is more typical of a book or movie than of the real world, wouldn't the "game master" treat fiction as a source of truth?


I've got a few thoughts here, on both sides of the argument. First, I 100% agree that simulation (and even reasoning) alone is not sufficient to solve many real-world problems. Case in point: we automated washing and drying decades ago, but I'm still waiting for an affordable laundry-folding machine!

On the other hand, when thinking about potential dangers of AI (superintelligent or not) - destruction is far easier than creation. Distilling truth is much harder than "flooding the zone." Navigating roads and traffic laws is harder than disabling or destroying vehicles.

I don't say that to suggest a rogue, superintelligent AI will kill us all - but humans wielding intelligent systems will have a much bigger impact when it comes to making things messier and more chaotic.


Good points!

And to add onto this, the limits of simulation also concern me from a safety perspective. Even if a system performs safely in the sandbox it may still lead to unexpected, bad outcomes in the real world because of contingencies we failed to include in the model. Of course this is compatible with Tim’s argument, which is about superintelligence specifically (and which I also agree with).


Great piece. You summarize it pretty well when you say "simulation isn’t an alternative to collecting data from the real world; gathering data is an essential precondition for creating a realistic simulation."

Real-world data is the main thing that enabled LLMs, AlphaFold, and every other successful AI system of the last 60 years. For most future AI systems you can think of, that data doesn't exist.


Some great comments below. I suppose it reflects the fact that we are still establishing which problems are the "hard" ones to solve.


I appreciate the in-depth response! It's encouraged me to take the plunge and buy a paid subscription, which honestly would be well worth it just for half of the content you produce.

I don't think we disagree all that strongly about what simulation can accomplish. I expect a superintelligence to be able to run simulations more efficiently and accurately than human engineers and scientists currently do, but it will certainly still need external data to inform those simulations. Where we differ seems to be the degree of improvement we expect a superintelligence could achieve over human simulations, and what implications this has for its overall capability to influence the world. I'll try to crystallize my thoughts on the matter and write up a longer response a bit later.
