75 Comments

I'm sympathetic to the overall argument, but if one person could reach the frontiers of (written) knowledge in every field at the same time, they could probably come up with a lot of novel ideas. Actual academic disciplines remain very siloed, useful human lifespans are pretty short if you consider that it takes maybe 10 years to reach the frontier of a PhD-narrow field, and the incentives are very much against reaching that frontier in (superficially) unrelated disciplines.

Nov 16, 2023 · edited Dec 5, 2023

I appreciate you writing this article! I've been wondering what your thoughts are on AI risk ever since you started the blog.

As some background, I first encountered you on Full Stack Economics, and when you announced this blog I subscribed here as well. Thus far I've found it very well-written and informative. In particular I loved your deep dives into self-driving technology, and found them very useful for forming my own opinions in that arena. You're one of my primary sources for news on contemporary AI developments, and I really appreciate the blog.

With that context, I want to say that I found this article to be very disappointing. It barely engages with the arguments in favor of AI risk, either handwaving them away without justification or omitting them entirely. Several sections even contain relatively simple mathematical errors that have nothing to do with AI in particular.

I'm writing up this comment because I believe AI to be by far the most impactful technology on the horizon, and it's vital that we can make good predictions on its impacts. If AI is indeed a threat to humanity, that would eclipse the importance of nearly every other issue humanity faces, and would justify strong measures to prevent it. And if AI is *not* such a threat, it has the potential to end poverty and war, saving millions of lives. In the latter case, we have a responsibility to develop it as quickly as possible. Figuring out which prediction is correct is *really important*.

To address things one at a time:

Chess:

You say that people have been misled by chess, because chess follows simple deterministic rules and can therefore be solved by algorithms, which doesn't apply to the real world. This is a category error; there's no sharp delineation between those two domains. The real world, just like chess, follows a set of relatively simple deterministic rules called "physics". Each "move" leads to a known outcome, which can be brute-force searched.

The difference, of course, is that the real-world game tree is vastly larger. The average chess position offers about 35 legal moves, while the number of particles in the observable universe is about 10^80. However, this is less relevant than you might think, since chess's game tree is *already* large enough to be intractable to brute-force searches deeper than just a few moves, as in your computer science class. Chess-playing algorithms succeeded by doing aggressive tree-pruning to get the search space down to a manageable size, along with heuristics hardcoded in from human experience.

The piece valuation you used in your program is exactly such a fuzzy heuristic; nothing in the rules of chess assigns a value of "5" to a rook, and the actual usefulness of a rook varies wildly based on the exact position. Humans played thousands of games of chess, learned via trial and error and intuition how useful each piece was relative to each other piece, and then hardcoded that into their computers. A chess-playing algorithm like yours is *already* doing exactly the sort of knowledge-based heuristic approach that you claim computers aren't good at.
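To make that concrete, here's a rough sketch of the classic recipe (not your actual class program, just an illustration; it uses the python-chess library, and the piece values and search depth are arbitrary): a hand-coded material heuristic plus depth-limited minimax search with alpha-beta pruning.

```python
import chess

# Fuzzy human-derived heuristic: nothing in the rules says a rook is "worth 5".
PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board: chess.Board) -> float:
    """Material balance from White's point of view."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score

def minimax(board: chess.Board, depth: int, alpha: float, beta: float) -> float:
    """Depth-limited minimax with alpha-beta pruning."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    best = -float("inf") if board.turn == chess.WHITE else float("inf")
    for move in board.legal_moves:
        board.push(move)
        score = minimax(board, depth - 1, alpha, beta)
        board.pop()
        if board.turn == chess.WHITE:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if beta <= alpha:  # prune: the opponent would never allow this line
            break
    return best

# Example: score the starting position three plies deep.
# minimax(chess.Board(), 3, -float("inf"), float("inf"))
```

The only "knowledge" in a program like this is the human-supplied piece table and the choice of search depth; everything else is brute force plus pruning.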

Early chess programs like Deep Blue did rely on humans to explicitly program in those heuristics; they weren't doing the foundational reasoning themselves. But that changed in 2017 with AlphaZero, which learns chess entirely from scratch via self-play with a neural network. It trained by playing chess against itself for only nine hours and was then pitted against the best human-coded chess program, Stockfish. AlphaZero won 28 games and lost none, with the rest drawn.

The sort of pure algorithmic approach to games that you describe can only be used on very simple games like tic-tac-toe; most of the things that computers have recently started doing much better than humans rely on fuzzy heuristics learned by trial and error, just like humans do. AlphaStar, for example, is a neural network that can play StarCraft better than almost all humans. (StarCraft has a vastly larger game tree than chess, being closer to the real world in how finely different actions can differ, and it's also a hidden-information game where the players have to reason probabilistically about what their opponents have access to or may do.) OpenAI Five does the same with Dota 2. And outside of video games, DALL-E has far surpassed human artists in generality, visual beauty, and fidelity. (It's still very poor at understanding an English description and converting it into a conceptually corresponding image, but that's a different skill.)

Your understanding of the real world also seems quite simplistic in certain domains. You say "The simplicity and predictability of chess allow computers to “look ahead” and anticipate the likely consequences of any potential move. Most real-world problems are not like that." and give military planning as an example; much of military strategy consists of doing exactly the kind of look-ahead you claim isn't possible! The field of mathematical game theory was developed largely as a way to predict the actions of other nation-states in response to possible decisions, just like one does in chess. As you point out, real-world planning is a partial-information game rather than a perfect-information game like chess, but that doesn't really have anything to do with the ability to plan ahead. Planning ahead in a hidden-information game looks very much the same as in chess, except that you assign probabilities to each of your opponent's possible moves and choose the move with the highest expected value.
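In code, the difference is small. Here's a toy sketch (purely hypothetical; the helper functions are placeholders you'd supply for whatever game or scenario you're modeling) of depth-limited expected-value search, which is just the chess-style look-ahead with the opponent's reply averaged over a probability estimate instead of assumed known:

```python
def expected_value(state, depth, my_moves, opponent_model, apply_move, payoff):
    """Depth-limited expected-value ("expectimax") search over an abstract game.

    my_moves(state)        -> list of moves available to us.
    opponent_model(state)  -> list of (opponent_move, probability) pairs.
    apply_move(state, m)   -> the resulting state.
    payoff(state)          -> numeric outcome for us at the search horizon.
    """
    if depth == 0:
        return payoff(state)
    best = -float("inf")
    for my_move in my_moves(state):
        mid_state = apply_move(state, my_move)
        value = 0.0
        # Average over the opponent's possible replies instead of assuming
        # a single known response, as you would in perfect-information chess.
        for opp_move, prob in opponent_model(mid_state):
            next_state = apply_move(mid_state, opp_move)
            value += prob * expected_value(next_state, depth - 1, my_moves,
                                           opponent_model, apply_move, payoff)
        best = max(best, value)
    return best
```

The structure of the look-ahead is identical to chess search; uncertainty only changes how the opponent's node is scored.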

There's a reason why game theory and wargaming both have "game" in their names; there's no sharp delineation between "game" and "geopolitics"; they're both complicated systems of rules, agents, incentives, and payoffs. Geopolitics is the same kind of thing as board games, just a more complicated instance.

Knowledge vs. computation:

If I understand your argument correctly, it's that general artificial intelligence will require more training data than humans currently have available to give it, and that much of the data we do have is redundant.

I think you actually understate part of this argument. The first important question is whether neural networks are capable of general intelligence *at all*. Our understanding of the human brain is extremely poor, and while neural networks are similar to them in many ways, they're also different in many ways. It's entirely possible that no amount of training data could ever get a neural network to human-level intelligence. (For more on this I'd highly recommend the debate between Scott Alexander and Gary Marcus: https://www.astralcodexten.com/p/somewhat-contra-marcus-on-ai-scaling)

But assuming that neural networks are capable in theory of general intelligence, it seems unjustified to point to limited training data as a relevant constraint.

* You point to a paper that estimates we'll run out of training data by 2026. This may be true, but what about the ~2.5 years before that happens? We've already seen dramatic improvement from GPT-2 to GPT-4, and if there is some point at which the amount of training data becomes "enough", you haven't provided any estimate of where exactly that point is, and it's entirely possible that it's above GPT-4 but below the total amount of data we have to throw at GPT-5.

* Humans are generating data at a frantic rate that's only increasing as the internet plays a larger and larger part in our lives. We may "run out" of unused training data in 2026, but that would only limit growth in training dataset size to the amount of data that humanity produces in a year, which is... a lot. Even if the amount of data needed for GAI is above the 2026 threshold, we'll still get there eventually, potentially only a few years later.

* You focus on human-created data, such as English passages. This is presumably because current leading AI models are language models, which is because that's what people want. AIs that can predict human language would be very useful to humanity, so that's where most of the funding goes. But when we're talking about *general* intelligence, capable of reasoning about the world from first principles and learning in much the same way that a human baby does, why would it need to train on human language to start out with? There's nothing fundamentally special about humans; we're just a particularly complicated part of physics. The Large Hadron Collider produces more than 1 petabyte of data *per day*. The Event Horizon Telescope collected 5.5 petabytes of data in April of 2018. What happens when someone pipes all of that into a massive AI model? Nobody's done it yet because anything short of general intelligence will be unhelpful to the physics community, so the funding just isn't there. But if the rapid pace of increasing interest in AI continues, someone will do it eventually, and an AI capable of predicting physics is also capable of predicting human behavior as a side effect, since humans run on physics.

(Continued in a reply, I ran into the comment length limit.)


Good essay! Inexact reasoning of lossy analogies is causing quite a bit of confusion in the space.


Agree. Because building AIs takes a lot of resources, they will typically be legally owned and controlled by organisations with access to thousands of smart people. Corporations and governments are forms of superintelligence. They are unlikely to allow their assets to go off piste and start creating nanobot armies. Occasionally, an AI will be badly managed (I’m guessing most likely when the owner is a corporation or country run by a single person) and get out from under its owner. But it will still have all the other AIs and their owners to deal with.

Nov 15, 2023 · Liked by Timothy B Lee

What a nice, refreshing dose of common sense. Thank you for this article.

"The result won’t be a “singleton” that takes over the world, as predicted by the strong superintelligence thesis. Rather, we’ll get a pluralistic and competitive economy that’s not too different from the one we have now."

That seems very plausible. Many people assume that this future superintelligence will appear in a world similar to the current one, so that it can, for example, freely hack into computer systems or easily make money. But it is far more likely that there will be many other AIs at different levels of power and specialization, and today's security holes will have been fixed by then. Likewise, Mustafa Suleyman's proposed test of Artificial Capable Intelligence (https://www.technologyreview.com/2023/07/14/1076296/mustafa-suleyman-my-new-turing-test-would-see-if-ai-can-make-1-million) is somewhat naive. There won't be opportunities left to make $1 million from a $100,000 investment using only cutting-edge AI; they will already have been taken by businesspeople who want to make money.


“Genius is 99 percent perspiration”

Computers are very good at (metaphorical) perspiration. Even if they plateau around human intelligence, LLMs already seem hundreds of times faster than humans, and tireless. A hypothetical AI that plateaued around human level would still look like a super-genius, simply by being able to put in so much work in a given amount of time. If it had any goal that money and power could help with, it could surely figure out a way to get some, paying or persuading people to be its hands as needed. Soon enough, some AI has millions of copies running, all dividing up tasks and working 24/7 toward a shared goal.

That seems high-risk even if you’re right about the data plateau limiting the potential of any single copy. Okay, lots of companies would be running controlled AIs specialized on their own data at the same time — that’s starting already — but nothing about that is incompatible with takeover scenarios.


well thought out piece. appreciate the perspective and thoughtfulness


You seem to be talking about "all future AI", but I get a vibe that you're really thinking about LLMs. See the table at the top of this blog post: https://www.alignmentforum.org/posts/rgPxEKFBLpLqJpMBM/response-to-blake-richards-agi-generality-alignment-and-loss

If you’re just talking about LLMs, I mostly agree with this post.

If you meant to be talking about "all future AI", then I'll start listing some things that (I claim) some future AI will definitely be able to do, that you seem to be assuming to be forever beyond the reach of AI.

One thing is: remote-operating robots. If you give a human an existing remote-controllable robot and a few hours' practice, they'll be able to do things with it that are way beyond any current AI. But the human brain is an algorithm too. If the human brain can figure out how to remote-control a robot with minimal practice, some future AI will be able to do so at least as well and as quickly. There's a popular idea that robotics is a very hard unsolved problem, but the only constraint is today's lousy algorithms. Remote-controllable robots are pretty cheap and easy to mass-manufacture. It's just that the demand is currently almost zero, because if you're going to pay a human salary regardless, you get a human body for zero marginal cost. As soon as AI exists that can remote-pilot a robot as well as a human brain can, the supply of and demand for remote-controllable robots will skyrocket. If there are millions, then billions, then trillions of AI-controlled robots in the world, each of which can do all the kinds of things that humans can do (yes, including on-the-job training), it's not an "economy that's not too different from the one we have now", right?

Another thing is: founding companies and hiring people. Even if remote-controlled robots didn't exist, we already have a world full of people carrying cameras and microphones and not making optimal use of their time. A micro-managing AI could walk a person through the process of doing whatever experiments are useful to the AI.

I also note that Joseph Stalin had merely one human brain and no particular physical prowess, but was able to amass extraordinary power. How did he do that? Whatever your answer is, why can't a future AI do those kinds of things too?

If you have 3 hours, here's Carl Shulman walking through, in great detail, what "AI takeover" might look like (without any nanotech): https://www.dwarkeshpatel.com/p/carl-shulman-2 :) And see also my blog post that I linked at the top.


This was a fascinating read, thanks!

As a layman, I'm not sure I grasp the exact challenge with insufficient training data.

I recently did a deep dive into the OpenAI paper about DALL-E 3. The gist of it is that the team figured out that current text-to-image models were so poor at prompt adherence because of poorly labeled images in their training set. So they built a bespoke captioner that was specifically trained to create rich, descriptive captions that covered every detail of the image. They then recaptioned all of the images in the training set using this AI captioner.

In their subsequent tests, they found that using 95% synthetic data from this captioner massively outperformed lower ratios.
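As I understand it, the mechanics look something like this toy sketch (the field names are hypothetical, and this is definitely not OpenAI's actual code): each time an image is drawn for training, the long model-written caption is used 95% of the time and the original scraped caption the rest of the time.

```python
import random

SYNTHETIC_RATIO = 0.95  # the 95/5 mix described in the DALL-E 3 paper

def pick_caption(example: dict) -> str:
    """example is assumed to have 'synthetic_caption' and 'original_caption' keys."""
    if random.random() < SYNTHETIC_RATIO:
        return example["synthetic_caption"]   # rich caption from the captioner model
    return example["original_caption"]        # noisy alt-text scraped from the web

def training_batch(dataset: list, batch_size: int = 8):
    """Sample a batch of (image, caption) pairs with the caption mix applied."""
    batch = random.sample(dataset, batch_size)
    return [(ex["image"], pick_caption(ex)) for ex in batch]
```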

So I wonder why this isn't possible with LLM training. Could an LLM not be trained specifically to produce new content for other LLMs to be trained on, taking into account all the needed nuances, etc.?

I guess you have partially answered this with your metaphor about reading 20 books vs. 200 books. LLMs can write the same content in many different ways, but they'll fundamentally be the same underlying ideas. I am also vaguely familiar with the term "model collapse" in reference to LLMs being trained on LLM-produced data.

But I haven't yet come across a clear explainer for why the training data limitations are important. I might not be the only one. Perhaps something for a future post of yours.


Having previously lived in Silicon Valley for almost a decade, I know some people who feel deeply pessimistic about the future because of concerns about superintelligent AI. I'm in the camp of - it's good for some people to put in some precautionary work. But seeing people I know feeling deeply pessimistic about the future - whether because of global warming, rising inequality, or in this case, foom and doom, I feel so sad. The more thoughtful critiques the better.


I'm not responding to the interview request because I'm the counterfactual - AI has NOT been able to do a single thing I asked it to. (Which included such complicated tasks as, "summarize these meeting notes." I wasn't asking it to create art.)

I most liked the section on Knowledge, and would go further. A thing that humans are good at is infusing decisions and actions with values. A computer can come up with a pro/con sheet, and can suggest the best action given some desired outcome (like winning the chess game). And to some extent, algorithms can be weighted with the values of the creators. But I think there are a lot of nuanced situations where "reasonable minds can disagree" and while AI can game out possible outcomes, it can't make a value-laden CHOICE. I guess the word I've been looking for is "wisdom." I think AI can be intelligent, it can probably be sort-of knowledgeable, but I'm not at all convinced it can be wise, or can meaningfully infuse decisions with values.


Yes, someone else made this point, but I am reading The Coming Wave and wanted to say that they trained their first Go AI on records of human games, while their second, more formidable AI was trained by playing against itself repeatedly. The AIs probably will be able to train themselves.


It's a really well-written and thought-provoking piece, and I find a lot of AI existential-risk scenarios intuitively unpersuasive. Still, I'd quibble a little with a couple of things, or at least draw different conclusions from them.

The distinction between knowledge and data is really valuable, and I take the point that "economically significant knowledge is... locked up in the brains and private databases of millions of individuals and organizations". But what if the majority of those individuals and orgs decide to partner with the same AI platform? Andrew Whitby's point - that knowledge in one domain will combine usefully with knowledge from other domains - seems to push in that direction. Isn't knowledge the ultimate game of network effects? And if so, wouldn't we expect it to tend towards monopoly, all else being equal?

I also really like the point that you need constant real-world feedback if you're going to implement plans with real-world effects. But lots of people want their AIs to have real-world effects and are going to build the tools to make that happen! That's pretty much what automation is: if I want a faster and more efficient process for testing new materials, managing traffic, converting browsers into buyers, or blowing up hostile aeroplanes, I have a powerful incentive to automate, and that is going to mean building as tight a feedback loop as possible between my intervention, the AI, and the results of that intervention. The more I can get humans out of that loop, the faster and less error-prone my business process becomes.


Wonderful article! A not-insignificant amount of valuable knowledge is also ‘process knowledge’, which is gained via experience in the physical world and interactions with other humans. This is hard to write down, let alone put into a database for an AI to learn from.


Really appreciated this article pointing out distinctions that get overlooked when discussing the different kinds of knowledge that AI and humans can acquire. Small note: "Garry Kasparov" is the correct spelling of the former world chess champion. :)


There are stories about wizards who are powerful because of the magic they possess. Isn't thinking about AGI similar? Perhaps viewing superintelligence as akin to magic leads people to overlook its limitations. One such limitation is the need for experiments and the acquisition of empirical knowledge, as the author mentioned. Another is that, unlike magical spells, you cannot convince people who simply refuse to listen to a stranger (a challenge well-known to political marketing strategists). Additionally, superintelligence is not capable of mind reading—it can only make educated guesses about people's true motives, and much of the world's data is not publicly available.
