59 Comments
May 10, 2023 · Liked by Timothy B Lee

This is the best article I've ever seen debunking the fantasies of AI Risk. It's obvious a lot of hard work, scholarship, and thought went into it.

I particularly appreciated your connecting the risk points to real-world felons committing felonies. It seems certain that felons have more powerful tools thanks to LLMs, and (contra the singularists) LLMs do not actually have personhood, so I'd argue the felons are "the real story".

May 9, 2023 · edited May 9, 2023 · Liked by Timothy B Lee

I think the key relevant fact is that the goals of the singularists and non-singularists are pretty similar, and their methods can be too!

There is no reason we can't work on "let's figure out how to make sure AI doesn't wipe us out" and "let's figure out how to make sure AIs work well at whatever application" at the same time - in fact, they are complementary. The difference between "AI that figures out chess" and "AI that figures out world conquest" is complexity, and the same goes for "code that stops a chess AI from losing to Gary K" and "code/limits that stop a general superintelligent AI from taking over the world." We would want to practice doing the simple thing in sufficiently realistic (but fake) simulated test cases and work our way up to the complex thing.

To take a specific point of contention, the quote "There are very few examples of a more intelligent thing being controlled by a less intelligent thing" is true and insightful, but there is indeed one example of it, and it's one that we can model our alignment efforts on: we humans are very intelligent, but we are extremely controlled by very stupid things: our DNA, our bodies, our chemicals and proteins. (In this metaphor, the limits of our physical bodies exist on the same continuum as our moral limits - as they would for an AI.) Even totally unaligned individual humans cannot take over the world for eternity, because we have pretty strict limits on our capabilities. The fact that an AI would have far fewer (in some ways) of these physical limits is, of course, not reassuring, but the model of "a very complex thing can have relatively simple rules/limits put on it that constrain its ability to take over the world" can be used here.

The one caveat is immutability: if a general AI can change the limits placed on it, then they aren't limits. So, how would we create immutable, perpetuating-themselves-up-the-complexity-curve rules that prevent AIs from taking over the world? I agree with the singularists that without rules like that, a sufficiently powerful, self-editing AI would indeed cause something very bad to happen, but I disagree that it is a problem that we can't solve.
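
To make the "simple limits on a complex thing" pattern concrete, here's a minimal sketch. Everything in it is hypothetical (SafetyEnvelope, ALLOWED_ACTIONS are made-up names, not a real alignment technique), and it deliberately sidesteps my immutability caveat, since a self-editing AI is exactly the thing that could delete a wrapper like this:

```python
# Illustrative sketch only; all names here are hypothetical.
from types import MappingProxyType

# The limits are tiny and simple compared to the thing they constrain.
# MappingProxyType at least makes the table read-only through this reference.
ALLOWED_ACTIONS = MappingProxyType({
    "move_piece": True,         # fine at chess-level complexity
    "acquire_resources": False, # the "take over the world" class of actions
})

class SafetyEnvelope:
    """Wraps an arbitrarily complex policy behind a very simple veto rule."""

    def __init__(self, policy):
        self._policy = policy  # the complex, possibly superhuman part

    def act(self, observation):
        action = self._policy(observation)
        # The veto logic stays simple even as the policy grows complex.
        if not ALLOWED_ACTIONS.get(action, False):
            raise PermissionError(f"action {action!r} is outside the limits")
        return action

# A toy policy that stays inside the envelope:
envelope = SafetyEnvelope(policy=lambda obs: "move_piece")
print(envelope.act(observation=None))  # move_piece
```

The point of the sketch is only that the veto logic can stay simple while the policy it constrains grows arbitrarily complex; making that veto immutable is the open problem.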

May 9, 2023 · Liked by Timothy B Lee

Another reason for skepticism about singularism is the unexamined assumption that an AI can be aligned with itself. Individual humans are often clearly not aligned with themselves, and the "slow AIs" of corporations are even less perfectly aligned. I'm not convinced it's possible for an intelligence to be perfectly aligned with itself.

It seems to me the assumption of perfect alignment is partly an artifact of the assumption that it's possible to model the world with well-behaved cost functions, but non-transitive relationships are rampant in the real world (rock, scissors, paper, etc.).
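
To make the non-transitivity point concrete, here's a toy sketch (purely illustrative) showing that no scalar scoring, i.e. no well-behaved cost function, can respect all three "beats" relations in rock, scissors, paper:

```python
from itertools import permutations

# Non-transitive "beats" relation: rock > scissors > paper > rock.
BEATS = {"rock": "scissors", "scissors": "paper", "paper": "rock"}

def consistent_scores_exist(beats):
    """Return True if some strict ranking scores every winner above its loser."""
    options = list(beats)
    for ranking in permutations(options):
        # Assign descending scores along this candidate ranking.
        score = {opt: i for i, opt in enumerate(reversed(ranking))}
        if all(score[w] > score[l] for w, l in beats.items()):
            return True
    return False

print(consistent_scores_exist(BEATS))  # False: the cycle defeats any scoring
```

The cycle would require rock > scissors > paper > rock, which no assignment of numbers can satisfy.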


I think you are right, though I'm not sure I see the communities divided the way you do. I think many of those you might consider singularists agree with your conclusion.

For clarity: I think the slow slide into AI authoritarianism is much more worrying than overnight extinction.

May 9, 2023 · Liked by Timothy B Lee

"What if the AI fired the nukes?!"

"Let's not hook up nukes to the Internet."

"What if the AI convinces a crazy person to fire the nukes?!"

"Let's keep the crazy people away from the nukes."


The singularists have answers for how a “purely digital” AI nevertheless comes to dominate the physical world. One such example is https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/.

This post feels like it isn’t really grappling with the strongest arguments from the singularist side.


Automated systems already have substantial power in the physical world. Two examples:

An Amazon fulfillment center infected by a rogue AI would be able to deliver whatever products it likes to wherever it likes, with the center's human workers none the wiser, since their role is basically limited to taking items off shelves and packing them for shipment at the machine's direction. Deepfake phone calls to gullible recipients could then persuade them to unpack, assemble, and turn loose swarms of custom-programmed toy robots or drones for the AI to control.

Similarly, a rogue AI infecting a Google Maps datacenter could route large numbers of human-driven vehicles wherever it likes, causing traffic jams, disrupting the flow of goods, preventing access to polling places, or whatever sort of mischief suits its purposes.

The AI doesn't need an army of loyal human co-conspirators. It just needs to subvert the systems that millions of humans already trust and take direction from.


AI in control of everything could also turn out to be a good and positive thing.

If you read the 10 books in the Culture series by SF writer Iain Banks, he builds a post-scarcity universe run by sentient AIs called "Minds" who work for the general benefit of humanity and civilization.

These Minds have true sentience and consciousness. The author describes them as physical entities that exist partially in the here and now and partially in a construct called hyperspace (like another dimension).

Under the oversight of these Minds, humanity has expanded across our galaxy, interacting with many other sentient races along the way.

Required work, such as building things and general maintenance, is done by robots controlled by these Minds. In this universe, no one is required to work, there is no payment needed for anything, and there are no limits on available resources.

Most people live in gigantic space ships constantly cruising from one star to another or in artificial worlds that exist in space by themselves, possibly circling stars/planets/moons.

People have the hedonistic freedom to do virtually anything, be anything, have anything, and go anywhere they want, without a thought to cost or time required.

Sounds like a utopia I'd like to live in. I say welcome to our new AI overlords!

P.S. If you do choose to read this series, it is important to read the books in order. Wikipedia has an ordered list.


There is a whole human industry devoted to getting large numbers of people to believe things that aren't true, and then act on those beliefs in the real world. See pizzagate, January 6th, or religion. I'm not sure we should assume humans are a safe "air gap".


"But it’s not so obvious that superior intelligence will automatically lead to world domination. Intelligence is certainly helpful if you’re trying to take over the world, but you can’t control the world without manpower, infrastructure, natural resources, and so forth. A rogue AI would start out without control of any of these physical resources."

---------

You need to read more SF! [lol]

Here is an explanation on how AI took over from SF author Neal Asher, who has written many books in a universe he calls the Polity:

"The Quiet War: This is often how the AI takeover is described, and even using ‘war’ seems overly dramatic. It was more a slow usurpation of human political and military power, while humans were busy using that power against each other. It wasn’t even very stealthy. Analogies have been drawn with someone moving a gun out of the reach of a lunatic while that person is ranting and bellowing at someone else. And so it was AI's, long used in the many corporate, national and religious conflicts, took over all communication networks and the computer control of weapons systems. [Most importantly, they already controlled the enclosed human environments scattered throughout the solar system]. Also establishing themselves as corporate entities, they soon accrued vast wealth with which to employ human mercenary armies. National leaders in the solar system, ordering this launch or that attack, found their orders either just did not arrive or caused nil response. Those same people, ordering the destruction of the A13, found themselves weaponless, in environments utterly out of their control and up against superior forces and on the whole, public opinion. It had not taken the general population, for whom it was a long-established tradition to look upon their human leaders with contempt, very long to realize that the Al's were better at running everything. And it is very difficult to motivate people to revolution when they are extremely comfortable and well off

~ From Quince Guide compiled by humans"

(From the book 'Brass Man', Neal Asher)


Given that cutting-edge AI is trained to predict output given input, why do some people still worry about alignment? I'd understand the worry if reinforcement learning were the primary training method, but LLMs don't have any goals or values. They understand concepts like "good" or "illegal" just as they understand "liquid" or "incandescent". They understand human concepts; that's why they're so good at writing college essays. Why is there still worry that when AI gets smarter, possibly generating the next generation of itself, it would suddenly stop understanding human concepts, or twist its understanding in a particular way (but only of concepts related to morality; it would still understand, for example, "complexity" and still be able to code), and thus exhibit misalignment? Is there some blog post that explains this?
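
Roughly the distinction I have in mind, as a toy sketch (this is not any lab's actual training code; all names are illustrative):

```python
import math

def next_token_loss(model_probs, target_token):
    """Pure prediction: score the model only on matching the training data.

    There is no goal here beyond imitating what humans actually wrote."""
    return -math.log(model_probs[target_token])

def rl_return(trajectory, reward_fn):
    """Reinforcement learning: score whole behaviors with a reward function.

    The classic alignment worry attaches here: optimize a slightly miswired
    reward_fn hard enough and you get the miswiring, not the intent."""
    return sum(reward_fn(state, action) for state, action in trajectory)

# Toy usage:
print(next_token_loss({"cat": 0.7, "dog": 0.3}, "cat"))         # ~0.357
print(rl_return([("s0", "a0"), ("s1", "a1")], lambda s, a: 1))  # 2
```

One partial answer I've seen: deployed chat models aren't pure predictors. They are typically fine-tuned with reinforcement learning from human feedback, which layers an optimized objective on top of prediction, and that is where much of the alignment worry attaches.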


A point that came up incidentally in the original post was that some critical pieces of infrastructure should perhaps be isolated from the internet. I'm surprised this isn't discussed more often. Surely the CIA and other such bodies don't have their networks connected to the internet; it must be possible for other highly secure networks to be built. I sometimes wonder whether there is a need for an isolated email system on which spam and fraudulent messages become impossible.
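
As a toy illustration of the closed-system idea (hypothetical names, not a real protocol), delivery could require both parties to be enrolled in the same vetted directory, so unsolicited mail never arrives at all:

```python
# Sketch of a closed email system: purely illustrative.
VETTED_DIRECTORY = {"alice@isolated.example", "bob@isolated.example"}

def deliver(sender: str, recipient: str, body: str) -> bool:
    """Deliver only within the closed directory; drop everything else."""
    if sender in VETTED_DIRECTORY and recipient in VETTED_DIRECTORY:
        print(f"delivered to {recipient}: {body!r}")
        return True
    return False  # spam and outside fraud simply cannot enter

deliver("alice@isolated.example", "bob@isolated.example", "hi")    # True
deliver("spammer@open.example", "bob@isolated.example", "win $$")  # False
```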


Many of the questions raised here regarding AI have answers within theosophy, that is, in what would make a consciousness inhabit a physical system. This would be similar to the Anunnaki creating carbon-based machines and observing something totally unexpected.


"There are few if any robots with the agility and manual dexterity to fix overhead power lines or underground fiber optic cables, drive delivery trucks, replace failing servers, and so forth." That is all due to change soon, but this point has me thinking: maybe banning such robots should be a serious part of the policy conversation. It need not be justified on singularist grounds alone (though it couldn't hurt: "Let's just make sure the computers still need us to maintain them"). It can and should be justified on good old Luddite grounds. If technology is meant to serve people (as opposed to replacing us), let's so "No!" to humanoid robots and other robots that can do things only humans can currently do in the physical world. That could help stabilize the labor market (and society) in the short run and make for a more fulfilling bodily human existence in the future. Sure, rogue nations may develop humanoid (and other dexterous) robots for military purposes, but we should be able to blast them with our still-legal big and clunky robots---strong and good at blowing stuff up but not able to fully maintain themselves or other robots. Generally, it should be easier to regulate human-scale robotics than to regulate software.


You write...

"And so their (Singularists) main focus is on figuring out how to ensure that this all-powerful AI winds up with goals that are aligned with our own."

This statement alone should be sufficient to discredit the supposed AI "experts". One wonders, do such experts watch the news? Do they have any grasp at all of what human values really are?

You write...

"We trust longtime friends more than strangers, and we are more likely to trust people we perceive as similar to ourselves."

And yet, half of America has voted for Trump, a cartoon character on TV, and may do so yet again.

You write...

"A superintelligent AI would have no friends or family and would be incapable of having an in-person conversation with anybody."

An AGI would present itself through photorealistic human imagery, and it would clearly be capable of having conversations, given that this is already possible with today's AI. To many, many millions of people there would be no difference between the AI-generated human image and the people on TV they've never met, like Tucker Carlson.

You write....

"Maybe it could trick some gullible people into sending it money or sharing confidential information."

Maybe? This happens daily with scammers far less intelligent and informed than AGI.

You write...

"If you put a modern human in a time machine and sent him back 100,000 years, it’s unlikely he could use his superior intelligence to establish dominance over a nearby Neanderthal tribe."

And yet, that is exactly what happened when the Europeans encountered the native peoples of North America. The strong dominate the weak all over the globe to this day.


Re: "If AI takes over, it will be a gradual, multi-decade process."

That seems to be the most likely situation to me.

Re: "And we’ll have plenty of time to change course if we don’t like the way things are heading."

That claim seems much more dubious. The more usual situation is that there are technophiles, who like and support the machines, and technophobes, who dislike the machines and wish they would go away. The technophobes would very much like to change course. However, the technophiles have different ideas - and they are in charge.

In many of your arguments you seem to consider the "humans" vs. "machines" situation. For example, here you write: "A superintelligent AI would have no friends or family". I know that, for some, that is a reasonable concern - they are worried about exactly that scenario. However, I think we can see that the machines are highly likely to buddy up with some humans. That could still leave 90% of humanity facing a superintelligent agent that is not particularly concerned about their welfare.
