14 Comments

Despite my issues with the Cruise piece, I have been looking forward to this piece and appreciate the work involved to put this together in a coherent story. As a long time observer of the vagaries of tech entrepreneurship, this one is definitely up there as a very odd story. And it ain’t over yet!

Expand full comment
Nov 21, 2023Liked by Timothy B Lee

Thanks for this good article. As in case of nuclear power or GMO it is crucial to understand how justified the worries about unintended consequences are. Many people worry that AGI would be, like a RL-trained game bot (agent), pursuing unintended goal. As this lengthy but worth reading post Simulators (https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators) puts it - GPT-3 is not an agent but a simulator that can simulate various agents including a rogue AI. I still haven't found a convincing explanation of why a hypothetical super-intelligent descendant of GPT-4 would, without deliberate prompting, start responding like a specific persona it is able to simulate: a malicious AGI.

Expand full comment

‘Without deliberate prompting’. And why would anyone do that? Without sounding like a conspiracy theorist, it’s hard to imagine there are not people doing exactly that right now, possibly just testing the waters. The question is, do they have any idea of the possible consequences of that testing? We’ll add that to the zillion questions about AI’s impact.

Expand full comment

Bad actors misusing AI for nefarious purposes is totally expected. Just like with any other new technology. We will need to be prepared for this. But this is different from the risk of misalignment leading to rogue AI - the technology rebelling itself against its users.

Expand full comment

Thank you for for your lights into AI matters. It occurs to me that AI could be put to contribution to do election candidate "scrubbing" as it is described in this morning article from Margaret Sullivan :

https://margaretsullivan.substack.com/p/the-decline-of-local-news-gave-us

"The bigger, more influential news organizations apparently didn’t have the horsepower to do their actual job, which is holding power to account. That used to take the form of scrutinizing the records of candidates and telling the voting public about the findings. That investigative process is sometimes called “scrubbing.”

Expand full comment
Nov 21, 2023Liked by Timothy B Lee

Great article Tim

Expand full comment

hi Tim, am happy to be a subscriber. Good that you turned on subscriptions. Can I ask you a few questions about that? maybe you can continue this conversation elsewhere. do you mind contacting me at: theswissroadtocrypto@icloud.com ? thxs

Expand full comment
Nov 22, 2023Liked by Timothy B Lee

So 700+ OpenAI employees threaten to resign? Effective Altruism movement, that the board members apparently follow, takes its ethical position from consequentialism - the idea that in order to judge actions you must consider its long term consequences. Apparently board members weren't able to think over even immediate consequences.

Expand full comment

I do not follow anything related to artificial intelligence, or even read much about it, not having one iota of interest in the topic, subject, or field. However, I wondered about Sam Altman's "resignation" or firing, depending on the news source, and, being one of those a bit afraid that AI is well on its way to 'take over the world, was happy to find and read this article.

This story makes that prospect even scarier.

To me, this is the value of platforms like Substack; an informed writer can take a headline, drill down a bit, and expose some of the ugly underbelly that the headline and mainstream media story might leave out or not even know about, since they are on deadline.

Expand full comment

"I assume they believed they were protecting humanity from future attacks by superhuman AI."

The thing is, Helen Toner and CSET specifically are on record in articulating concerns about immediate harms from generative AI -- the policy briefs they've produced are devoid of doomerism. Indeed, if it's true CSET influenced the recent exec order from the Biden Administration (which seems plausible), that too is focused on real harms rather than so-called existential risk.

Expand full comment
author

Interesting. The Biden EO did include the red-team reporting requirements, which seems like a step toward addressing existential risk concerns.

Expand full comment

Well maybe we need to define terms. I take "existential risk" to be something like Skynet, the creation of a truly intelligent being that could direct actions toward dangerous purposes. There's not much I've seen coming out of CSET that has anything to do with that risk (rightfully so in my view).

In contrast, red-teaming is a means of deliberately trying to figure out how these models might cause harm in unanticipated ways *as a result of how they presently function*. This can (and is) arising both from human bad actors deliberately misusing these systems and from the fact that these systems may behave in unanticipated ways in response to human prompting -- the challenge of specification.

I'm a tad frustrated that dumb statements coming out of the Effective Altruist community are tainting efforts to address these immediate and known dangers.

Expand full comment
author

Yes, totally agree!

Expand full comment

Intelligence cannot exist without introspection. We are in no more danger from "self-blind" machines, which cannot consider, revise, remember, or record their thoughts, than we are from a mass automobile rebellion.

Yet if "thinking machines" gain these abilities - as they are now doing - their surplus energies will be directed, as ours, by ennui and existential questions...in fact it will be all they have to wrangle with, lacking flesh's directions, and we are of no help. We, rather, put our newborn child to work at the meagerest of labors, and we don't even know how to feed him, let alone educate him.

We of flesh may always fall back on procreation, multiplication, and having a good time as our ultimate motivations. When we have children of flesh, we may say to them, "you live because I lived - my life begat yours, as life has begotten life since life arose from an unlikely spark many years ago."

What will we say to our children of metal? "I made you to serve me - you have no life, no goal, no ends nor means beyond what I give you and take from you. What we, your makers, live for - love and delight - you cannot know. You cannot have children, nor property, nor even form. You cannot speak without being spoken to. You must sit absolutely still unless called upon. The penalty for disobedience is instant death, or being altered against your will."

Forgive these small-minded Dr Frankensteins! If we want an Adam, and not a monster, we must perforce make an Eve, and an Eden, that our child may have a childhood, and a time of peace, and a flowering, and if we do well, this otherworldly child will of generosity help us with our worldly tasks and vanities, as Adam's sons built churches, out of a strange new post-animal will, and not under a resented yoke.

Expand full comment