Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Response to nostalgebraist: proudly waving my moral-antirealist battle flag, published by Steven Byrnes on May 29, 2024 on LessWrong.
@nostalgebraist has recently posted yet another thought-provoking post, this one on how we should feel about AI ruling a long-term posthuman future. [Previous discussion of this same post on LessWrong.] His post touches on some of the themes of Joe Carlsmith's "Otherness and Control in the Age of AI" series - a series which I enthusiastically recommend - but nostalgebraist takes those ideas much further, in a way that makes me want to push back.
Nostalgebraist's post is casual, trying to reify and respond to a "doomer" vibe, rather than responding to specific arguments by specific people. Now, I happen to self-identify as a "doomer" sometimes. (Is calling myself a "doomer" bad epistemics and bad PR? Eh, I guess. But also: it sounds cool.) But I too have plenty of disagreements with others in the "doomer" camp (cf. "Rationalist (n.): Someone who disagrees with Eliezer Yudkowsky."). Maybe nostalgebraist and I have common ground? I dunno.
Be that as it may, here are some responses to certain points he brings up.
1. The "notkilleveryoneism" pitch is not about longtermism, and that's fine
Nostalgebraist is mostly focusing on longtermist considerations, and I'll mostly do that too here. But on our way there, in the lead-in, nostalgebraist does pause to make a point about the term "notkilleveryoneism":
They call their position "notkilleveryoneism," to distinguish that position from other worries about AI which don't touch on the we're-all-gonna-die thing. And who on earth would want to be a not-notkilleveryoneist?
But they do not mean, by these regular-Joe words, the things that a regular Joe would mean by them.
We are, in fact, all going to die. Probably, eventually. AI or no AI.
In a hundred years, if not fifty. By old age, if nothing else. You know what I mean. …
OK, my understanding was:
(1) we doomers are unhappy about the possibility of AI killing all humans because we're concerned that the resulting long-term AI future would be a future we don't want; and
(2) we doomers are also unhappy about the possibility of AI killing all humans because we are human and we don't want to get murdered by AIs. And also, some of us have children with dreams of growing up and having kids of their own and being a famous inventor or oh wait actually I'd rather work for Nintendo on their Zelda team or hmm wait does Nintendo hire famous inventors? …And all these lovely aspirations again would require not getting murdered by AIs.
If we think of the "notkilleveryoneism" term as part of a communication and outreach strategy, then it's a strategy that appeals to Average Joe's desire to not be murdered by AIs, and not to Average Joe's desires about the long-term future.
And that's fine! Average Joe has every right to not be murdered, and honestly it's a safe bet that Average Joe doesn't have carefully-considered coherent opinions about the long-term future anyway.
Sometimes there's more than one reason to want a problem to be solved, and you can lead with the more intuitive one. I don't think anyone is being disingenuous here (although see comment).
1.1 …But now let's get back to the longtermist stuff
Anyway, that was kind of a digression from the longtermist stuff which forms the main subject of nostalgebraist's post.
Suppose AI takes over, wipes out humanity, and colonizes the galaxy in a posthuman future. He and I agree that it's at least conceivable that this long-term posthuman future would be a bad future, e.g. if the AI was a paperclip maximizer. And he and I agree that it's also possible that it would be a good future, e.g. if there is a future full of life and love and beauty and adventure throughout the cosmos. Which will it be? Let's dive into that discussion.