This is: Sentience matters, published by So8res on May 29, 2023 on LessWrong.
Short version: Sentient lives matter; AIs can be people, and people shouldn't be owned. (Also, the goal of alignment is not to browbeat AIs into doing stuff we like that they'd rather not do; it's to build them, de novo, to care about valuable stuff.)
Context: Writing up obvious points that I find myself repeating.
Stating the obvious:
All sentient lives matter.
Yes, including animals, insofar as they're sentient (which is possible in at least some cases).
Yes, including AIs, insofar as they're sentient (which is possible in at least some cases).
Yes, even including sufficiently-detailed models of sentient creatures (as I suspect could occur frequently inside future AIs). (People often forget this one.)
There's some ability-to-feel-things that humans surely have, and that cartoon drawings don't have, even if the cartoons make similar facial expressions.
Not knowing exactly what the thing is, nor exactly how to program it, doesn't undermine the fact that it matters.
If we make sentient AIs, we should consider them people in their own right, and shouldn't treat them as ownable slaves.
Old-school sci-fi was basically morally correct on this point, as far as I can tell.
Separately but relatedly:
The goal of alignment research is not to grow some sentient AIs, and then browbeat or constrain them into doing things we want them to do even as they'd rather be doing something else.
The point of alignment research (at least according to my ideals) is that when you make a mind de novo, then what it ultimately cares about is something of a free parameter, which we should set to "good stuff".
My strong guess is that AIs won't, by default, care about other sentient minds, or fun broadly construed, or flourishing civilizations, or love, and that they also won't care about any other stuff that's deeply-alien-and-weird-but-wonderful.
But we could build them to care about that stuff--not coerce them, not twist their arms, not constrain their actions, but just build another mind that cares about the grand project of filling the universe with lovely things, and that joins us in that good fight.
And we should.
(I consider questions of what sentience really is, or consciousness, or whether AIs can be conscious, to be off-topic for this post, whatever their merit; I hereby warn you that I might delete such comments here.)