Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Social Alignment Problem, published by irving on April 28, 2023 on LessWrong.
TLDR: I think public outreach is a very hopeful path to victory. More importantly, extended large-scale conversation around questions such as whether public outreach is a hopeful path to victory would be very likely to decrease p(doom).
You’re a genius mechanical engineer in prison. You stumble across a huge bomb rigged to blow in a random supply closet. You shout for two guards passing by, but they laugh you off. You decide to try to defuse it yourself.
This is arguably a reasonable response, given that this is your exact skill set. This is what you were trained for. But after a few hours of fiddling around with the bomb, you start to realize that it’s much more complicated than you thought. You have no idea when it’s going to go off, and you start to despair of defusing it on your own. You sink to the floor with your face in your hands. You can’t figure it out. Nobody will listen to you.
Real Talking To The Public has never been tried
Much like the general public has done with the subject of longevity, I think many people in our circle have adopted an assumption of hopelessness toward public outreach (read: social alignment), before a relevant amount of effort has been expended. In truth, there are many reasons to expect this strategy to be quite realistic, and very positively impactful too. A world in which the cause of AI safety is as trendy as the cause of climate change, and in which society is as knowledgeable about questions of alignment as it is about vaccine efficacy (meaning not even that knowledgeable), is one where sane legislation designed to slow capabilities and invest in alignment becomes probable, and where capabilities research is stigmatized and labs find access to talent and resources harder to come by.
I’ve finally started to see individual actors taking steps towards this goal, but I’ve seen a shockingly small amount of coordinated discussion about it. When the topic is raised, there are four common objections: They Won’t Listen, Don’t Cry Wolf, Don’t Annoy the Labs, and Don’t Create More Disaster Monkeys.
They won’t listen/They won’t understand
I cannot overstate how clearly, utterly false this is at this point.
It’s understandable that this has been our default belief. I think debating e/accs on Twitter has broken our brains. Explaining again and again why something smarter than you that doesn’t care about you is dangerous, and being met with the same arguments every time, is soul-crushing. It made sense to expect that if it’s this hard to explain to a fellow computer enthusiast, then there’s no hope of reaching the average person. For a long time I avoided talking about it with my non-tech friends (let’s call them “civilians”) for that reason. However, when I finally did, it felt like the breath of life. My hopelessness broke, because they instantly and vigorously agreed, even finishing some of my arguments for me. Every single AI safety enthusiast I’ve spoken with who has engaged with civilians has had the exact same experience.
I think it would be very healthy for anyone who is still pessimistic about convincing people to just try talking to one non-tech person in their life about this. It’s an instant shot of hope.
The truth is, if we were to decide that getting the public on our side is our goal, I think we would have one of the easiest jobs any activists (read: social alignment researchers) have ever had.
Far from being closed to the idea, civilians in general literally already get it. It turns out, Terminator and The Matrix have been in their minds this whole time. We assumed they’d been inoculated against serious AI risk concern; it turns out they walked out of the theaters thinking “wow, that’ll probably happen someday”. They’ve been thinking that th...