Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: No, You Need to Write Clearer, published by NicholasKross on April 29, 2023 on LessWrong.
This post is aimed solely at people in AI alignment/safety.
So I was reading this post, which basically asks "How do we get Eliezer Yudkowsky to realize this obviously bad thing he's doing, and either stop doing it or go away?"
That post linked to this tweet, which basically says "Eliezer Yudkowsky is doing something obviously bad."
Now, I had a few guesses as to the object-level thing that Yudkowsky was doing wrong. The person who made the first post said this:
he's burning respectability that those who are actually making progress on his worries need. he has catastrophically broken models of social communication and is saying sentences that don't mean the same thing when parsed even a little bit inaccurately. he is blaming others for misinterpreting him when he said something confusing. etc.
A-ha! A concrete explanation!
Buried in the comments. As a reply to someone innocently asking what EY did wrong.
Not in the post proper. Not in the linked tweet.
The Problem
Something about this situation got under my skin, and not just for the run-of-the-mill "social conflict is icky" reasons.
Specifically, I felt that if I didn't write this post, and directly get it in front of every single person involved in the discussion... then not only would things stall, but the discussion might never get better at all.
Let me explain.
Everyone, everyone, literally everyone in AI alignment is severely wrong about at least one core thing, and disagreements still persist on seemingly-obviously-foolish things.
This is because the field is "pre-paradigmatic". That is, we don't have many common assumptions that we can all agree on, no "frame" that we all think is useful.
In biology, they have a paradigm involving genetics and evolution and cells. If somebody shows up saying that God created animals fully-formed... they can just kick that person out of their meetings! And they can tell them "go read a biology textbook".
If a newcomer disagrees with the baseline assumptions, they need to either learn them, challenge them (using other baseline assumptions!), or go away.
We don't have that luxury.
AI safety/alignment is pre-paradigmatic. Every word in this sentence is a hyperlink to an AI safety approach. Many of them overlap. Lots of them are mutually-exclusive. Some of these authors are downright surprised and saddened that people actually fall for the bullshit in the other paths.
Many of these people have even read the same Sequences.
Inferential gaps are hard to cross. In this environment, the normal discussion norms are necessary but not sufficient.
What You, Personally, Need to Do Differently
Write super clearly and super specifically.
Be ready and willing to talk and listen, on levels so basic that without context they would seem condescending. "I know the basics, stop talking down to me" is a bad excuse when the basics are still not known.
Draw diagrams. Draw cartoons. Draw flowcharts with boxes and arrows. The cheesier, the more "obvious", the better.
"If you think that's an unrealistic depiction of a misunderstanding that would never happen in reality, keep reading."
Eliezer Yudkowsky, about something else.
If you're smart, you probably skip some steps when solving problems. That's fine, but don't skip writing them down! A skipped step will confuse somebody. Maybe that "somebody" needed to hear your idea.
Read The Sense of Style by Steven Pinker. You can skip chapter 6 and the appendices, but read the rest. Know the rules of "good writing". Then make different tradeoffs, sacrificing beauty for clarity. Even when the result is "overwritten" or "repetitive".
Make it obvious which (groupings of words) within (the sentences that you write) belong together. This helps people "parse" your sentences.
Explain the sam...