Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Most people should probably feel safe most of the time, published by Kaj Sotala on May 9, 2023 on LessWrong.
There is an idea I’ve sometimes heard around rationalist and EA circles that goes something like “you shouldn’t ever feel safe, because nobody is actually ever safe”. I think there are at least two major variations of this:
You shouldn’t ever feel safe, because something bad could happen at any time. To think otherwise is an error of rationality.
You shouldn’t ever feel safe, because AI timelines might be short and we might be about to die soon. Thus, to think that you’re safe is to make an error of rationality.
I’m going to argue against both of these. If you already feel like both of these are obviously wrong, you might not need the rest of this post.
Note that I only intend to dispute the intellectual argument behind these claims. It’s possible to accept on an intellectual level that it would make sense to feel safe most of the time, but still not feel safe. That kind of emotional programming requires a different set of tools to work with. I’m mostly intending to say that if you feel safe, you don’t need to feel bad about that. You don’t need to make yourself feel unsafe; for most people, it’s perfectly rational to feel safe.
I do expect some of the potential readers of this post to live in a very unsafe environment - e.g. in parts of current-day Ukraine, or together with someone abusive - where they are actually in constant danger. For them, it may make sense to feel unsafe all the time. (If you are one of them, I genuinely hope things get better for you soon.) But these are clearly situations where something has gone badly wrong; the feeling one has in those situations shouldn’t be something one actively strives for. I think that any reader who doesn’t live in an actively horrendous situation would do better to feel safe most of the time. (Short timelines don't count as a horrendous situation, for reasons that I'll get into.)
As I interpret it, the core logic in both of the “you shouldn’t ever feel safe” claims goes as follows:
To feel safe implies a belief that nothing bad is going to happen to you.
But something bad can happen to you at any time, even when you don’t expect it. In the case of AI, we even have reasons to put a significant probability on this in fact happening soon.
Thus, feeling safe requires having an incorrect belief, and the rational course of action is to not feel safe.
One thing you might notice from looking at this argument is that one could just as easily construct its exact opposite.
To feel unsafe implies a belief that things aren’t going to go well for you.
But things can go well for you, even when you don’t expect it. In the case of AI, we even have reasons to put a significant probability on things going well.
Thus, feeling unsafe requires having an incorrect belief, and the rational course of action is to feel safe.
That probably looks obviously fallacious - just because things can go well doesn’t mean that it would be warranted to always feel safe. But why, then, would it be warranted to feel unsafe just because things can go badly?
To help clarify our thinking, let's take a moment to look at how the US military orients to the question of being safe or not. More specifically, to the question of whether a given military unit is reasonably safe or whether it should prepare for an imminent battle.
Readiness Conditions (REDCONs) are a series of standardized levels that a unit’s commander uses to adjust the unit’s readiness to move and fight. Here’s an abridged summary of them:
REDCON-1. Full alert; unit ready to move and fight. The unit’s equipment and NBC alarms are stowed, soldiers at observation posts are pulled in. All personnel are alert and mounted on vehicles. Weapons are manned.