Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anthropical Paradoxes are Paradoxes of Probability Theory, published by Ape in the coat on December 7, 2023 on LessWrong.
This is the fourth post in my series on Anthropics. The previous one is Anthropical probabilities are fully explained by difference in possible outcomes.
Introduction
If there is nothing special about anthropics, if it's just about correctly applying standard probability theory, why do we keep encountering anthropical paradoxes instead of general probability theory paradoxes? Part of the answer is that people tend to be worse at applying probability theory in some cases than in others.
But most importantly, the whole premise is wrong. We encounter paradoxes of probability theory all the time. We just don't pay enough attention to them, and occasionally misattribute them to anthropics.
Updateless Dilemma and Psy-Kosh's non-anthropic problem
As an example, let's investigate Updateless Dilemma, introduced by Eliezer Yudkowsky in 2009.
Let us start with a (non-quantum) logical coinflip - say, look at the heretofore-unknown-to-us-personally 256th binary digit of pi, where the choice of binary digit is itself intended not to be random.
If the result of this logical coinflip is 1 (aka "heads"), we'll create 18 of you in green rooms and 2 of you in red rooms, and if the result is "tails" (0), we'll create 2 of you in green rooms and 18 of you in red rooms.
After going to sleep at the start of the experiment, you wake up in a green room.
With what degree of credence do you believe - what is your posterior probability - that the logical coin came up "heads"?
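The 90% posterior in question follows from a straightforward Bayesian update, treating "which copy am I" as uniform over the 20 copies created either way. A minimal sketch (variable names are mine):

```python
# Posterior that the logical coin came up "heads", given waking in a
# green room. In either world 20 copies exist; only the green/red
# split differs: 18/2 on heads, 2/18 on tails.
p_heads = 0.5                   # prior on the 256th binary digit of pi
p_green_given_heads = 18 / 20   # 18 of 20 copies wake in green rooms
p_green_given_tails = 2 / 20    # 2 of 20 copies wake in green rooms

p_green = (p_heads * p_green_given_heads
           + (1 - p_heads) * p_green_given_tails)
p_heads_given_green = p_heads * p_green_given_heads / p_green

print(p_heads_given_green)  # 0.9
```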
Eliezer (2009) argues that updating on the anthropic evidence, and thus answering 90% in this situation, leads to a dynamic inconsistency, and that anthropical updates should therefore be illegal.
I inform you that, after I look at the unknown binary digit of pi, I will ask all the copies of you in green rooms whether to pay $1 to every version of you in a green room and steal $3 from every version of you in a red room. If they all reply "Yes", I will do so.
Suppose that you wake up in a green room. You reason, "With 90% probability, there are 18 of me in green rooms and 2 of me in red rooms; with 10% probability, there are 2 of me in green rooms and 18 of me in red rooms. Since I'm altruistic enough to at least care about my xerox-siblings, I calculate the expected utility of replying 'Yes' as (90% * ((18 * +$1) + (2 * -$3))) + (10% * ((18 * -$3) + (2 * +$1))) = +$5.60." You reply yes.
However, before the experiment, you calculate the general utility of the conditional strategy "Reply 'Yes' to the question if you wake up in a green room" as (50% * ((18 * +$1) + (2 * -$3))) + (50% * ((18 * -$3) + (2 * +$1))) = -$20. You want your future selves to reply 'No' under these conditions.
This is a dynamic inconsistency - different answers at different times - which argues that decision systems which update on anthropic evidence will self-modify not to update probabilities on anthropic evidence.
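The two expected-utility calculations above can be reproduced directly. A sketch (the function name is mine), showing how the same payoff table yields opposite recommendations depending on whether the 90% posterior or the 50% prior is plugged in:

```python
def expected_value(p_heads, reward_green=1, penalty_red=-3):
    """Expected group payoff of answering 'Yes', given a credence in
    heads. Heads: 18 green rooms, 2 red; tails: 2 green, 18 red."""
    heads_payoff = 18 * reward_green + 2 * penalty_red   # +$12
    tails_payoff = 2 * reward_green + 18 * penalty_red   # -$52
    return p_heads * heads_payoff + (1 - p_heads) * tails_payoff

# After waking in a green room and updating to 90% on heads:
print(expected_value(0.9))  # approx. +$5.60, so 'Yes' looks good
# Before the experiment, with the 50% prior on heads:
print(expected_value(0.5))  # -$20, so you want your future selves to say 'No'
```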
However, in the comments Psy-Kosh notices that this situation doesn't have anything to do with anthropics at all. The problem can be reformulated as picking marbles from two buckets with the same betting rule. The dynamic inconsistency doesn't go anywhere, and if previously it was a sufficient reason not to update on anthropic evidence, now it becomes a sufficient reason against the general case of Bayesian updating in the presence of logical uncertainty.
Solving the Problem
Let's solve these problems - or rather, this problem, since the two formulations are fully isomorphic and have the same answer.
For simplicity, as a first step, let's ignore the betting rule and the dynamic inconsistency and just address the question in terms of the Law of Conservation of Expected Evidence. Do I get new evidence upon waking up in a green room or picking a green marble? O...