Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio.
This is: A Critique of Functional Decision Theory, published by wdmacaskill on the AI Alignment Forum.
A Critique of Functional Decision Theory
NB: My writing this note was prompted by Carl Shulman, who suggested we could try a low-time-commitment way of attempting to understand the disagreement between some folks in the rationality community and academic decision theorists (including myself, though I’m not much of a decision theorist). Apologies that it’s sloppier than I’d usually aim for in a philosophy paper, and lacking in appropriate references. And, even though the paper is pretty negative about FDT, I want to emphasise that my writing this should be taken as a sign of respect for those involved in developing FDT. I’ll also caveat that I’m unlikely to have time to engage in the comments; I thought it was better to get this out there all the same rather than delay publication further.
Introduction
There’s a long-running issue where many in the rationality community take functional decision theory (and its variants) very seriously, but the academic decision theory community does not. But there’s been little public discussion of FDT from academic decision theorists (one exception is here); this note attempts to partly address this gap.
So that there’s a clear object of discussion, I’m going to focus on Yudkowsky and Soares’ ‘Functional Decision Theory’ (which I’ll refer to as Y&S), though I also read a revised version of Soares and Levinstein’s Cheating Death in Damascus.
This note is structured as follows. Section II describes causal decision theory (CDT), evidential decision theory (EDT) and functional decision theory (FDT). Sections III-VI describe problems for FDT: (i) that it sometimes makes bizarre recommendations, recommending an option that is certainly lower in utility than another option; (ii) that it fails to one-box in most instances of Newcomb’s problem, even though the correctness of one-boxing is supposed to be one of the guiding motivations for the theory; (iii) that it results in implausible discontinuities, where what is rational to do can depend on arbitrarily small changes to the world; and (iv) that, because there’s no real fact of the matter about whether a particular physical process implements a particular algorithm, it’s deeply indeterminate what FDT’s implications are. In section VII I discuss the idea that FDT ‘does better at getting utility’ than EDT or CDT; I argue that Y&S’s claims to this effect are unhelpfully vague, and on any more precise way of understanding their claim, aren’t plausible. In section VIII I briefly describe a view that captures some of the motivation behind FDT, and in my view is more plausible. I conclude that FDT faces a number of deep problems and has little to say in its favour.
In what follows, I’m going to assume a reasonable amount of familiarity with the debate around Newcomb’s problem.
II. CDT, EDT and FDT
Informally: CDT, EDT and FDT differ in which non-causal correlations they care about when evaluating a decision. For CDT, what you cause to happen is all that matters; if your action merely correlates with some good outcome, that’s nice to know, but it’s not relevant to what you ought to do. For EDT, all correlations matter: you should pick whichever action is such that, conditional on your choosing it, your expected utility is highest. For FDT, only some non-causal correlations matter, namely those correlations between your action and events elsewhere in time and space that would be different in the (logically impossible) worlds in which the output of the algorithm you’re running is different. Apart from those correlations, FDT behaves in the same way as CDT.
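To make the informal contrast concrete, here is a minimal sketch (not from the post) of how EDT and CDT would score Newcomb’s problem. The $1,000,000 / $1,000 payoffs, the 0.99 predictor accuracy, and all names below are my own illustrative assumptions.

```python
# Minimal sketch: how EDT and CDT evaluate Newcomb's problem.
# Payoffs and predictor accuracy are assumed for illustration only.

ACCURACY = 0.99              # assumed probability the predictor is correct
MILLION, THOUSAND = 1_000_000, 1_000

def edt_value(one_box: bool) -> float:
    """EDT conditions on the act: choosing is evidence about the prediction."""
    p_million = ACCURACY if one_box else 1 - ACCURACY
    return p_million * MILLION + (0 if one_box else THOUSAND)

def cdt_value(one_box: bool, p_million: float) -> float:
    """CDT holds the already-made prediction fixed: the act cannot cause it."""
    return p_million * MILLION + (0 if one_box else THOUSAND)

print("EDT:", edt_value(True), "(one-box) vs", edt_value(False), "(two-box)")
for p in (0.0, 0.5, 1.0):
    # Whatever the fixed probability that the million is already there,
    # two-boxing beats one-boxing by exactly $1,000 under CDT.
    print(f"CDT, P(million)={p}:", cdt_value(True, p), "vs", cdt_value(False, p))
```

On this toy picture FDT, like EDT, recommends one-boxing, since the prediction is exactly the kind of correlation that tracks the output of the agent’s own algorithm.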
Formally, where S represents states of nature, A, B etc. represent acts, P is a probability function, and U(Sᵢ ∧ A) represents the utility the agent gains from the outcome of...