Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Valence series] 3. Valence & Beliefs, published by Steven Byrnes on December 13, 2023 on LessWrong.
3.1 Post summary / Table of contents
Part of the Valence series.
So far in the series, we defined valence (Post 1) and talked about how it relates to the "normative" world of desires, values, preferences, and so on (Post 2). Now we move on to the "positive" world of beliefs, expectations, concepts, etc. Here, valence is no longer the sine qua non at the center of everything, as it is in the "normative" magisterium. But it still plays a leading role. Actually, two leading roles!
Section 3.2 distinguishes two paths by which valence affects beliefs: first, in its role as a control signal, and second, in its role as "interoceptive" sense data, which I discuss in turn:
Section 3.3 discusses how valence-as-a-control-signal affects beliefs. This is the domain of motivated reasoning, confirmation bias, and related phenomena. I explain how it works both in general and through a nuts-and-bolts toy model. I also elaborate on "voluntary attention" versus "involuntary attention", in order to explain anxious rumination, which goes against the normal pattern (it involves thinking about something despite a strong motivation not to think about it).
Section 3.4 discusses how valence-as-interoceptive-sense-data affects beliefs. I argue that, if concepts are "clusters in thingspace", then valence is one of the axes used by this clustering algorithm. I discuss how this relates to various difficulties in modeling and discussing the world separately from how we feel about it, along with the related "affect heuristic" and "halo effect".
Section 3.5 briefly muses on whether future AI will have motivated reasoning, halo effect, etc., as we humans do. (My answer is "yes, but maybe it doesn't matter too much".)
Section 3.6 is a brief conclusion.
3.2 Two paths for normative to bleed into positive
Here's a diagram from the previous post:
We have two paths by which valence can impact the world-model (a.k.a. "Thought Generator"): the normative path (upward black arrow) that helps control which thoughts get strengthened versus thrown out, and the positive path (curvy green arrow) that treats valence as one of the input signals to be incorporated into the world model. Corresponding to these two paths, we get two ways for valence to impact factual beliefs:
Motivated reasoning / thinking / observing and confirmation bias - related to the upward black arrow, and discussed in §3.3 below;
The entanglement of valence into our conceptual categories, which makes it difficult to think or talk about the world independently from how we feel about it - related to the curvy green arrow, and discussed in §3.4 below.
Let's proceed with each in turn!
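The two paths can be pictured with a toy sketch (all function names and valence numbers here are hypothetical illustrations for this summary, not a model from the post):

```python
import math

# Hypothetical valence assignments for a few thoughts (illustrative numbers only).
VALENCE = {"wallet_full": 0.9, "wallet_empty": -0.4, "lunch": 0.2}

def valence(thought):
    return VALENCE.get(thought, 0.0)

# Path 1 (upward black arrow): valence as a control signal.
# A thought's chance of being strengthened rather than thrown out
# rises with its valence.
def p_strengthen(thought, temperature=1.0):
    return 1 / (1 + math.exp(-valence(thought) / temperature))

# Path 2 (curvy green arrow): valence as interoceptive sense data.
# The valence reading is appended to the thought's feature vector, so
# the world-model's concept-clustering treats "how it feels" as just
# another axis alongside perceptual features.
def features_with_valence(thought, perceptual_features):
    return perceptual_features + [valence(thought)]
```

In this sketch, path 1 biases *which* thoughts survive, while path 2 changes *what* a concept even is, which foreshadows the motivated-reasoning and affect-heuristic discussions below.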
3.3 Motivated reasoning / thinking / observing, including confirmation bias
Of the fifty-odd biases discovered by Kahneman, Tversky, and their successors, forty-nine are cute quirks, and one is destroying civilization. This last one is confirmation bias - our tendency to interpret evidence as confirming our pre-existing beliefs instead of changing our minds.…
- Scott Alexander
3.3.1 Attention-control and motor-control provide loopholes through which desires can manipulate beliefs
Wishful thinking - where you believe something because it would be nice if it were true - is generally maladaptive: Imagine spending all day opening your wallet, over and over, expecting each time to find it overflowing with cash. We don't actually do that, which is an indication that our brains have effective systems to mitigate (albeit not eliminate, as we'll see) wishful thinking.
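The wallet example can be made concrete with a toy sketch (illustrative only, not the brain's actual mechanism): a belief updated by prediction error against observations gets hammered toward reality, no matter how pleasant the belief is.

```python
# Toy sketch of error-driven belief updating (hypothetical numbers).
# belief = estimated probability that the wallet is full of cash.
def update(belief, observation, lr=0.5):
    # The update is driven purely by prediction error against what was
    # actually observed; the valence of the belief plays no role here.
    return belief + lr * (observation - belief)

belief = 0.9  # wishful prior: "my wallet is overflowing with cash"
for _ in range(10):
    belief = update(belief, observation=0.0)  # the wallet keeps being empty
# Repeated disconfirmation drives the belief toward 0 despite the
# positive valence of holding it.
```

A system like this resists wishful thinking by construction; the loopholes discussed next arise because valence can still steer attention and action around this error-correcting loop.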
How do those mitigations work?
As discussed in Post 1, the brain works by model-based reinforcement learning (RL). Oversimplifying as usual, the "model" (predictive world-model, a.k.a. "Thought Generator") is trained…