Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How can one rationally have very high or very low probabilities of extinction in a pre-paradigmatic field?, published by shminux on April 30, 2023 on LessWrong.
It is generally accepted in local AI alignment circles that the whole field is pre-paradigmatic in the Kuhnian sense (phase 1, as summarized here, if longer reading is not your thing). And yet plenty of people are quite confident in their predictions of either doom or fizzle. A somewhat caricatured way of representing their logic is, I think, "there are so many disjunctive ways to die, only one chance to get it right, and we don't have a step-by-step how-to, so we are hooped" vs. "this is just one of many disruptive inventions whose real impact can only be understood way down the road; all of them so far have resulted in net benefit, and AI is just another example" (I have low confidence in the accuracy of the latter description; feel free to correct it). I can see the logic in both of those. What I do not see is how one can rationally have very high or very low confidence, given how much inherent uncertainty there is in our understanding of what is going on.
My default is something more cautious, akin to Scott Alexander's, where one has to recognize one's own reasoning limitations in the absence of hard empirical data: not "The Lens That Sees Its Flaws", but more like "The Lens That Knows It Has Flaws" without necessarily being able to identify them.
So, how can one be very, very sure of something that has neither empirical confirmation nor sound science behind it? Or am I misrepresenting the whole argument?
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.