Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: On Dwarkesh's 3rd Podcast With Tyler Cowen, published by Zvi on February 4, 2024 on LessWrong.
This post is extensive thoughts on Tyler Cowen's excellent talk with Dwarkesh Patel.
It is interesting throughout. You can read this while listening, after listening, or instead of listening; it is written to be compatible with all three options. The notes are in order in terms of what they are reacting to, and were mostly written as I listened.
I see this as having been a few distinct intertwined conversations. Tyler Cowen knows more about more different things than perhaps anyone else, so that makes sense. Dwarkesh chose excellent questions throughout, displaying a keen sense of when to follow up and how, and when to pivot.
The first conversation is about Tyler's book GOAT, about the world's greatest economists. Fascinating stuff; this made me more likely to read and review GOAT in the future if I ever find the time. I mostly agreed with Tyler's takes here, to the extent I am in a position to know, as I have not read that much of what these men wrote, and at this point, even though I very much loved it at the time (don't skip the digression on silver; I remember it being great), The Wealth of Nations is now largely a blur to me.
There were also questions about the world and philosophy in general, though not about AI, which I would mostly put in this first category. As usual, I have lots of thoughts.
The second conversation is about expectations given what I typically call mundane AI. What would the future look like if AI progress stalls out without advancing too much? We cannot rule such worlds out, and I put substantial probability on them, so it is an important and fascinating question.
If you accept the premise of AI remaining within the human capability range in some broad sense, where it brings great productivity improvements and rewards those who use it well but remains foundationally a tool, and everything seems basically normal (essentially the AI-Fizzle world), then we have disagreements, but Tyler is an excellent thinker about these scenarios. Broadly, our expectations are not so different here.
That brings us to the third conversation, about the possibility of existential risk or the development of more intelligent and capable AI that would have greater affordances. For a while now, Tyler has asserted that such greater intelligence likely does not much matter, that not so much would change, that transformational effects are highly unlikely, whether or not they constitute existential risks.
That the world will continue to seem normal, and follow the rules and heuristics of economics, essentially Scott Aaronson's Futurama. Even when he says AIs will be decentralized and engage in their own Hayekian trading with their own currency, he does not think this has deep implications, nor does it imply much about what else is going on beyond being modestly (and only modestly) productive.
Then at other times he affirms the importance of existential risk concerns, and indeed says we will need a hegemon, but his thinking here seems oddly divorced from his other statements, and thus often rather confused. Mostly it seems consistent with the view that it is much easier to solve alignment quickly, build AGI, and use it to establish a hegemon than it would be to get any kind of international coordination, and also that failure to quickly build AI risks our civilization collapsing. But I notice this implies that the resulting AIs will be powerful enough to enable hegemony and determine the future, when in other contexts he does not think they will even enable sustained 10% GDP growth.
Thus at this point, I choose to treat most of Tyler's thoughts on AI as if they are part of the second conversation, with an implicit 'assuming an AI at least semi-fizzle' attached ...