Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Contra Yudkowsky on Doom from Foom #2, published by jacob cannell on April 27, 2023 on LessWrong.
This is a follow-up to, and partial rewrite of, an earlier part #1 post critiquing EY's specific argument for doom from AI go foom, and a partial clarifying response to DaemonicSigil's reply on efficiency.
AI go Foom?
By Foom I refer to the specific idea/model (as popularized by EY, MIRI, etc.) that near-future AGI will undergo a rapid intelligence explosion (hard takeoff) to become orders of magnitude more intelligent (e.g. going from single-human capability to human-civilization capability) in a matter of only days or hours, and then dismantle humanity (figuratively, as in disempower, or literally, as in "use your atoms for something else"). Variants of this idea still seem to be important/relevant drivers of AI risk arguments today: Rob Bensinger recently said that "STEM-capable artificial general intelligence (AGI) is likely to vastly outperform human intelligence immediately (or very quickly)."
I believe the probability of these scenarios is small, and that the arguments for them lack technical engineering prowess concerning the computational physics of intelligence and the practical engineering constraints derived from that physics.
During the Manhattan Project, some physicists became concerned about the potential of a nuclear detonation igniting the atmosphere. Even a small but non-negligible possibility of destroying the entire world should be taken very seriously. So they did some detailed technical analysis, which ultimately output a probability below their epsilon, allowing them to continue on their merry task of creating weapons of mass destruction.
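To see why even a tiny probability can dominate such a decision, here is a toy expected-loss calculation. All of the numbers are hypothetical placeholders of my own, not figures from the Manhattan Project analysis or from this post:

```python
# Toy sketch: expected loss = probability * magnitude of loss.
# Every number below is an assumed, illustrative placeholder.

p_ignition = 1e-6      # assumed probability of igniting the atmosphere
lives_at_stake = 8e9   # assumed magnitude: everyone on Earth
epsilon = 1e-9         # assumed "acceptable risk" threshold for the project

expected_loss = p_ignition * lives_at_stake
print(f"expected loss: {expected_loss:,.0f} lives")  # 8,000 lives in expectation

# The decision rule the paragraph describes: proceed only if the
# estimated probability falls below the chosen epsilon.
print("proceed" if p_ignition < epsilon else "halt: risk above epsilon")
```

The asymmetry is the point: when the magnitude is everything, even a one-in-a-million probability carries an enormous expected loss, which is why the physicists needed the estimate to come out below their epsilon rather than merely "small".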
In the 'ideal' scenario, the doom foomers (EY/MIRI) would present a detailed technical proposal that could be risk-evaluated. They of course have not provided that, and indeed it would seem to be an implausible ask: even if they had the technical knowledge of how to produce a fooming AGI, publishing that analysis could itself cause someone to create said AGI and thereby destroy the world![1] In the historical precedent of the Manhattan Project, the detailed safety analysis only finally arrived during the first massive project that succeeded at creating the technology to destroy the world.
So we are left with indirect, often philosophical arguments, which I find unsatisfying. To the extent that EY/MIRI have produced some technical work related to AGI[2], I honestly find it to be more philosophical than technical, and in the latter capacity more amateurish than expert.
I have spent a good chunk of my life studying the AGI problem as an engineer (neuroscience, deep learning, hardware, GPU programming, etc.), and I have reached the conclusion that fast FOOM is unlikely. Proving that is of course very difficult, so instead I have gathered much of the evidence that led me to that conclusion. However, I can't reveal all of the evidence, as the process of gathering it is rather indistinguishable from searching for the design of AGI itself.[3]
The valid technical arguments for/against Foom mostly boil down to various efficiency considerations.
Quick background: Pareto optimality/efficiency
Engineering is complex and full of fundamental practical tradeoffs (see the Pareto-frontier sketch after this list):
larger automobiles are safer via higher mass, but have lower fuel economy
larger wings produce more lift but also more drag at higher speeds
highly parallel circuits can do more total work per clock and are more energy efficient, but the corresponding parallel algorithms are more complex to design/code, require somewhat more total work to accomplish a task, delay/latency becomes more problematic for larger circuits, etc.
adiabatic and varying degrees of reversible circuit designs are possible, but they are slower, larger, more complex, less noise tolerant, and still face largely unresolved design challenges with practical clock synchronization, etc.
quan...
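As a concrete illustration of the Pareto-optimality idea behind this list, here is a minimal sketch with made-up design names and scores (my own illustration, not anything from the post): a design is Pareto-optimal if no other design is at least as good on every objective and strictly better on at least one.

```python
# Minimal sketch of a Pareto frontier over multi-objective designs.
# Design names and scores are hypothetical, purely for illustration.

def pareto_frontier(designs):
    """Return the designs not dominated by any other design.

    Each design is (name, objective_1, objective_2, ...), where
    higher objective values are better.
    """
    frontier = []
    for d in designs:
        dominated = any(
            all(o2 >= o1 for o1, o2 in zip(d[1:], other[1:]))
            and any(o2 > o1 for o1, o2 in zip(d[1:], other[1:]))
            for other in designs
            if other is not d
        )
        if not dominated:
            frontier.append(d)
    return frontier

# Hypothetical circuit designs scored on (speed, energy efficiency):
designs = [
    ("serial",    9, 3),
    ("parallel",  6, 7),
    ("adiabatic", 2, 9),
    ("naive",     4, 2),  # worse than "parallel" on both objectives
]
print(pareto_frontier(designs))
# -> serial, parallel, and adiabatic survive; "naive" is dominated
```

The point of the tradeoffs listed above is that mature designs live on such a frontier: you can trade one objective for another, but you cannot improve all of them at once.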