Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Goal-Completeness is like Turing-Completeness for AGI, published by Liron on December 20, 2023 on LessWrong.
Turing-completeness is a useful analogy we can use to grasp why AGI will inevitably converge to "goal-completeness".
By way of definition: An AI whose input is an arbitrary goal, which outputs actions to effectively steer the future toward that goal, is goal-complete.
A goal-complete AI is analogous to a Universal Turing Machine: its ability to optimize toward any other AI's goal is analogous to a UTM's ability to run any other TM's computation.
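The universality half of the analogy can be made concrete in a few lines: one fixed interpreter that can run *any* Turing machine supplied as data. This is a minimal sketch, not a faithful formal TM; the rule encoding and the bit-flipping example machine are hypothetical illustrations.

```python
def run_tm(rules, tape, state="start", head=0, max_steps=1000):
    """A toy universal interpreter: simulate any TM described by `rules`,
    a dict mapping (state, symbol) -> (next_state, write_symbol, move)."""
    tape = dict(enumerate(tape))
    while state != "halt" and max_steps > 0:
        symbol = tape.get(head, "_")  # "_" stands for the blank symbol
        state, tape[head], move = rules[(state, symbol)]
        head += move
        max_steps -= 1
    return "".join(tape[i] for i in sorted(tape))

# A tiny example machine: invert every bit, halt at the first blank.
flipper = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

print(run_tm(flipper, "0110"))  # -> 1001_
```

The point of the sketch: `run_tm` itself never changes. Swapping in a different `rules` table runs a completely different machine, just as a UTM runs any TM handed to it as input.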
Let's put the analogy to work:
Imagine the year is 1970 and you're explaining to me how all video games have their own logic circuits.
You're not wrong, but you're apparently not aware of the importance of Turing-completeness, or why it predicts architectural convergence across video games.
Flash forward to today. The fact that you can literally emulate Doom inside many modern video games (through a weird, tedious process with a large constant-factor overhead, but still) is a profoundly important observation: all video games are computations.
More precisely, two things about the Turing-completeness era that came after the specific-circuit era are worth noticing:
The gameplay specification of sufficiently-sophisticated video games, like most titles being released today, embeds the functionality of Turing-complete computation.
Computer chips replaced application-specific circuits for the vast majority of applications, even for simple video games like Breakout whose specified behavior isn't Turing-complete.
Expecting Turing-Completeness
From Gwern's classic page, Surprisingly Turing-Complete:
[Turing Completeness] is also weirdly common: one might think that such universality as a system being smart enough to be able to run any program might be difficult or hard to achieve, but it turns out to be the opposite - it is difficult to write a useful system which does not immediately tip over into TC.
"Surprising" examples of this behavior remind us that TC lurks everywhere, and security is extremely difficult...
Computation is not something esoteric which can exist only in programming languages or computers carefully set up, but is something so universal to any reasonably complex system that TC will almost inevitably pop up unless actively prevented.
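Gwern's point can be made concrete with how little machinery universality requires. Rule 110 is a one-dimensional cellular automaton whose update rule looks at only a cell and its two neighbors, yet it was proven Turing-complete (Cook, 2004). A minimal sketch (the periodic-boundary choice here is just for illustration):

```python
RULE = 110  # an 8-bit lookup table, one output bit per 3-cell neighborhood

def step(cells):
    """Advance a row of 0/1 cells one generation under Rule 110,
    with wraparound at the edges."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 20 + [1]  # start from a single live cell
for _ in range(10):
    row = step(row)
```

An update rule this trivial still "tips over" into Turing-completeness, which is exactly why TC keeps showing up in systems never designed to compute.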
The Cascading Style Sheets (CSS) language that web pages use for styling HTML is a pretty representative example of surprising Turing-completeness.
If you look at any electronic device today, like your microwave oven, you won't see a microwave-oven-specific circuit design. What you'll see in virtually every device is the same two-level architecture:
A Turing-complete chip that can run any program
An installed program specifying application-specific functionality, like a countdown timer
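The two-level architecture is easy to see in miniature: one general-purpose interpreter (the "chip") plus a small installed program (the "application"). This is a sketch with a hypothetical four-instruction ISA, not any real chip's instruction set.

```python
def run(program, acc=0):
    """A general interpreter: it can run *any* program in its tiny ISA."""
    pc, output = 0, []
    while pc < len(program):
        op, arg = program[pc]
        if op == "SET":                  # load a constant into the accumulator
            acc = arg
        elif op == "EMIT":               # record the accumulator's value
            output.append(acc)
        elif op == "DEC":                # subtract from the accumulator
            acc -= arg
        elif op == "JNZ" and acc != 0:   # jump if accumulator is nonzero
            pc = arg
            continue
        pc += 1
    return output

# The "installed program": a microwave-style countdown timer from 3.
countdown = [
    ("SET", 3),
    ("EMIT", None),
    ("DEC", 1),
    ("JNZ", 1),
]

print(run(countdown))  # -> [3, 2, 1]
```

Nothing about `run` is countdown-specific; the timer behavior lives entirely in the installed program, which is why the same "chip" can sit in a toothbrush, a microwave, or a lunar lander.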
It's a striking observation that your Philips Sonicare toothbrush and the guidance computer on the Apollo moonlander are now architecturally similar. But with a good understanding of Turing-completeness, you could've predicted it half a century ago. You could've correctly anticipated that the whole electronics industry would abandon application-specific circuits and converge on a Turing-complete architecture.
Expecting Goal-Completeness
If you don't want to get blindsided by what's coming in AI, you need to apply the thinking skills of someone who can look at a Breakout circuit board in 1976 and understand why it's not representative of what's coming.
When people laugh off AI x-risk because "LLMs are just a feed-forward architecture!" or "LLMs can only answer questions that are similar to something in their data!" I hear them as saying "Breakout just computes simple linear motion!" or "You can't play Doom inside Breakout!"
OK, BECAUSE AI HASN'T CONVERGED TO GOAL-COMPLETENESS YET. We're not ...