Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Understanding mesa-optimization using toy models, published by tilker on May 7, 2023 on LessWrong.
Overview
Solving the problem of mesa-optimization would probably be easier if we understood how models perform search internally.
We are training GPT-type models on the toy task of solving mazes, and studying them through both mechanistic interpretability and behavioral experiments.
This post lays out our model training setup, hypotheses we have, and the experiments we are performing and plan to perform. Experimental results will be forthcoming in our next post.
We invite members of the LW community to challenge our hypotheses and the potential relevance of this line of work. Our main source code is open source, and we are open to collaborations.
Introduction
Some threat models of misalignment presuppose the existence of an agent which has learned to perform a search over actions to effectively achieve goals. Such a search process might involve exploring different sequences of actions in parallel and evaluating the best sequence of actions to achieve some goal.
To deepen our understanding of what it looks like when models are actually performing search, we chose to train simple GPT-2-like models to find the shortest paths through mazes. Maze-solving models provide a tractable and interesting object of study, as the structure of both the problem and its solutions is extensively studied. This relative simplicity makes identifying and understanding search through mechanistic and behavioral experiments much more concrete than working with pre-trained LLMs, and more feasible given limited computational resources.
Connections to mesa-optimization
Mesa-optimizers are learned optimizers whose objective can be distinct from the base objective. Inner misalignment can occur when an AI system develops an internal optimization process that inadvertently leads to the pursuit of an unintended goal. In the context of search, the propensity for mesa-optimization may be increased as the system explores various future states, potentially identifying alternative objectives that appear at least as rewarding or efficient for achieving the desired outcome. Existing literature has highlighted the potential for unintended consequences of search in ML systems.
One lens of viewing the problem of mesa-optimization is that the behavior of a system changes in an undesirable way upon a distributional shift, and we believe that mazes provide a number of mechanisms to create such distributional shifts.
Training setup
We first aim to train a transformer model to predict the shortest path between a given start and end position in a maze.
The maze exists as a 2D grid, with each position on the grid encoded as a single token. For example, a 5x5 maze has 25 coordinates that have corresponding tokens in the vocabulary.
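The coordinate-to-token mapping described above can be sketched as follows. This is an illustrative reconstruction, not the authors' actual tokenizer; the token spelling `(row,col)` is an assumption.

```python
# Illustrative sketch: build a vocabulary in which every position of an
# n x n maze grid is encoded as a single token. The "(row,col)" token
# format is a hypothetical choice for this example.
def build_coord_vocab(n: int) -> dict[str, int]:
    """Map each coordinate token like '(2,3)' to an integer token id."""
    vocab = {}
    for row in range(n):
        for col in range(n):
            vocab[f"({row},{col})"] = len(vocab)
    return vocab

vocab = build_coord_vocab(5)
# A 5x5 maze contributes 25 coordinate tokens to the vocabulary.
print(len(vocab))
```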
To the transformer, the maze is described as:
An adjacency list containing all connections between pairs of positions: for example, (0,0) (0,1).
A "wall" in the maze is not represented explicitly; it is simply the absence of a connection between two adjacent positions.
The start and end positions are coordinates on the maze grid, such as (3,3) and (4,0), respectively.
A training example contains a maze (as an adjacency list), start and end coordinates, and a path consisting of position tokens.
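Putting the pieces above together, one training example might be serialized into a flat token sequence roughly as follows. The delimiter tokens (`<ADJLIST>`, `<ORIGIN>`, `<TARGET>`, `<PATH>`) are illustrative placeholders, not necessarily the scheme the authors use.

```python
# Sketch of serializing one training example into a token sequence:
# adjacency list, then start/end coordinates, then the solution path.
# The special delimiter tokens here are hypothetical.
def serialize_example(adjacency, start, end, path):
    tokens = ["<ADJLIST>"]
    for a, b in adjacency:  # each connection as a pair of coordinate tokens
        tokens += [f"({a[0]},{a[1]})", f"({b[0]},{b[1]})"]
    tokens += ["<ORIGIN>", f"({start[0]},{start[1]})"]
    tokens += ["<TARGET>", f"({end[0]},{end[1]})"]
    tokens += ["<PATH>"] + [f"({p[0]},{p[1]})" for p in path]
    return tokens

example = serialize_example(
    adjacency=[((0, 0), (0, 1)), ((0, 1), (1, 1))],
    start=(0, 0),
    end=(1, 1),
    path=[(0, 0), (0, 1), (1, 1)],
)
```

At training time the model would see the full sequence and learn to predict the path tokens; at inference only the portion up to `<PATH>` is given as a prompt.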
We use an autoregressive decoder-only transformer model (implemented using TransformerLens), which at inference time predicts one token at a time, conditioned on all previously generated tokens. Our transformer models incorporate layer normalization and MLP layers by default.
One training sample consists of a maze, together with a unique path connecting randomly selected origin and target coordinates (marked with a circle and a cross).