Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The theory of Proximal Policy Optimisation implementations, published by salman.mohammadi on April 11, 2024 on The AI Alignment Forum.
Prelude
The aim of this post is to share my understanding of some of the conceptual and theoretical background behind implementations of the Proximal Policy Optimisation (PPO) reinforcement learning (RL) algorithm. PPO is widely used due to its stability and sample efficiency - popular applications include beating the Dota 2 world champions and aligning language models.
While the PPO paper provides quite a general and straightforward overview of the algorithm, modern implementations of PPO use several additional techniques to achieve state-of-the-art performance in complex environments [1]. You might discover this if you try to implement the algorithm solely based on the paper. I try to present a coherent narrative here around these additional techniques.
I'd recommend reading parts one, two, and three of SpinningUp if you're new to reinforcement learning. There are a few longer-form educational resources that I'd recommend if you'd like a broader understanding of the field [2], but this isn't comprehensive. You should be familiar with common concepts and terminology in RL [3]. For clarity, I'll try to spell out any jargon I use here.
Recap
Policy Gradient Methods
PPO is an on-policy reinforcement learning algorithm. It directly learns a stochastic policy function parameterised by θ representing the likelihood of action a in state s, πθ(a|s). Consider that we have some differentiable function, J(θ), which is a continuous performance measure of the policy πθ. In the simplest case, we have $J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta}[R(\tau)]$, which is known as the return [4] over a trajectory [5], τ.
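As a concrete illustration (mine, not from the original post), here is a minimal sketch of estimating J(θ) by sampling trajectories and averaging their returns. It assumes a Gymnasium-style environment API and some callable `policy` that samples actions from πθ(a|s); both names are illustrative.

```python
import numpy as np

def estimate_performance(env, policy, num_trajectories=10, gamma=0.99):
    """Monte Carlo estimate of J(theta) = E_{tau ~ pi_theta}[R(tau)].

    `policy(state)` is assumed to sample an action from pi_theta(a|s);
    `env` is assumed to follow the Gymnasium reset/step API.
    """
    returns = []
    for _ in range(num_trajectories):
        state, _ = env.reset()
        done, discount, trajectory_return = False, 1.0, 0.0
        while not done:
            action = policy(state)                      # a ~ pi_theta(a|s)
            state, reward, terminated, truncated, _ = env.step(action)
            trajectory_return += discount * reward      # accumulate the (discounted) return R(tau)
            discount *= gamma
            done = terminated or truncated
        returns.append(trajectory_return)
    return np.mean(returns)  # sample estimate of J(theta)
```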
PPO is a kind of policy gradient method [6] which directly optimizes the policy parameters θ against J(θ). The policy gradient theorem shows that:
$$\nabla_\theta J(\theta) = \mathbb{E}\left[\sum_{t=0}^{\infty} \nabla_\theta \ln \pi_\theta(a_t|s_t)\, R_t\right]$$
In other words, the gradient of our performance measure J(θ) with respect to our policy parameters θ points in the direction of maximising the return Rt. Crucially, this shows that we can estimate the true gradient using an expectation of the sample gradient - the core idea behind the REINFORCE [7] algorithm. This is great. This expression has a more general form which replaces Rt with some lower-variance estimator of the total expected reward, Φ [8]:
$$\nabla_\theta J(\theta) = \mathbb{E}\left[\sum_{t=0}^{\infty} \nabla_\theta \ln \pi_\theta(a_t|s_t)\, \Phi_t\right] \tag{1}$$
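As a rough sketch of how equation (1) translates to code (my own illustration, assuming PyTorch and interpreting Rt as the discounted return from step t onwards), the expectation becomes a sample over a logged trajectory, and the gradient is obtained by autodiff through a surrogate loss:

```python
import torch

def policy_gradient_loss(log_probs: torch.Tensor, rewards: torch.Tensor, gamma: float = 0.99) -> torch.Tensor:
    """Surrogate loss whose gradient is the sample estimate of equation (1).

    `log_probs[t]` is log pi_theta(a_t|s_t) for the action actually taken;
    `rewards[t]` is the reward received at step t.
    """
    # Rewards-to-go: R_t = sum_{k >= t} gamma^(k - t) * r_k
    returns = torch.zeros_like(rewards)
    running = torch.tensor(0.0)
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    # Minimising this loss ascends grad_theta J(theta) = E[sum_t grad log pi_theta(a_t|s_t) R_t]
    return -(log_probs * returns).sum()
```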
Modern implementations of PPO make the choice of Φt=Aπ(st,at), the advantage function. This function estimates the advantage of a particular action in a given state over the expected value of following the policy, i.e. how much better is taking this action in this state than acting according to the policy on average? Briefly described here, the advantage function takes the form
$$A^\pi(s,a) = Q^\pi(s,a) - V^\pi(s)$$

where V(s) is the state-value function, and Q(s,a) is the state-action value function, or Q-function [9].
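As a toy illustration of this identity (mine, not from the post, with made-up numbers), for a tiny discrete problem the advantage simply measures how much each action's Q-value exceeds the state value under the current policy:

```python
import torch

# Illustrative only: tabular Q-values for a tiny MDP with 3 states and 2 actions.
q_values = torch.tensor([[1.0, 0.5],
                         [0.2, 0.8],
                         [0.0, 0.0]])                             # Q(s, a)
policy_probs = torch.tensor([0.5, 0.5])                           # a uniform policy pi(a|s)
state_values = (q_values * policy_probs).sum(dim=1)               # V(s) = E_{a ~ pi}[Q(s, a)]
advantages = q_values - state_values.unsqueeze(1)                 # A(s, a) = Q(s, a) - V(s)
```

A positive entry in `advantages` means that action is better than what the policy would do on average in that state, which is exactly the quantity PPO uses in place of Φt.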
I've found it easier to intuit the nuances of PPO by following the narrative around its motivations and predecessor. PPO iterates on the Trust Region Policy Optimization (TRPO) method, which constrains the objective function with respect to the size of the policy update. The TRPO objective function is defined as [10][11]:
$$J(\theta) = \mathbb{E}\left[\frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t|s_t)}\, A_t\right] \quad \text{subject to} \quad \mathbb{E}\left[\mathrm{KL}\left(\pi_{\theta_{\mathrm{old}}} \,\|\, \pi_\theta\right)\right] \le \delta$$
Where KL is the Kullback-Leibler divergence (a measure of distance between two probability distributions), and the size of the policy update is defined as the ratio between the new policy and the old policy:
$$r(\theta) = \frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t|s_t)}$$
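Here is a minimal sketch of how these two quantities might be computed in practice (my own, assuming PyTorch and a discrete action space). Implementations typically work with log-probabilities, so the ratio becomes the exponential of a difference of log-probs, and the KL term can be taken between the old and new action distributions:

```python
import torch
from torch.distributions import Categorical, kl_divergence

def probability_ratio(new_log_probs: torch.Tensor, old_log_probs: torch.Tensor) -> torch.Tensor:
    """r(theta) = pi_theta(a_t|s_t) / pi_theta_old(a_t|s_t), computed in log space for stability."""
    # old_log_probs were recorded when the trajectory was collected and are
    # treated as constants, so no gradient flows through them.
    return torch.exp(new_log_probs - old_log_probs.detach())

def mean_kl(old_logits: torch.Tensor, new_logits: torch.Tensor) -> torch.Tensor:
    """E[KL(pi_theta_old || pi_theta)], averaged over a batch of states."""
    old_policy = Categorical(logits=old_logits.detach())  # pi_theta_old: fixed behaviour policy
    new_policy = Categorical(logits=new_logits)           # pi_theta: the policy being optimised
    return kl_divergence(old_policy, new_policy).mean()
```

The surrogate term in the constrained objective above would then be something like `(probability_ratio(new_log_probs, old_log_probs) * advantages).mean()`, maximised subject to the KL constraint.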
Policy gradient methods optimise policies through (ideally small) iterative gradient updates to parameters θ. The old policy, πθold(at|st), is the one used to generate the current trajectory, and the new policy, πθ(at|st), is the policy currently being optimised [12]. If the advantage is positive, then the new policy becomes greedier relative ...