Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Experiences and learnings from both sides of the AI safety job market, published by Marius Hobbhahn on November 15, 2023 on The AI Alignment Forum.
I'm writing this in my own capacity. The views expressed are my own, and should not be taken to represent the views of Apollo Research or any other program I'm involved with.
In 2022, I applied to multiple full-time AI safety positions. Now I have switched sides and run multiple hiring processes for Apollo Research. Because of this, I feel like I understand the AI safety job market much better and may be able to help people who are looking for AI safety jobs get a better perspective.
This post obviously draws a lot from my personal experiences, many of which may not apply to your particular situation, so take my advice with a grain of salt.
Executive summary
In the late Summer of 2022, I applied to various organizations working on AI safety. I got to the final stages of multiple interview processes but never received an offer. I think in all cases, the organization chose correctly. The person who received the offer in my stead always seemed like a clearly better fit than me. At Apollo Research, we receive a lot of high-quality applications despite being a new organization. The demand for full-time employment in AI safety is really high.
Focus on getting good & provide legible evidence: Your social network helps a bit but doesn't substitute for a lack of skills, and grinding Leetcode (or other hacks for the interview process) probably doesn't make a big difference. In my experience, the interview processes of most AI safety organizations are meritocratic and high-signal.
If you want to get hired for an evals/interpretability job, do work on evals/interpretability and put it on your GitHub, do a SERI MATS stream with an evals/interpretability mentor, etc. This is probably my main advice: don't overcomplicate it; just get better at the work you want to get hired for and provide evidence of that.
Misc:
Make a plan: I found it helpful to determine a "default path" that I'd choose if all applications failed, rank the different opportunities, and get feedback on my plan from trusted friends.
The application process provides a lot of information: Most public writings of orgs are 3-6 months behind their current work. In the interviews, you typically learn about their latest work and plans, which is helpful even if you don't get an offer.
You have to care about the work you do: I often hear people talking about the instrumental value of doing some work, e.g. whether they should join an org for CV value. In moderation this is fine; when overdone, it will come back to haunt you. If you don't care about the object-level work you do, you'll be worse at it, and that will lead to a range of problems.
Honesty is a good policy: Being honest throughout the interview process is better for the system and probably also better for you. Interviewers typically spot when you lie about your abilities, and even if they didn't, you'd be found out the moment you start the job. The same is true to a lesser extent for "soft lies" like overstating your abilities or omitting important clarifications.
It can be hard & rejection feels bad
There is a narrative that there aren't enough AI safety researchers and many more people should work on AI safety. Thus, my (arguably naive) intuition when applying to different positions in 2022 was something like "I'm doing a Ph.D. in ML; I have read about AI safety extensively; there is a need for AI safety researchers; So it will be easy for me to find a position". In practice, this turned out to be wrong.
After running multiple hiring rounds within Apollo Research and talking to others who are hiring in AI safety, I understand why. There are way more good applicants than positions, and even very talented applicants might struggle to find a full...