Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: OpenAI: Altman Returns, published by Zvi on November 30, 2023 on LessWrong.
As of this morning, the new board is in place and everything else at OpenAI is otherwise officially back to the way it was before.
Events seem to have gone as expected. If you have read my previous two posts on the OpenAI situation, nothing here should surprise you.
Still seems worthwhile to gather the postscripts, official statements and reactions into their own post for future ease of reference.
What will the ultimate result be? We will likely only find that out gradually over time, as we await both the investigation and the composition and behavior of the new board.
I do not believe Q* played a substantive role in events, so it is not included here. I also do not include discussion here of how good or bad Altman has been for safety.
Sam Altman's Statement
Here is the official OpenAI statement from Sam Altman. He was magnanimous towards all, the classy and also smart move no matter the underlying facts. As he has throughout, he has let others spread hostility, work the press narrative and shape public reaction, while he himself almost entirely offers positivity and praise. Smart.
Before getting to what comes next, I'd like to share some thanks.
I love and respect Ilya, I think he's a guiding light of the field and a gem of a human being. I harbor zero ill will towards him. While Ilya will no longer serve on the board, we hope to continue our working relationship and are discussing how he can continue his work at OpenAI.
I am grateful to Adam, Tasha, and Helen for working with us to come to this solution that best serves the mission. I'm excited to continue to work with Adam and am sincerely thankful to Helen and Tasha for investing a huge amount of effort in this process.
Thank you also to Emmett who had a key and constructive role in helping us reach this outcome. Emmett's dedication to AI safety and balancing stakeholders' interests was clear.
Mira did an amazing job throughout all of this, serving the mission, the team, and the company selflessly throughout. She is an incredible leader and OpenAI would not be OpenAI without her. Thank you.
Greg and I are partners in running this company. We have never quite figured out how to communicate that on the org chart, but we will. In the meantime, I just wanted to make it clear. Thank you for everything you have done since the very beginning, and for how you handled things from the moment this started and over the last week.
The leadership team - Mira, Brad, Jason, Che, Hannah, Diane, Anna, Bob, Srinivas, Matt, Lilian, Miles, Jan, Wojciech, John, Jonathan, Pat, and many more - is clearly ready to run the company without me. They say one way to evaluate a CEO is how you pick and train your potential successors; on that metric I am doing far better than I realized. It's clear to me that the company is in great hands, and I hope this is abundantly clear to everyone. Thank you all.
Let that last paragraph sink in. The leadership team ex-Greg is clearly ready to run the company without Altman.
That means that, whatever caused the board to fire Altman, and whether or not Altman forced the board's hand to varying degrees, OpenAI would have been fine if everyone involved had chosen to continue without him. We can choose to believe or not believe Altman's claims in his Verge interview that he only considered returning after the board called him on Saturday, and we can speculate on what Altman otherwise did behind the scenes during that time. We don't know. We can of course guess, but we do not know.
He then talks about his priorities.
So what's next?
We have three immediate priorities.
Advancing our research plan and further investing in our full-stack safety efforts, which have always been critical to our work. Our research roadmap is clear; this was a wonde...