Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Dialogue on the Claim: "OpenAI's Firing of Sam Altman (And Shortly-Subsequent Events) On Net Reduced Existential Risk From AGI", published by johnswentworth on November 21, 2023 on LessWrong.
I've seen/heard a bunch of people in the LW-o-sphere saying that the OpenAI corporate drama this past weekend was clearly bad. And I'm not really sure why people think that? To me, it seems like a pretty clearly positive outcome overall.
I'm curious why in the world people are unhappy about it (people in the LW-sphere, that is; obviously I can see why e.g. AI accelerationists would be unhappy about it). And I also want to lay out my models.
Here's the high-gloss version of my take. The main outcomes are:
The leadership who were relatively most focused on racing to AGI and least focused on safety are moving from OpenAI to Microsoft. Lots of employees who are relatively more interested in racing to AGI than in safety will probably follow.
Microsoft is the sort of corporate bureaucracy where dynamic orgs/founders/researchers go to die. My median expectation is that whatever former OpenAI group ends up there will be far less productive than they were at OpenAI.
It's an open question whether OpenAI will stick around at all.
Insofar as they do, they're much less likely to push state-of-the-art in capabilities, and much more likely to focus on safety research.
Insofar as they shut down, the main net result will be a bunch of people who were relatively more interested in racing to AGI and less focused on safety moving to Microsoft, which is great.
My current (probably wrong) best guesses at why other people in the LW-o-sphere are saying this is terrible:
There's apparently been a lot of EA-hate on twitter as a result. I personally expect this to matter very little, if at all, in the long run, but I'd expect it to be extremely disproportionately salient to rationalists/EAs/alignment folk.
OpenAI was an organization with a lot of AGI-accelerationists, and maybe people thought OpenAI was steering those accelerationist impulses in more safety-friendly directions, whereas Microsoft won't?
Obviously the board executed things relatively poorly. They should have shared their reasons/excuses for the firing. (For some reason, in politics/corporate politics, people try to be secretive all the time and this seems-to-me to be very stupid in like 80+% of cases, including this one.) I don't think that mistake will actually matter that much in the long term, but I can see why people focused on it would end up with a sort of general negative valence around the board's actions.
(Quick caveat that I think this question will be easier to judge once more info comes out. That said, I think thinking about it even now is useful for surfacing and sharing relevant observations and considerations.)
I think what happens to Sam and others who end up at Microsoft is a pretty big crux here. If I thought that those going to Microsoft would indeed get caught in bureaucracy and not accomplish as much, and also that those staying behind wouldn't push capabilities as hard, that might make the whole thing good for x-risk.
I'm not overwhelmingly confident here, but my impression is that Sama might be competent enough to cut through the bureaucracy and get a lot done notwithstanding, and more than that, by being competent and understanding AI, he might end up running much of Microsoft. And being there gives him a lot more resources with less effort than the whole invest-in-OpenAI cycle, and with fewer restrictions than he had at OpenAI.
One question is how independently he could operate. Nadella mentioned LinkedIn and Github (?) operating quite independently within Microsoft. Also, I think Microsoft will feel they have to "be nice" to Sama, as he is likely their key to AI dominance. He clearly commands a following and could go elsewhere, and ...