Can there be a more exciting and strange place to work today than a leading AI lab? Your CEO has said they're worried your research could cause human extinction. The government is setting up meetings to discuss how this outcome can be avoided. Some of your colleagues think this is all overblown; others are more anxious still.
Today's guest — machine learning researcher Rohin Shah — goes into the Google DeepMind offices each day with that peculiar backdrop to his work.
He's on the team dedicated to 'technical AI safety' as these models approach and exceed human capabilities: ensuring, basically, that the models help humanity accomplish its goals without flipping out in some dangerous way. This work has never seemed more important.
In the short term, it could be the key bottleneck to deploying ML models in high-stakes real-life situations. In the long term, it could be the difference between humanity thriving and disappearing entirely.
For years Rohin has been on a mission to fairly hear out people across the full spectrum of opinion about risks from artificial intelligence, from doomers to doubters, and to properly understand their points of view. That makes him unusually well placed to give an overview of what we do and don't understand. He has landed somewhere in the middle — troubled by ways things could go wrong, but not convinced there are very strong reasons to expect a terrible outcome.
Today's conversation is wide-ranging, and Rohin lays out many of his personal opinions to host Rob Wiblin.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.
Producer: Keiran Harris
Audio mastering: Milo McGuire, Dominic Armstrong, and Ben Cordell
Transcriptions: Katy Moore