As AI advances ever more quickly, concerns about potential misuse of highly capable models are growing. From hostile foreign governments and terrorists to reckless entrepreneurs, the threat of AI falling into the wrong hands is top of mind for the national security community.
Reflecting those concerns about the use of AI in military applications, the US has banned the export of the most advanced AI chips to China.
But unlike the uranium required to make nuclear weapons, or the material inputs to a bioweapons programme, computer chips and machine learning models are absolutely everywhere. So is it actually possible to keep dangerous capabilities out of the wrong hands?
In today's interview, Lennart Heim — who researches compute governance at the Centre for the Governance of AI — explains why limiting access to supercomputers may represent our best shot.
Links to learn more, summary and full transcript.
As Lennart explains, an AI research project requires many inputs, including the classic triad of compute, algorithms, and data.
If we want to limit access to the most advanced AI models, focusing on access to supercomputing resources -- usually called 'compute' -- might be the way to go. Both algorithms and data are hard to control because they live on hard drives and can be easily copied. By contrast, advanced chips are physical items that can't be used by multiple people at once and come from a small number of sources.
According to Lennart, the hope would be to enforce AI safety regulations by controlling access to the most advanced chips specialised for AI applications. For instance, projects training 'frontier' AI models — the newest and most capable models — might only gain access to the supercomputers they need if they obtain a licence and follow industry best practices.
We have similar safety rules for companies that fly planes or manufacture volatile chemicals — so why not for people producing the most powerful and perhaps the most dangerous technology humanity has ever played with?
But Lennart is quick to note that the approach faces many practical challenges. Currently, AI chips are readily available and untracked. Changing that will require the collaboration of many actors, which might be difficult, especially given that some of them aren't convinced of the seriousness of the problem.
Host Rob Wiblin is particularly concerned about a different challenge: the increasing efficiency of AI training algorithms. As these algorithms become more efficient, what once required a specialised AI supercomputer to train might soon be achievable with a home computer.
By that point, tracking every potentially dangerous concentration of compute would be both impractical and invasive.
With only a decade or two left before that becomes a reality, the window during which compute governance is a viable solution may be a brief one. Top AI labs have already stopped publishing their latest algorithms, which might extend this 'compute governance era', but not for very long.
If compute governance is only a temporary phase between the era of difficult-to-train superhuman AI models and the time when such models are widely accessible, what can we do to prevent misuse of AI systems after that point?
Lennart and Rob both think the only enduring approach relies on the AI capabilities in the hands of police and governments — which will hopefully remain superior to those held by criminals, terrorists, or fools. But as they describe, this means maintaining a peaceful standoff between AI models with conflicting goals that can act and fight one another on the microsecond timescale. Being far too slow to follow what's happening -- let alone participate -- humans would have to be cut out of any defensive decision-making.
Both agree that while this may be our best option, such a vision of the future is more terrifying than reassuring.
Lennart and Rob discuss all of the above, as well as a range of related topics.
Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.
Producer: Keiran Harris
Audio mastering: Milo McGuire, Dominic Armstrong, and Ben Cordell
Transcriptions: Katy Moore