Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Demis Hassabis - Google DeepMind: The Podcast, published by Zach Stein-Perlman on August 16, 2024 on LessWrong.
The YouTube "chapters" are mixed up, e.g. the question about regulation comes 5 minutes after the regulation chapter ends. Ignore them.
Noteworthy parts:
8:40: Near-term AI is hyped too much (think current startups, VCs, exaggerated claims about what AI can do, crazy ideas that aren't ready) but AGI is under-hyped and under-appreciated.
16:45: "Gemini is a project that has only existed for a year . . . our trajectory is very good; when we talk next time we should hopefully be right at the forefront."
17:20-18:50: Current AI doesn't work as a digital assistant. The next era/generation is agents. DeepMind is well-positioned to work on agents: "combining AlphaGo with Gemini."
24:00: Staged deployment is nice: red-teaming, then closed beta, then public deployment.
28:37: Openness (at Google: e.g. publishing transformers, AlphaCode, AlphaFold) is almost always a universal good. But dual-use technology - including AGI - is an exception. With dual-use technology, you want good scientists to still use the technology and advance as quickly as possible, but also restrict access for bad actors. Openness is fine today, but in 2-4 years, or when systems are more agentic, it'll be dangerous.
Maybe labs should only open-source models that are lagging a year behind the frontier (and DeepMind will probably take this approach, and indeed is currently doing ~this by releasing Gemma weights).
31:20: "The problem with open source is if something goes wrong you can't recall it. With a proprietary model if your bad actor starts using it in a bad way you can close the tap off . . . but once you open-source something there's no pulling it back. It's a one-way door, so you should be very sure when you do that."
31:42: Can an AGI be contained? We don't know how to do that [this suggests a misalignment/escape threat model but it's not explicit]. Sandboxing and normal security is good for intermediate systems but won't be good enough to contain an AGI smarter than us. We'll have to design protocols for AGI in the future: "when that time comes we'll have better ideas for how to contain that, potentially also using AI systems and tools to monitor the next versions of the AI system."
33:00: Regulation? It's good that people in government are starting to understand AI and that AISIs are being set up before the stakes get really high. International cooperation on safety and deployment norms will be needed, since AI is digital and if e.g. China deploys an AI it won't be contained to China. Also:
Because the technology is changing so fast, we've got to be very nimble and light-footed with regulation so that it's easy to adapt it to where the latest technology's going. If you'd regulated AI five years ago, you'd have regulated something completely different to what we see today, which is generative AI. And it might be different again in five years; it might be these agent-based systems that [] carry the highest risks.
So right now I would [] beef up existing regulations in domains that already have them - health, transport, and so on - I think you can update them for AI just like they were updated for mobile and internet. That's probably the first thing I'd do, while . . . making sure you understand and test the frontier systems. And then as things become [clearer] start regulating around that; maybe in a couple years' time would make sense.
One of the things we're missing is [benchmarks and tests for dangerous capabilities].
My #1 emerging dangerous capability to test for is deception, because if the AI can be deceptive then you can't trust other tests [deceptive alignment threat model, but not explicit]. Also agency and self-replication.
37:10: We don't know how to design a system that could come up with th...