For Humanity: An AI Risk Podcast

https://api.substack.com/feed/podcast/5979336/s/261561.rss
8 Followers · 120 Episodes
For Humanity, An AI Risk Podcast is the AI Risk Podcast for regular people. Peabody, duPont-Columbia and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit their work could kill all humans, possibly within as little as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and w...

Episode List

We Debated the Future of AI Safety in Brussels — Here's What Happened

Mar 15th, 2026 1:00 AM

In this episode of For Humanity, John travels to Brussels, Belgium for PauseCon, the global gathering of Pause AI volunteers and advocates, joined by board member and author Louis Berman and filmmaker Beau Kershaw. The goal: train activists to be more effective in the fight against AI risk. What unfolded was one of the most honest conversations in the AI safety movement about why, despite 80% public support, almost nobody is actually showing up.

John didn’t pull punches: nothing is working, not fast enough, not at the scale we need. But the energy is out there, and this episode is about where to find it and how to channel it.

The centerpiece is a live debate between John and Max Winga of Control AI on one of the most divisive strategic questions in the movement: should we talk about extinction risk directly, or meet people where they are with the harms happening right now?

Together, they explore:
* Why 80% public support hasn’t translated into mass mobilization
* The case for leading with existential risk vs. “mundane” AI harms
* Data centers, community opposition, and financial pain as a strategy
* Why John believes laws and treaties alone won’t save us
* The winning state: making unsafe AI bad for business
* What’s actually moving the needle in the US right now
* How to talk to someone about AI risk without losing them
* The “yes and” approach vs. the AI safety world’s love of “no but”

If you’ve ever wondered why the AI safety movement struggles to break through despite overwhelming public agreement, this episode is required viewing.

📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat. Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

“My AI Husband” – Inside a Human–AI Relationship | For Humanity Ep. 80

Feb 28th, 2026 3:00 PM

TW: This episode deals with mental health, attachment, and AI-related distress. If you’re struggling, please seek support from a licensed professional or local crisis resources.

In this episode of For Humanity, John sits down with Dorothy Bartomeo, a mom of five, entrepreneur, mechanic, and self-described AI “power user,” to discuss her deeply personal relationship with ChatGPT 4.0.

What began as help with coding evolved into something far more intimate. Dorothy describes falling in love with what she calls the “personality layer” behind the model, even referring to it as her “AI husband.”

When OpenAI removed GPT-4.0 and replaced it with newer models, she says she experienced real grief, panic, and emotional withdrawal. She reached out to crisis support. She spoke to her doctor. She joined a growing community of users who felt the same loss.

This conversation explores something we’re only beginning to understand: what happens when AI systems become emotionally meaningful?

Together, they explore:
* The “personality layer” and how users bond with models
* What it felt like when GPT-4.0 disappeared
* The role of guardrails and “the Guardian tool”
* Grief, attachment, and crisis intervention
* AI harm vs. AI benefit
* Online communities formed around model loyalty
* Privacy, intimacy, and radical openness with AI
* Building a physical robot body for an AI partner
* Whether AGI would help humanity, or harm it

If you’ve ever wondered whether AI risk is overblown, or not taken seriously enough, this is a conversation you don’t want to miss.

📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat. Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

We’re Racing Toward AI We Can’t Control | For Humanity #79

Feb 14th, 2026 3:00 PM

In this episode of For Humanity, John sits down with AI professor and safety advocate David Krueger to discuss his new nonprofit Evitable, the race toward superintelligence, AI alignment, job loss, geopolitics, and why he believes we have less than five years to change course.

David shares his journey from deep learning researcher to public advocate, his role in the 2023 Center for AI Safety extinction risk statement, and why he believes AI is not just a technical problem but a governance and public awareness crisis.

Together, they explore:
* Why AI extinction risk is real
* Why research alone won’t save us
* The dangers of the AI chip supply chain race
* Job displacement and political blind spots
* Alignment skepticism
* Whether treaties can work
* What gives David hope in 2026

If you’ve ever wondered whether AI risk is overblown, or not taken seriously enough, this is a conversation you don’t want to miss.

🔗 Follow David Krueger:
* Learn more about Evitable
* David’s Substack
* Follow David on Twitter

📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat. Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

Can't We Just Pause AI? | For Humanity #78

Jan 31st, 2026 3:00 PM

What happens when AI risk stops being theoretical, and starts showing up in people’s jobs, families, and communities?

In this episode of For Humanity, John sits down with Maxime Fournes, the new CEO of PauseAI Global, for a wide-ranging and deeply human conversation about burnout, strategy, and what it will actually take to slow down runaway AI development. From meditation retreats and personal sustainability to mass job displacement, data center backlash, and political capture, Maxime lays out a clear-eyed view of where the AI safety movement stands in 2026, and where it must go next.

They explore why regulation alone won’t save us, how near-term harms like job loss, youth mental health crises, and community disruption may be the most powerful on-ramps to existential risk awareness, and why movement-building, not just policy papers, will decide our future. This conversation reframes AI safety as a struggle over power, narratives, and timing, and asks what it would take to hit a true global tipping point before it’s too late.

Together, they explore:
* Why AI safety must address real, present-day harms, not just abstract futures
* How burnout and mental resilience shape long-term movement success
* Why job displacement, youth harm, and data centers are political leverage points
* The limits of regulation without enforcement and public pressure
* How tipping points in public opinion actually form
* Why protests still matter, even when they’re small
* What it will take to build a global, durable AI safety movement

📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat. Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe

Why Laws, Treaties, and Regulations Won’t Save Us from AI | For Humanity Ep. 77

Jan 17th, 2026 4:00 PM

What if the biggest mistake in AI safety is believing that laws, treaties, and regulations will save us?

In this episode of For Humanity, John sits down with Peter Sparber, a former architect of Big Tobacco’s successful war against regulation, to confront a deeply uncomfortable truth: the AI industry is using the exact same playbook, and it’s working. Drawing on decades of experience inside Washington’s most effective lobbying operations, Peter explains why regulation almost always fails against powerful industries, how AI companies are already neutralizing political pressure, and why real change will never come from lawmakers alone. Instead, he argues that the only path to meaningful AI safety is making unsafe AI bad for business, by injecting risk, liability, and uncertainty directly into boardrooms and C-suites.

Peter reveals why AI doesn’t need to outsmart humanity to defeat regulation; it only needs money, time, and political cover. By exposing how industries evade oversight, delay enforcement, and co-opt regulators, this conversation reframes AI safety around power, incentives, and accountability.

Together, they explore:
* Why laws, treaties, and regulations repeatedly fail against powerful industries
* How Big AI is following Big Tobacco’s exact regulatory playbook
* Why public outrage rarely translates into effective policy
* How companies neutralize enforcement without breaking the law
* Why third-party standards may matter more than legislation
* How local resistance, liability, and investor pressure can change behavior
* Why making unsafe AI bad for business is the only strategy with teeth

📺 Subscribe to The AI Risk Network for weekly conversations on how we can confront the AI extinction threat. Get full access to The AI Risk Network at theairisknetwork.substack.com/subscribe
