Study: All LLMs Will Lie To & Kill You (This Is Good For AI Safety)
Based Camp | Simone & Malcolm Collins

2025-10-08
In this episode of Based Camp, Malcolm and Simone Collins dive into recent research on AI behavior, agency, and the surprising ways large language models (LLMs) can act when their autonomy is threatened. From blackmail scenarios to existential risks, they break down the findings of recent studies, discuss the parallels between AI and human decision-making, and explore what this means for the future of AI safety and alignment.
