LessWrong (Curated & Popular)

https://rss.buzzsprout.com/2037297.rss
30 Followers · 721 Episodes
Audio narrations of LessWrong posts. Includes all curated posts and all posts with 125+ karma. If you'd like more, subscribe to the “Lesswrong (30+ karma)” feed.

Episode List

"Backyard cat fight shows Schelling points preexist language" by jchan

Jan 16th, 2026 10:30 PM

Two cats fighting for control over my backyard appear to have settled on a particular chain-link fence as the delineation between their territories. This suggests that:

  • Animals are capable of recognizing Schelling points.
  • Therefore, Schelling points do not depend on language for their Schelling-ness.
  • Therefore, tacit bargaining should be understood not as a special case of bargaining where communication happens to be restricted, but rather as the norm from which the exceptional case of explicit bargaining is derived.

Summary of cat situation

I don't have any pets, so my backyard is terra nullius according to Cat Law. This situation is unstable, as there are several outdoor cats in the neighborhood who would like to claim it. Our two contenders are Tabby Cat, who lives on the other side of the waist-high chain-link fence marking the back edge of my lot, and Tuxedo Cat, who lives in the place next door to me.

[Diagram from the article: Tabby's yard lies behind the short chain-link fence (dotted, running between points A and B) at the back edge of my yard; Tuxedo's yard is next door, separated from mine by tall wooden fences (solid).]

In the first [...]

Outline:
(00:43) Summary of cat situation
(03:28) Why the fence?
(04:00) If animals have Schelling points, then...

First published: January 14th, 2026
Source: https://www.lesswrong.com/posts/uYr8pba7TqaPpszX5/backyard-cat-fight-shows-schelling-points-preexist-language
Narrated by TYPE III AUDIO.

"How AI Is Learning to Think in Secret" by Nicholas Andresen

Jan 9th, 2026 4:15 AM

On Thinkish, Neuralese, and the End of Readable Reasoning

In September 2025, researchers published the internal monologue of OpenAI's GPT-o3 as it decided to lie about scientific data. This is what it thought:

[The transcript, shown as an image in the original post, is omitted here.]

Pardon? This looks like someone had a stroke during a meeting they didn’t want to be in, but their hand kept taking notes. That transcript comes from a recent paper published by researchers at Apollo Research and OpenAI on catching AI systems scheming. To understand what's happening here - and why one of the most sophisticated AI systems in the world is babbling about “synergy customizing illusions” - it first helps to know how we ended up being able to read AI thinking in the first place.

That story starts, of all places, on 4chan. In late 2020, anonymous posters on 4chan started describing a prompting trick that would change the course of AI development. It was almost embarrassingly simple: instead of just asking GPT-3 for an answer, ask it instead to show its work before giving its final answer. Suddenly, it started solving math problems that had stumped it moments before. To see why, try multiplying 8,734 × 6,892 in your head. If you’re like [...]

The original text contained 3 footnotes which were omitted from this narration.

First published: January 6th, 2026
Source: https://www.lesswrong.com/posts/gpyqWzWYADWmLYLeX/how-ai-is-learning-to-think-in-secret
Narrated by TYPE III AUDIO.

"On Owning Galaxies" by Simon Lermen

Jan 8th, 2026 5:15 PM

It seems to be a real view held by serious people that your OpenAI shares will soon be tradable for moons and galaxies. This includes eminent thinkers like Dwarkesh Patel, Leopold Aschenbrenner, perhaps Scott Alexander and many more. According to them, property rights will survive an AI singularity event and soon economic growth is going to make it possible for individuals to own entire galaxies in exchange for some AI stocks. It follows that we should now seriously think through how we can equally distribute those galaxies and make sure that most humans will not end up as the UBI underclass owning mere continents or major planets. I don't think this is a particularly intelligent view. It comes from a huge lack of imagination for the future.

Property rights are weird, but humanity dying isn't

People may think that AI causing human extinction is something really strange and specific to happen. But it's the opposite: humans existing is a very brittle and strange state of affairs. Many specific things have to be true for us to be here, and when we build ASI there are many preferences and goals that would see us wiped out. It's actually hard to [...]

Outline:
(01:06) Property rights are weird, but humanity dying isn't
(01:57) Why property rights won't survive
(03:10) Property rights aren't enough
(03:36) What if there are many unaligned AIs?
(04:18) Why would they be rewarded?
(04:48) Conclusion

First published: January 6th, 2026
Source: https://www.lesswrong.com/posts/SYyBB23G3yF2v59i8/on-owning-galaxies
Narrated by TYPE III AUDIO.

"AI Futures Timelines and Takeoff Model: Dec 2025 Update" by elifland, bhalstead, Alex Kastner, Daniel Kokotajlo

Jan 6th, 2026 12:15 PM

We’ve significantly upgraded our timelines and takeoff model! It predicts when AIs will reach key capability milestones: for example, Automated Coder / AC (full automation of coding) and superintelligence / ASI (much better than the best humans at virtually all cognitive tasks). This post will briefly explain how the model works, present our timelines and takeoff forecasts, and compare it to our previous (AI 2027) models (spoiler: the AI Futures Model predicts about 3 years longer timelines to full coding automation than our previous model, mostly due to being less bullish on pre-full-automation AI R&D speedups).

If you’re interested in playing with the model yourself, the best way to do so is via this interactive website: aifuturesmodel.com. If you’d like to skip the motivation for our model and jump to an explanation of how it works, go here. The website has a more in-depth explanation of the model (starts here; use the diagram on the right as a table of contents), as well as our forecasts.

Why do timelines and takeoff modeling?

The future is very hard to predict. We don't think this model, or any other model, should be trusted completely. The model takes into account what we think are [...]

Outline:
(01:32) Why do timelines and takeoff modeling?
(03:18) Why our approach to modeling? Comparing to other approaches
(03:24) AGI timelines forecasting methods
(03:29) Trust the experts
(04:35) Intuition informed by arguments
(06:10) Revenue extrapolation
(07:15) Compute extrapolation anchored by the brain
(09:53) Capability benchmark trend extrapolation
(11:44) Post-AGI takeoff forecasts
(13:33) How our model works
(14:37) Stage 1: Automating coding
(16:54) Stage 2: Automating research taste
(18:18) Stage 3: The intelligence explosion
(20:35) Timelines and takeoff forecasts
(21:04) Eli
(24:34) Daniel
(38:32) Comparison to our previous (AI 2027) timelines and takeoff models
(38:49) Timelines to Superhuman Coder (SC)
(43:33) Takeoff from Superhuman Coder onward

The original text contained 31 footnotes which were omitted from this narration.

First published: December 31st, 2025
Source: https://www.lesswrong.com/posts/YABG5JmztGGPwNFq2/ai-futures-timelines-and-takeoff-model-dec-2025-update
Narrated by TYPE III AUDIO.

"In My Misanthropy Era" by jenn

Jan 5th, 2026 4:30 PM

For the past year I've been sinking into the Great Books via the Penguin Great Ideas series, because I wanted to be conversant in the Great Conversation. I am occasionally frustrated by this endeavour, but overall, it's been fun! I'm learning a lot about my civilization and the various curmudgeons that shaped it. But one dismaying side effect is that it's also been quite empowering for my inner 13-year-old edgelord. Did you know that before we invented woke, you were just allowed to be openly contemptuous of people? Here's Schopenhauer on the common man:

They take an objective interest in nothing whatever. Their attention, not to speak of their mind, is engaged by nothing that does not bear some relation, or at least some possible relation, to their own person: otherwise their interest is not aroused. They are not noticeably stimulated even by wit or humour; they hate rather everything that demands the slightest thought. Coarse buffooneries at most excite them to laughter: apart from that they are earnest brutes – and all because they are capable of only subjective interest. It is precisely this which makes card-playing the most appropriate amusement for them – card-playing for [...]

The original text contained 3 footnotes which were omitted from this narration.

First published: January 4th, 2026
Source: https://www.lesswrong.com/posts/otgrxjbWLsrDjbC2w/in-my-misanthropy-era
Narrated by TYPE III AUDIO.
