977: Attention, World Models and the Future of AI, with Prof. Kyunghyun Cho
What’s going to be the next big step function that blasts us forward in AI capabilities? To find out, Jon Krohn sits down with Professor Kyunghyun Cho, whose 200,000 citations and co-authorship of the first paper on attention place him among the most influential AI researchers in the world. In this episode, Kyunghyun explains why today’s models have already captured most correlations in passive data, making the real challenge about actively choosing which data to collect. He also weighs in on the open debate around world models: does AI need high-fidelity, step-by-step imagination, or is a high-level latent representation that lets it skip ahead sufficient? And he shares the surprising discovery that 80% of his 200 computer science students had never installed a coding agent.
Additional materials: www.superdatascience.com/977
Interested in sponsoring a SuperDataScience Podcast episode? Email natalie@superdatascience.com for sponsorship information.
In this episode you will learn:
(06:43) The story behind the attention mechanism
(28:43) Sample efficiency and active data collection
(39:04) World models and latent planning
(49:52) Teaching undergrads with coding agents
(58:21) Reranking, multi-stage ranking, and the foundations of RAG
976: NVIDIA’s Nemotron 3 Super: The Perfect LLM for Multi-Agent Systems
NVIDIA just dropped Nemotron 3 Super, a 120-billion-parameter open-weight model that activates only 12 billion parameters at a time, and it’s built for the agentic AI era. In this Five-Minute Friday, Jon Krohn breaks down the model’s hybrid Mamba-Transformer architecture, its million-token context window, and why its combination of frontier-class reasoning and blazing-fast throughput matters for anyone building multi-agent systems. Find out how Nemotron 3 Super claimed the #1 spot on the DeepResearch Bench leaderboards, which companies are already adopting it, and where you can start using it today.
Additional materials: www.superdatascience.com/976
Interested in sponsoring a SuperDataScience Podcast episode? Email natalie@superdatascience.com for sponsorship information.
975: Unmetered Intelligence is Heralding the Next Renaissance, with Zack Kass
Zack Kass speaks to Jon Krohn about his bestselling, tech-positive book, The Next Renaissance, which charts the rapid progress of humanity and the benefits that artificial intelligence will bring us, and explains why a future where intelligence is a cheap and abundant resource will give humanity an edge. Elsewhere in the show, Zack discusses why it’s important to hold parents, teachers, and students accountable for their education, why it is incumbent on us to build a healthier relationship with technology, and his four principles for thriving in the age of AI.
This episode is brought to you by Cisco, by Acceldata, and by ODSC, the Open Data Science Conference.
Additional materials: www.superdatascience.com/975
Interested in sponsoring a SuperDataScience Podcast episode? Email natalie@superdatascience.com for sponsorship information.
In this episode you will learn:
(03:14) About Zack Kass’ book, The Next Renaissance
(20:18) The importance of literacy skills in the age of AI
(28:01) AI in education
(41:01) Principles for living in the era of AI
974: When Will The AI Bubble Burst? How Bad Will It Be?
In this week’s Five-Minute Friday, Jon Krohn holds the AI bubble up to the light. He points to the deep grey zone occupied by AI startups like Cluely, founded on dubious ideas (Cluely’s tagline was “cheat on everything”) and funding bluster, as well as the staggering spending by companies on infrastructure and researcher salaries. Listen to the episode to hear about the historical precedents for the AI bubble, going all the way back to the invention of the railway, what to make of current investments in AI, and what you can do about these changes as an AI practitioner.
Additional materials: www.superdatascience.com/974
Interested in sponsoring a SuperDataScience Podcast episode? Email natalie@superdatascience.com for sponsorship information.
973: AI Systems Performance Engineering, with Chris Fregly
No one should be manually writing code in 2026, thinks Chris Fregly, Jon Krohn’s guest on this week’s episode. In this interview about Chris’ latest book, AI Systems Performance Engineering, he explains why it’s so important to consider memory bandwidth when evaluating GPU performance, why understanding the full hardware-software stack is the most valuable skill for anyone working in AI development, and which shortcuts we still shouldn’t ever take when writing code, even though we might be outsourcing a great deal to generative AI.
This episode is brought to you by Cisco, by Acceldata, and by ODSC, the Open Data Science Conference.
Additional materials: www.superdatascience.com/973
Interested in sponsoring a SuperDataScience Podcast episode? Email natalie@superdatascience.com for sponsorship information.
In this episode you will learn:
(03:39) Why Chris wrote AI Systems Performance Engineering
(21:39) Essential coding metrics
(37:24) The importance of inference when coding
(42:11) How to manage workflows while using AI agents
(51:37) Where and how to invest in the AI market