Today we’re joined by Ben Prystawski, a PhD student in the Department of Psychology at Stanford University working at the intersection of cognitive science and machine learning. Our conversation centers on Ben’s recent paper, “Why think step by step? Reasoning emerges from the locality of experience,” which he presented at NeurIPS 2023. We start out exploring basic questions about LLM reasoning, including whether it exists, how it can be defined, and how techniques like chain-of-thought prompting appear to strengthen it. We then dig into the details of Ben’s paper, which aims to explain why thinking step by step is effective and demonstrates that local structure is the key property of LLM training data that enables it.
The complete show notes for this episode can be found at twimlai.com/go/673.