Today we’re joined by Roland Memisevic, a Senior Director at Qualcomm AI Research. In our conversation, we discuss the significance of language in humanlike AI systems and the advantages and limitations of autoregressive models like Transformers for building them. We cover the current and future role of recurrence in LLM reasoning and the importance of improving grounding in AI, including the potential for agents to develop a sense of self. Along the way, we discuss Fitness Ally, a fitness coach built on a visually grounded large language model, which has served as a platform for Roland’s research into neural reasoning, as well as recent work exploring visual grounding for large language models, state-augmented architectures for AI agents, and the use of deductive reasoning to verify the results of chain-of-thought prompting strategies with ChatGPT.
The complete show notes for this episode can be found at twimlai.com/go/646.
Synthetic Data Generation for Robotics with Bill Vass - #588
Multi-Device, Multi-Use-Case Optimization with Jeff Gehlhaar - #587
Causal Conceptions of Fairness and their Consequences with Sharad Goel - #586
Brain-Inspired Hardware and Algorithm Co-Design with Melika Payvand - #585
Equivariant Priors for Compressed Sensing with Arash Behboodi - #584
Managing Data Labeling Ops for Success with Audrey Smith - #583
Engineering an ML-Powered Developer-First Search Engine with Richard Socher - #582
On The Path Towards Robot Vision with Aljosa Osep - #581
More Language, Less Labeling with Kate Saenko - #580
Optical Flow Estimation, Panoptic Segmentation, and Vision Transformers with Fatih Porikli - #579
Data Governance for Data Science with Adam Wood - #578
Feature Platforms for Data-Centric AI with Mike Del Balso - #577
The Fallacy of "Ground Truth" with Shayan Mohanty - #576
Principle-centric AI with Adrien Gaidon - #575
Data Debt in Machine Learning with D. Sculley - #574
AI for Enterprise Decisioning at Scale with Rob Walker - #573
Data Rights, Quantification and Governance for Ethical AI with Margaret Mitchell - #572
Studying Machine Intelligence with Been Kim - #571
Advances in Neural Compression with Auke Wiggers - #570
Mixture-of-Experts and Trends in Large-Scale Language Modeling with Irwan Bello - #569