Today we’re joined by Markus Nagel, research scientist at Qualcomm AI Research, who helps us kick off our coverage of NeurIPS 2023. In our conversation with Markus, we cover his accepted papers at the conference, along with other work presented by Qualcomm AI Research scientists. Markus’ first paper, Quantizable Transformers: Removing Outliers by Helping Attention Heads Do Nothing, tackles the activation quantization issues introduced by the attention mechanism and how to solve them. We also discuss Pruning vs Quantization: Which is Better?, which compares the effectiveness of these two methods for compressing model weights. Additional papers discussed cover topics like using scalarization in multitask and multidomain learning to improve training and inference, using diffusion models over sequences of states and actions, applying geometric algebra with equivariance to transformers, and deductive verification of chain-of-thought reasoning performed by LLMs.
The complete show notes for this episode can be found at twimlai.com/go/663.