Today we’re joined by Markus Nagel, research scientist at Qualcomm AI Research, who helps us kick off our coverage of NeurIPS 2023. In our conversation, Markus covers his accepted papers at the conference, along with other work presented by Qualcomm AI Research scientists. His first paper, Quantizable Transformers: Removing Outliers by Helping Attention Heads Do Nothing, tackles the activation quantization issues introduced by the attention mechanism and proposes a way to solve them. We also discuss Pruning vs Quantization: Which is Better?, which compares the effectiveness of these two methods for compressing model weights. Additional papers cover topics including using scalarization in multitask and multidomain learning to improve training and inference, using diffusion models to generate sequences of states and actions, applying geometric algebra with equivariance to transformers, and deductive verification of the chain-of-thought reasoning performed by LLMs.
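As a rough illustration of the two weight-compression approaches the Pruning vs Quantization paper compares (this toy sketch is not from the episode or the paper; all names and numbers are illustrative): pruning zeroes out small-magnitude weights, while quantization rounds every weight onto a coarse grid.

```python
import numpy as np

# Toy weight tensor standing in for one layer of a network.
rng = np.random.default_rng(0)
w = rng.normal(size=1000).astype(np.float32)

# Magnitude pruning: zero out the smallest 50% of weights by absolute value.
threshold = np.quantile(np.abs(w), 0.5)
w_pruned = np.where(np.abs(w) >= threshold, w, 0.0)

# Uniform 4-bit quantization: round each weight onto a 16-level grid
# spanning the observed min/max range.
levels = 2 ** 4
scale = (w.max() - w.min()) / (levels - 1)
w_quant = np.round((w - w.min()) / scale) * scale + w.min()

# Compare the reconstruction error (MSE) of the two compressed versions.
err_prune = np.mean((w - w_pruned) ** 2)
err_quant = np.mean((w - w_quant) ** 2)
```

Which method wins in practice depends on the bit budget, the weight distribution, and whether sparsity can actually be exploited by the hardware, which is the kind of trade-off the paper examines systematically.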
The complete show notes for this episode can be found at twimlai.com/go/663.