Today we’re joined by Markus Nagel, research scientist at Qualcomm AI Research, who helps us kick off our coverage of NeurIPS 2023. In our conversation, Markus covers his accepted papers at the conference, along with other work presented by Qualcomm AI Research scientists. Markus’ first paper, Quantizable Transformers: Removing Outliers by Helping Attention Heads Do Nothing, focuses on the activation quantization issues introduced by the attention mechanism and how to solve them. We also discuss Pruning vs Quantization: Which is Better?, which compares the effectiveness of these two methods for compressing model weights. Additional papers discussed cover topics like using scalarization in multitask and multidomain learning to improve training and inference, using diffusion models to model sequences of states and actions, applying geometric algebra with equivariance to transformers, and applying deductive verification to chain-of-thought reasoning performed by LLMs.
The complete show notes for this episode can be found at twimlai.com/go/663.
Why Deep Networks and Brains Learn Similar Features with Sophia Sanborn - #644
Inverse Reinforcement Learning Without RL with Gokul Swamy - #643
Explainable AI for Biology and Medicine with Su-In Lee - #642
Transformers On Large-Scale Graphs with Bayan Bruss - #641
The Enterprise LLM Landscape with Atul Deo - #640
BloombergGPT - an LLM for Finance with David Rosenberg - #639
Are LLMs Good at Causal Reasoning? with Robert Osazuwa Ness - #638
Privacy vs Fairness in Computer Vision with Alice Xiang - #637
Unifying Vision and Language Models with Mohit Bansal - #636
Data Augmentation and Optimized Architectures for Computer Vision with Fatih Porikli - #635
Mojo: A Supercharged Python for AI with Chris Lattner - #634
Stable Diffusion and LLMs at the Edge with Jilei Hou - #633
Modeling Human Behavior with Generative Agents with Joon Sung Park - #632
Towards Improved Transfer Learning with Hugo Larochelle - #631
Language Modeling With State Space Models with Dan Fu - #630
Building Maps and Spatial Awareness in Blind AI Agents with Dhruv Batra - #629
AI Agents and Data Integration with GPT and LLaMa with Jerry Liu - #628
Hyperparameter Optimization through Neural Network Partitioning with Christos Louizos - #627
Are LLMs Overhyped or Underappreciated? with Marti Hearst - #626
Are Large Language Models a Path to AGI? with Ben Goertzel - #625