Today we’re joined by Sara Hooker, director at Cohere and head of Cohere For AI, Cohere’s research lab. In our conversation, Sara explores some of the challenges facing multilingual models, such as poor data quality and tokenization, and how techniques like data augmentation and preference training can address these bottlenecks. We also discuss the motivations behind, and drawbacks of, the Mixture of Experts technique, and the importance of a common language between ML researchers and hardware architects for addressing pain points in frameworks and creating better cohesion between these distinct communities. Sara also highlights the societal impact and emotional connection that language models have created, the benefits and current safety concerns of universal models, and the significance of grounded conversations in characterizing and mitigating the risks of AI model development. Along the way, we dive deep into Cohere and Cohere For AI, including their Aya project, an open science effort to build a state-of-the-art multilingual generative language model, as well as some of their recent research papers.
The complete show notes for this episode can be found at twimlai.com/go/651.
Ensuring LLM Safety for Production Applications with Shreya Rajpal - #647
What’s Next in LLM Reasoning? with Roland Memisevic - #646
Is ChatGPT Getting Worse? with James Zou - #645
Why Deep Networks and Brains Learn Similar Features with Sophia Sanborn - #644
Inverse Reinforcement Learning Without RL with Gokul Swamy - #643
Explainable AI for Biology and Medicine with Su-In Lee - #642
Transformers On Large-Scale Graphs with Bayan Bruss - #641
The Enterprise LLM Landscape with Atul Deo - #640
BloombergGPT - an LLM for Finance with David Rosenberg - #639
Are LLMs Good at Causal Reasoning? with Robert Osazuwa Ness - #638
Privacy vs Fairness in Computer Vision with Alice Xiang - #637
Unifying Vision and Language Models with Mohit Bansal - #636
Data Augmentation and Optimized Architectures for Computer Vision with Fatih Porikli - #635
Mojo: A Supercharged Python for AI with Chris Lattner - #634
Stable Diffusion and LLMs at the Edge with Jilei Hou - #633
Modeling Human Behavior with Generative Agents with Joon Sung Park - #632
Towards Improved Transfer Learning with Hugo Larochelle - #631
Language Modeling With State Space Models with Dan Fu - #630
Building Maps and Spatial Awareness in Blind AI Agents with Dhruv Batra - #629
AI Agents and Data Integration with GPT and LLaMa with Jerry Liu - #628