Today we’re joined by Sara Hooker, director at Cohere and head of Cohere For AI, Cohere’s research lab. In our conversation, we explore some of the challenges facing multilingual models, such as poor data quality and tokenization, and how researchers rely on data augmentation and preference training to address these bottlenecks. We also discuss the trade-offs and motivating factors behind the Mixture of Experts technique, and the importance of a common language between ML researchers and hardware architects for addressing pain points in frameworks and creating better cohesion between these distinct communities. Sara also highlights the impact of language models on society and the emotional connections they have created, the benefits and current safety concerns of universal models, and the significance of grounded conversations in characterizing and mitigating the risks of AI model development. Along the way, we dive deep into Cohere and Cohere For AI, their Aya project, an open science effort to build a state-of-the-art multilingual generative language model, and some of their recent research papers.
The complete show notes for this episode can be found at twimlai.com/go/651.