Today we’re joined by Tim Jurka, Head of Feed AI at LinkedIn.
As you can imagine, Feed AI is responsible for curating all the content you see daily on the LinkedIn site. What’s less apparent to those who don’t work on this type of product is the wide variety of opposing factors that must be weighed when organizing the feed. As you’ll learn in our conversation, Tim calls this the holistic optimization of the feed, and we discuss some of the interesting technical and business challenges associated with trying to achieve it. We also talk through some of the specific techniques used at LinkedIn, like multi-armed bandits and content embeddings, and jump into a really interesting discussion about organizing for machine learning at scale.
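For listeners who want a concrete feel for one of the techniques mentioned, here is a minimal, purely illustrative sketch of a multi-armed bandit using Thompson sampling to pick which type of content to surface. The content types, engagement rates, and reward model below are hypothetical assumptions for demonstration only, not a description of LinkedIn’s actual feed system.

```python
# Illustrative sketch only: a Thompson-sampling multi-armed bandit choosing
# among a few hypothetical feed content types. This is NOT LinkedIn's actual
# implementation; all names and numbers here are made up for demonstration.
import random

CONTENT_TYPES = ["article_share", "job_change", "connection_post"]  # hypothetical arms

# Beta(1, 1) priors per arm; successes ~ engagements, failures ~ skips.
successes = {arm: 1 for arm in CONTENT_TYPES}
failures = {arm: 1 for arm in CONTENT_TYPES}

def choose_arm():
    """Sample an engagement rate from each arm's posterior and pick the highest."""
    samples = {arm: random.betavariate(successes[arm], failures[arm])
               for arm in CONTENT_TYPES}
    return max(samples, key=samples.get)

def update(arm, engaged):
    """Update the chosen arm's posterior with the observed outcome."""
    if engaged:
        successes[arm] += 1
    else:
        failures[arm] += 1

# Simulated feedback loop with made-up per-type engagement rates.
true_rates = {"article_share": 0.12, "job_change": 0.20, "connection_post": 0.08}
for _ in range(5000):
    arm = choose_arm()
    update(arm, random.random() < true_rates[arm])

# Posterior mean engagement estimate per content type after the simulation.
print({arm: successes[arm] / (successes[arm] + failures[arm]) for arm in CONTENT_TYPES})
```

The appeal of this style of approach, as discussed in the episode, is that it balances exploring less-shown content types against exploiting the ones that are already known to perform well.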
We’d like to send a huge thanks to LinkedIn for sponsoring today’s show! LinkedIn Engineering solves complex problems at scale to create economic opportunity for every member of the global workforce. AI and ML are integral aspects of almost every product the company builds for its members and customers. LinkedIn’s highly structured dataset gives their data scientists and researchers the ability to conduct applied research to improve member experiences. To learn more about the work of LinkedIn Engineering, please visit https://engineering.linkedin.com/blog.
The complete show notes can be found at https://twimlai.com/talk/224.