In this episode of 8minuteAI, Becka, Michael, and Garvin break down the magic behind Large Language Models (LLMs). From how LLMs predict the next word to why they sometimes get weird (mayonnaise in a PB&J sandwich, anyone?), we explore how these models work, why they’re powerful, and what makes them occasionally... hallucinate. Tune in to learn how LLMs really “think”, no computer science degree required.
And don’t forget to subscribe, share, and vote in our poll on Spotify or here.
Mentioned in the show:
OpenAI Tokenizer