Some of the most powerful NLP models, such as BERT and GPT-2, have one thing in common: they are all based on the transformer architecture.
This architecture is in turn built on top of another concept already well known to the community: self-attention.
In this episode I explain what these mechanisms are, how they work, and why they are so powerful.
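For the curious, here is a minimal sketch of the scaled dot-product self-attention at the heart of the transformer (the softmax(QK^T / sqrt(d_k))V formulation from Vaswani et al., 2017). The function and variable names, matrix sizes, and random toy inputs are purely illustrative, not taken from the episode.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model) token embeddings.
    # Queries, keys, and values are linear projections of the
    # *same* input sequence -- hence "self"-attention.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    # Each token scores every other token, scaled by sqrt(d_k)
    # to keep the softmax in a well-behaved range.
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)
    # Output: one context-aware vector per token.
    return weights @ V

# Toy example: 4 tokens, 8-dimensional embeddings (hypothetical sizes).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```

A full transformer stacks many such layers (with multiple attention heads, residual connections, and feed-forward blocks), but the core idea is just this weighted mixing of the sequence with itself.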
Don't forget to subscribe to our newsletter or join the discussion on our Discord server.