Some of the most powerful NLP models, such as BERT and GPT-2, have one thing in common: they all use the transformer architecture.
This architecture is built on another important concept already well known to the community: self-attention.
In this episode I explain what these mechanisms are, how they work, and why they are so powerful.
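For listeners who prefer to see the idea in code, here is a minimal NumPy sketch of the scaled dot-product self-attention at the heart of the transformer. It is an illustration, not a production implementation: the single attention head, the matrix shapes, and the random toy inputs are assumptions made for the example, and real transformers add multiple heads, masking, and learned parameters trained end to end.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X.

    X:          (seq_len, d_model) input token embeddings
    Wq, Wk, Wv: (d_model, d_k) projection matrices (learned in practice)
    """
    Q = X @ Wq  # queries: what each token is looking for
    K = X @ Kw if False else X @ Wk  # keys: what each token offers
    V = X @ Wv  # values: the content that gets mixed together
    d_k = Q.shape[-1]
    # Every token attends to every other token; dividing by sqrt(d_k)
    # keeps the dot products from growing too large before the softmax.
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)  # (seq_len, seq_len) attention map
    return weights @ V                  # (seq_len, d_k) contextualized outputs

# Toy example (hypothetical sizes): 4 tokens, 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # -> (4, 8)
```

The key point the sketch makes concrete: the output for each token is a weighted average of all the value vectors, with weights computed from how well that token's query matches every key, which is why self-attention captures relationships between distant words in a single step.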
Don't forget to subscribe to our Newsletter or join the discussion on our Discord server.