In the last episode, "How to master optimisation in deep learning", I explained some of the most challenging tasks in deep learning and presented methodologies and algorithms that improve the convergence speed of minimisation methods for deep learning.
I explored the family of gradient descent methods - though not exhaustively - listing approaches that deep learning researchers consider in different scenarios. Every method has its own benefits and drawbacks, depending largely on the type of data and its sparsity. But there is one method that seems to be, at least empirically, the best approach so far.
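The episode does not name that method here, but adaptive schemes such as Adam are the usual empirical favourites in this comparison. As a rough sketch of how a plain gradient-descent step differs from an Adam-style adaptive step (the hyperparameter values below are common illustrative defaults, not figures from the podcast):

```python
import math

def sgd_step(w, grad, lr=0.1):
    # Vanilla gradient descent: a fixed-size step against the gradient.
    return w - lr * grad

def adam_step(w, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # Adam-style adaptive step: exponential moving averages of the
    # gradient (m) and its square (v), with bias correction, so the
    # effective step size adapts per parameter.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

# Minimise the toy objective f(w) = w^2 (gradient 2w) with both rules.
w_sgd, w_adam, m, v = 5.0, 5.0, 0.0, 0.0
for t in range(1, 101):
    w_sgd = sgd_step(w_sgd, 2 * w_sgd)
    w_adam, m, v = adam_step(w_adam, 2 * w_adam, m, v, t)
```

On this trivial one-dimensional problem both rules converge towards zero; the practical differences show up on noisy, sparse, high-dimensional losses, where the adaptive per-parameter step of Adam tends to help.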
Feel free to listen to the previous episode, share it, re-broadcast it, or just download it for your commute.
In this episode I would like to continue that conversation with some additional strategies for optimising gradient descent in deep learning, and introduce you to some tricks that might come in handy when your neural network stops learning from data, or when the learning process becomes so slow that it really seems to have reached a plateau even when fed fresh data.
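One widely used trick when training plateaus is to shrink the learning rate once the loss stops improving. A minimal sketch of such a "reduce on plateau" heuristic (the function name and thresholds are illustrative, not from the episode; deep learning frameworks ship similar schedulers):

```python
def reduce_lr_on_plateau(losses, lr, patience=3, factor=0.5, min_delta=1e-4):
    """Halve the learning rate if the loss has not improved by at least
    min_delta over the last `patience` epochs; otherwise keep it."""
    if len(losses) <= patience:
        return lr
    best_before = min(losses[:-patience])   # best loss seen earlier
    recent_best = min(losses[-patience:])   # best loss in the window
    if recent_best > best_before - min_delta:
        return lr * factor                  # plateau detected: shrink lr
    return lr

# A stalled loss curve triggers the reduction; an improving one does not.
stalled_lr = reduce_lr_on_plateau([1.0, 0.9, 0.9, 0.9, 0.9], 0.1)
improving_lr = reduce_lr_on_plateau([1.0, 0.8, 0.6, 0.4, 0.2], 0.1)
```

The design choice here is the `patience` window: cutting the rate on a single bad epoch reacts to noise, while waiting a few epochs distinguishes a genuine plateau from normal fluctuation.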