In the last episode, How to master optimisation in deep learning, I explained some of the most challenging tasks in deep learning, together with methodologies and algorithms that improve the speed of convergence of minimisation methods for deep learning.
I explored the family of gradient descent methods, though not exhaustively, listing the approaches that deep learning researchers consider for different scenarios. Every method has its own benefits and drawbacks, depending largely on the type of data and its sparsity. But one method seems to be, at least empirically, the best approach so far.
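To make the contrast between these methods concrete, here is a minimal sketch, assuming PyTorch, that trains the same toy model once with plain SGD with momentum and once with Adam. Adam is only my assumption for the unnamed "empirically best" method, since this text does not identify it; the model and data are placeholders.

```python
# A minimal sketch, assuming PyTorch: the same toy regression problem
# trained with SGD + momentum and with Adam. Adam stands in for the
# unnamed "empirically best" method as an assumption.
import torch
import torch.nn as nn

def train(optimizer_name="adam", steps=200):
    torch.manual_seed(0)
    X = torch.randn(256, 10)        # toy inputs
    y = X @ torch.randn(10, 1)      # toy linear targets
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

    if optimizer_name == "sgd":
        opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    else:
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

print("SGD :", train("sgd"))
print("Adam:", train("adam"))
```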
In this episode I would like to continue that conversation about additional strategies for optimising gradient descent in deep learning, and introduce some tricks that may come in handy when your neural network stops learning from data, or when the learning process becomes so slow that it seems to have reached a plateau even when fed fresh data. One such trick is sketched below.
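As one concrete example of a plateau remedy, here is a minimal sketch, again assuming PyTorch, that shrinks the learning rate when the validation loss stops improving, using torch.optim.lr_scheduler.ReduceLROnPlateau. The model, data, and hyperparameters are placeholders, not the episode's specific recipe.

```python
# A minimal sketch of learning-rate reduction on plateau, assuming
# PyTorch; model, data, and hyperparameters are placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)
X_train, y_train = torch.randn(512, 10), torch.randn(512, 1)
X_val, y_val = torch.randn(128, 10), torch.randn(128, 1)

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
# Halve the learning rate when validation loss has not improved
# for 5 consecutive epochs.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    opt, mode="min", factor=0.5, patience=5)
loss_fn = nn.MSELoss()

for epoch in range(50):
    model.train()
    opt.zero_grad()
    loss = loss_fn(model(X_train), y_train)
    loss.backward()
    opt.step()

    model.eval()
    with torch.no_grad():
        val_loss = loss_fn(model(X_val), y_val)
    scheduler.step(val_loss)  # the scheduler watches the validation loss
```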