Make Stochastic Gradient Descent Fast Again (Ep. 113)
Data Science at Home

2020-07-22

There is definitely room for improvement in the family of stochastic gradient descent algorithms. In this episode I explain a relatively simple method that has been shown to improve on the Adam optimizer. But watch out: this approach does not generalize well.
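The show notes do not spell out the method itself, but given the references below to "More descent, less gradient" and the Taylor series, here is a rough, illustrative sketch of the general idea: replace a fixed learning rate with a step size chosen from a second-order Taylor model of the loss along the descent direction. This is not the episode's exact algorithm, and all names in the snippet (loss_and_grad, taylor_step) are made up for the example.

```python
# Illustrative sketch only: a descent step whose size comes from a
# second-order Taylor (quadratic) model of the loss along -gradient,
# instead of a fixed learning rate as in plain SGD or Adam.
import numpy as np

def loss_and_grad(w, X, y):
    """Least-squares loss 0.5*||Xw - y||^2 and its gradient."""
    r = X @ w - y
    return 0.5 * r @ r, X.T @ r

def taylor_step(w, X, y, eps=1e-8):
    """Take one step along -gradient, with the step size that minimizes
    the quadratic Taylor model of the loss along that direction."""
    loss, g = loss_and_grad(w, X, y)
    d = -g  # descent direction
    # Estimate the curvature d^T H d with a finite difference of
    # gradients, so no explicit Hessian is ever formed.
    _, g_shift = loss_and_grad(w + eps * d, X, y)
    curvature = d @ (g_shift - g) / eps
    # Minimizer of the quadratic model: step = g^T g / (d^T H d).
    step = (g @ g) / max(curvature, 1e-12)
    return w + step * d, loss

if __name__ == "__main__":
    # Synthetic linear-regression problem, purely for demonstration.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    w_true = rng.normal(size=10)
    y = X @ w_true + 0.01 * rng.normal(size=200)

    w = np.zeros(10)
    for _ in range(20):
        w, loss = taylor_step(w, X, y)
    print("final loss:", loss)
```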

Join our Discord channel and chat with us.


References
  • More descent, less gradient
  • Taylor Series

