Over the past few years, neural networks have re-emerged as powerful machine-learning models, reaching state-of-the-art results in several fields such as image recognition and speech processing. More recently, neural network models have also been applied to textual data in order to deal with natural language, with promising results there as well. In this episode I explain why deep learning performs the way it does, and what some of the most tedious causes of failure are.
Episode 59: How to fool a smart camera with deep learning
Episode 58: There is physics in deep learning!
Episode 57: Neural networks with infinite layers
Episode 56: The graph network
Episode 55: Beyond deep learning
Episode 54: Reproducible machine learning
Episode 53: Estimating uncertainty with neural networks
Episode 52: why do machine learning models fail? [RB]
Episode 51: Decentralized machine learning in the data marketplace (part 2)
Episode 50: Decentralized machine learning in the data marketplace
Episode 49: The promises of Artificial Intelligence
Episode 48: Coffee, Machine Learning and Blockchain
Episode 47: Are you ready for AI winter? [Rebroadcast]
Episode 46: why do machine learning models fail? (Part 2)
Episode 45: why do machine learning models fail?
Episode 44: The predictive power of metadata
Episode 43: Applied Text Analysis with Python (interview with Rebecca Bilbro)
Episode 42: Attacking deep learning models (rebroadcast)
Episode 41: How can deep neural networks reason
Episode 40: Deep learning and image compression