Over the past few years, neural networks have re-emerged as powerful machine-learning models, reaching state-of-the-art results in several fields such as image recognition and speech processing. More recently, neural network models have also been applied to textual data in order to deal with natural language, there too with promising results. In this episode I explain why deep learning performs the way it does, and what some of the most tedious causes of failure are.
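As an illustration only (the episode itself is audio, so this sketch is an assumption, not the show's material), the following minimal NumPy example demonstrates one classic failure cause: vanishing gradients. In a deep stack of sigmoid layers, the gradient norm shrinks roughly geometrically with depth, so the early layers effectively stop learning. The network size and initialization constants are hypothetical.

```python
# Illustrative sketch (assumed setup, not taken from the episode):
# vanishing gradients in a deep stack of sigmoid layers, a classic
# cause of deep-learning training failure.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

depth, width = 20, 64          # hypothetical network size
# Naive small-variance initialization.
weights = [rng.normal(0.0, 0.1, size=(width, width)) for _ in range(depth)]

# Forward pass, caching activations for backpropagation.
a = rng.normal(size=width)
activations = [a]
for W in weights:
    a = sigmoid(W @ a)
    activations.append(a)

# Backward pass: push a unit gradient through the stack and watch its norm.
grad = np.ones(width)
for W, a in zip(reversed(weights), reversed(activations[1:])):
    grad = W.T @ (grad * a * (1.0 - a))   # chain rule; sigmoid' = a * (1 - a)
    print(f"gradient norm: {np.linalg.norm(grad):.3e}")
```

Running this prints gradient norms that collapse by several orders of magnitude over twenty layers, which is one reason remedies such as ReLU-style activations and careful initialization (see the Dynamic ReLU episode below) matter in practice.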
Why you care about homomorphic encryption (Ep. 116)
Test-First machine learning (Ep. 115)
GPT-3 cannot code (and never will) (Ep. 114)
Make Stochastic Gradient Descent Fast Again (Ep. 113)
What data transformation library should I use? Pandas vs Dask vs Ray vs Modin vs Rapids (Ep. 112)
[RB] It’s cold outside. Let’s speak about AI winter (Ep. 111)
Rust and machine learning #4: practical tools (Ep. 110)
Rust and machine learning #3 with Alec Mocatta (Ep. 109)
Rust and machine learning #2 with Luca Palmieri (Ep. 108)
Rust and machine learning #1 (Ep. 107)
Protecting workers with artificial intelligence (with Sandeep Pandya, CEO of Everguard.ai) (Ep. 106)
Compressing deep learning models: rewinding (Ep. 105)
Compressing deep learning models: distillation (Ep. 104)
Pandemics and the risks of collecting data (Ep. 103)
Why average can get your predictions very wrong (Ep. 102)
Activate deep learning neurons faster with Dynamic ReLU (Ep. 101)
WARNING!! Neural networks can memorize secrets (Ep. 100)
Attacks on machine learning models: inferring ownership of training data (Ep. 99)
Don't be naive with data anonymization (Ep. 98)
Why sharing real data is dangerous (Ep. 97)