In the last episode of 2019 I speak with Filip Piekniewski about some of the most noteworthy findings in AI and machine learning in 2019. As a matter of fact, the entire field of AI has been inflated by hype and claims that are hard to believe. Many of the promises made a few years ago have proven quite hard to achieve, if not impossible. Let's stay grounded and realistic about the potential of this amazing field of research, so as not to create disillusionment in the near future.
Join our Discord channel to discuss your favorite episodes and propose new ones.
I would like to thank all of you for supporting and inspiring us. I wish you a wonderful 2020!
Francesco and the team of Data Science at Home
Rust and deep learning with Daniel McKenna (Ep. 135)
Scaling machine learning with clusters and GPUs (Ep. 134)
What is data ethics? (Ep. 133)
A Standard for the Python Array API (Ep. 132)
What happens to data transfer after Schrems II? (Ep. 131)
Test-First Machine Learning [RB] (Ep. 130)
Similarity in Machine Learning (Ep. 129)
Distill data and train faster, better, cheaper (Ep. 128)
Machine Learning in Rust: Amadeus with Alec Mocatta [RB] (Ep. 127)
Top-3 ways to put machine learning models into production (Ep. 126)
Remove noise from data with deep learning (Ep. 125)
What is contrastive learning and why it is so powerful? (Ep. 124)
Neural search (Ep. 123)
Let's talk about federated learning (Ep. 122)
How to test machine learning in production (Ep. 121)
Why synthetic data cannot boost machine learning (Ep. 120)
Machine learning in production: best practices [LIVE from twitch.tv] (Ep. 119)
Testing in machine learning: checking deeplearning models (Ep. 118)
Testing in machine learning: generating tests and data (Ep. 117)
Why you care about homomorphic encryption (Ep. 116)