Over the past few years, neural networks have re-emerged as powerful machine-learning models, reaching state-of-the-art results in several fields such as image recognition and speech processing. More recently, neural network models have also been applied to textual data in order to deal with natural language, there too with promising results. In this episode I explain why deep learning performs the way it does, and what some of the most tedious causes of failure are.
State of Artificial Intelligence 2022 (Ep. 196)
Improving your AI by finding issues within data pockets (Ep. 195)
Fake data that looks, feels, and behaves like production (Ep. 194)
Batteries and AI in Automotive (Ep. 193)
Collect data at the edge [RB] (Ep. 192)
Bayesian Machine Learning with Ravin Kumar (Ep. 191)
What is spatial data science? With Matt Forest from Carto (Ep. 190)
Connect. Collect. Normalize. Analyze. An interview with the people from Railz AI (Ep. 189)
History of data science [RB] (Ep. 188)
Artificial Intelligence and Cloud Automation with Leon Kuperman from Cast.ai (Ep. 187)
Embedded Machine Learning: Part 5 - Machine Learning Compiler Optimization (Ep. 186)
Embedded Machine Learning: Part 4 - Machine Learning Compilers (Ep. 185)
Embedded Machine Learning: Part 3 - Network Quantization (Ep. 184)
Embedded Machine Learning: Part 2 (Ep. 183)
Embedded Machine Learning: Part 1 (Ep. 182)
History of Data Science (Ep. 181)
Capturing Data at the Edge (Ep. 180)
[RB] Composable Artificial Intelligence (Ep. 179)
What is a data mesh and why it is relevant (Ep. 178)
Environmentally friendly AI (Ep. 177)