In this episode I continue the conversation from the previous one about failing machine learning models.
When data scientists have access to the distributions of the training and testing datasets, it is relatively easy to assess whether a model will perform equally well on both. But what happens with private datasets, where no access to the data can be granted?
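To make the public-data case concrete, here is a minimal sketch of one common way to compare a feature's distribution between the training and test sets, using a two-sample Kolmogorov-Smirnov test. This is an illustration of the general idea only; the episode does not prescribe a specific test, and the feature data below is synthetic.

```python
# Sketch: detect covariate shift in one feature by comparing its
# training and test distributions with a two-sample KS test.
# (Illustrative only; synthetic data, not a method from the episode.)
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=1000)
test_feature = rng.normal(loc=0.5, scale=1.0, size=1000)  # deliberately shifted

stat, p_value = ks_2samp(train_feature, test_feature)
if p_value < 0.05:
    print("Distributions differ: the model may not perform equally on both sets.")
else:
    print("No significant difference detected between the two distributions.")
```

With access to both datasets, a check like this per feature already flags most obvious train/test mismatches; the harder problem the episode tackles is doing something comparable when the data itself cannot be shared.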
At fitchain we might have an answer to this fundamental problem.
How to generate very large images with GANs (Ep. 76)
[RB] Complex video analysis made easy with Videoflow (Ep. 75)
[RB] Validate neural networks without data with Dr. Charles Martin (Ep. 74)
How to cluster tabular data with Markov Clustering (Ep. 73)
Waterfall or Agile? The best methodology for AI and machine learning (Ep. 72)
Training neural networks faster without GPU (Ep. 71)
Validate neural networks without data with Dr. Charles Martin (Ep. 70)
Complex video analysis made easy with Videoflow (Ep. 69)
[RB] AI and the future of banking with Chris Skinner (Ep. 68)
Classic Computer Science Problems in Python (Ep. 67)
More intelligent machines with self-supervised learning (Ep. 66)
AI knows biology. Or does it? (Ep. 65)
Get the best shot at NLP sentiment analysis (Ep. 64)
Financial time series and machine learning (Ep. 63)
AI and the future of banking with Chris Skinner (Ep. 62)
The 4 best use cases of entropy in machine learning (Ep. 61)
Predicting your mouse click (and a crash course in deep learning) (Ep. 60)
How to fool a smart camera with deep learning (Ep. 59)
There is physics in deep learning! (Ep. 58)
Neural networks with infinite layers (Ep. 57)