In machine learning, and data science in general, it is very common to deal at some point with imbalanced datasets and class distributions. This is the typical case in which the number of observations belonging to one class is significantly lower than the number belonging to the other classes. It happens all the time, in several domains, from finance to healthcare to social media, just to name a few I have personally worked with.
Think about a bank detecting fraudulent transactions among millions or billions of daily operations, or equivalently in healthcare for the identification of rare disorders.
In genetics, but also with clinical lab tests, this is a normal scenario: fortunately, very few patients are affected by any given disorder, so there are very few positive cases with respect to the large pool of healthy (or unaffected) patients.
No algorithm takes the class distribution, or the number of observations in each class, into account unless it is explicitly designed to handle such situations.
In this episode I explain some effective techniques to handle imbalanced datasets, and how to match the most appropriate method to the dataset or problem at hand.
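As a concrete illustration of one family of such techniques (not necessarily the exact methods covered in the episode), here is a minimal sketch of random oversampling: the minority class is resampled with replacement until every class has as many observations as the majority class. The function name and the toy fraud-detection data are assumptions for the example.

```python
import random

def random_oversample(X, y, seed=0):
    """Balance a dataset by resampling each minority class
    with replacement until all classes match the majority count."""
    rng = random.Random(seed)
    # Group observations by class label
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    # Target size is the majority-class count
    target = max(len(rows) for rows in by_class.values())
    X_out, y_out = [], []
    for label, rows in by_class.items():
        # Keep originals, then draw extra samples with replacement
        resampled = rows + [rng.choice(rows) for _ in range(target - len(rows))]
        X_out.extend(resampled)
        y_out.extend([label] * len(resampled))
    return X_out, y_out

# Toy imbalanced data: 95 legitimate transactions vs 5 fraudulent ones
X = [[i] for i in range(100)]
y = [0] * 95 + [1] * 5
X_bal, y_bal = random_oversample(X, y)
print(y_bal.count(0), y_bal.count(1))  # → 95 95
```

Note that oversampling only duplicates minority observations; it does not add new information, which is why it is often combined with techniques such as synthetic sample generation or class-weighted loss functions.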