In this episode I briefly explain the concept behind activation functions in deep learning. One of the most widely used activation functions is the rectified linear unit (ReLU).
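For reference, ReLU simply passes positive values through and clamps negative values to zero. A one-line version in PyTorch-style Python (the function name here is just illustrative):

```python
import torch

def relu(x: torch.Tensor) -> torch.Tensor:
    # rectified linear unit: keep positive values, clamp negatives to zero
    return torch.clamp(x, min=0)
```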
While there are several flavors of ReLU in the literature, in this episode I speak about Dynamic ReLU, a very interesting approach that keeps computational complexity low while improving performance quite consistently.
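The key idea in the Dynamic ReLU paper (referenced below) is to make the slopes and intercepts of a piecewise-linear activation a function of the input itself, computed by a small hyper-network on globally pooled features. Below is a minimal PyTorch sketch of that idea, assuming a channel-wise variant; the class name, the reduction ratio and the initialization details are my own illustrative choices, not code from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicReLU(nn.Module):
    """Sketch of a dynamic ReLU: the piecewise-linear coefficients are
    predicted from the input via a small squeeze-and-excitation style
    hyper-network, so the activation adapts to each input at low extra cost."""

    def __init__(self, channels: int, k: int = 2, reduction: int = 4):
        super().__init__()
        self.k = k
        # hyper-network: global average pooling -> two small fully connected layers
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, 2 * k * channels),
        )
        # residual scales and initial coefficients so the module behaves
        # roughly like a standard ReLU early in training
        self.register_buffer("lambdas", torch.tensor([1.0] * k + [0.5] * k))
        self.register_buffer("init_ab", torch.tensor([1.0] + [0.0] * (2 * k - 1)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (N, C, H, W)
        n, c, h, w = x.shape
        context = F.adaptive_avg_pool2d(x, 1).view(n, c)       # squeeze: (N, C)
        theta = torch.sigmoid(self.fc(context)) * 2.0 - 1.0    # values in (-1, 1)
        theta = theta.view(n, c, 2 * self.k)
        # residual parameterization around the ReLU-like initialization
        coeffs = self.init_ab + self.lambdas * theta            # (N, C, 2k)
        a = coeffs[..., : self.k].view(n, c, self.k, 1, 1)      # slopes
        b = coeffs[..., self.k :].view(n, c, self.k, 1, 1)      # intercepts
        # y = max_k (a_k * x + b_k), computed per channel and position
        y = a * x.unsqueeze(2) + b                               # (N, C, k, H, W)
        return y.max(dim=2).values

# usage: same shape in, same shape out
act = DynamicReLU(channels=64)
out = act(torch.randn(8, 64, 32, 32))
```

The only extra cost is the small hyper-network applied to pooled features, which is why the overhead stays low compared to a plain ReLU.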
This episode is supported by pryml.io. At pryml we let companies share confidential data. Visit our website.
Don't forget to join us on our Discord channel to propose new episodes or discuss previous ones.
References
Dynamic ReLU: https://arxiv.org/abs/2003.10027