Masking, obfuscating, stripping, shuffling.
All of these techniques try to do one simple thing: keep data private while sharing it with third parties. Unfortunately, none of them is a silver bullet for confidentiality.
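As a rough sketch of why these techniques fall short, the snippet below applies masking, stripping, and shuffling to a toy dataset (the field names and records are hypothetical, chosen only for illustration). Even after the transformation, quasi-identifiers such as the ZIP code survive untouched, which is exactly the kind of residue that enables re-identification by linkage with an external dataset.

```python
import random

# Toy records standing in for a real dataset (hypothetical fields).
records = [
    {"name": "Alice", "ssn": "123-45-6789", "zip": "10001", "salary": 90000},
    {"name": "Bob",   "ssn": "987-65-4321", "zip": "94103", "salary": 72000},
]

def mask(value, keep=4):
    """Replace all but the last `keep` characters with asterisks."""
    return "*" * (len(value) - keep) + value[-keep:]

def naive_anonymize(rows, seed=0):
    rows = [dict(r) for r in rows]           # work on a copy
    for r in rows:
        r["ssn"] = mask(r["ssn"])            # masking
        del r["name"]                        # stripping a direct identifier
    salaries = [r["salary"] for r in rows]   # shuffling one sensitive column
    random.Random(seed).shuffle(salaries)
    for r, s in zip(rows, salaries):
        r["salary"] = s
    return rows

print(naive_anonymize(records))
```

Note that `zip` is still in the output: anyone holding an auxiliary table keyed on ZIP code can start re-linking rows, which is why these transformations alone do not guarantee confidentiality.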
All the players in the synthetic data space rely on simplistic techniques that are not secure, may not be compliant, and are risky for production.
At pryml we do things differently.
Why you care about homomorphic encryption (Ep. 116)
Test-First machine learning (Ep. 115)
GPT-3 cannot code (and never will) (Ep. 114)
Make Stochastic Gradient Descent Fast Again (Ep. 113)
What data transformation library should I use? Pandas vs Dask vs Ray vs Modin vs Rapids (Ep. 112)
[RB] It’s cold outside. Let’s speak about AI winter (Ep. 111)
Rust and machine learning #4: practical tools (Ep. 110)
Rust and machine learning #3 with Alec Mocatta (Ep. 109)
Rust and machine learning #2 with Luca Palmieri (Ep. 108)
Rust and machine learning #1 (Ep. 107)
Protecting workers with artificial intelligence (with Sandeep Pandya, CEO Everguard.ai) (Ep. 106)
Compressing deep learning models: rewinding (Ep. 105)
Compressing deep learning models: distillation (Ep. 104)
Pandemics and the risks of collecting data (Ep. 103)
Why average can get your predictions very wrong (Ep. 102)
Activate deep learning neurons faster with Dynamic ReLU (Ep. 101)
WARNING!! Neural networks can memorize secrets (Ep. 100)
Attacks on machine learning models: inferring ownership of training data (Ep. 99)
Why sharing real data is dangerous (Ep. 97)