Extracting knowledge from large datasets with a large number of variables is always tricky. Dimensionality reduction helps in analyzing high-dimensional data while still preserving most of the information hidden behind the complexity. Here are some methods you should try before further analysis (Part 1).
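As a concrete illustration (not taken from the episode itself), the sketch below applies one common dimensionality-reduction method, PCA via scikit-learn, to a synthetic high-dimensional dataset; the random data, the 95% explained-variance threshold, and the choice of PCA are assumptions for demonstration only.

```python
# Minimal sketch: reducing a high-dimensional dataset with PCA.
# Assumes scikit-learn and purely numeric data; the episode may
# discuss other methods as well.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical data: 500 samples, 100 variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 100))

# Standardize first: PCA is sensitive to the scale of each variable.
X_scaled = StandardScaler().fit_transform(X)

# Keep enough components to explain ~95% of the variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X_scaled)

print(f"Reduced from {X.shape[1]} to {X_reduced.shape[1]} dimensions")
print(f"Explained variance kept: {pca.explained_variance_ratio_.sum():.2%}")
```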
Time to take your data back with Tapmydata (Ep. 156)
True Machine Intelligence just like the human brain (Ep. 155)
Delivering unstoppable data with Streamr (Ep. 154)
MLOps: the good, the bad and the ugly (Ep. 153)
MLOps: what it is and why it is important Part 2 (Ep. 152)
MLOps: what it is and why it is important (Ep. 151)
Can I get paid for my data? With Mike Andi from Mytiki (Ep. 150)
Building high-growth data businesses with Lillian Pierson (Ep. 149)
Learning and training in AI times (Ep. 148)
You are the product [RB] (Ep. 147)
Polars: the fastest dataframe crate in Rust - with Ritchie Vink (Ep. 146)
Apache Arrow, Ballista and Big Data in Rust with Andy Grove (Ep. 145)
Pandas vs Rust (Ep. 144)
Concurrent is not parallel - Part 2 (Ep. 143)
Concurrent is not parallel - Part 1 (Ep. 142)
Backend technologies for machine learning in production (Ep. 141)
You are the product (Ep. 140)
How to reinvent banking and finance with data and technology (Ep. 139)
What's up with WhatsApp? (Ep. 138)
Is Rust flexible enough for a flexible data model? (Ep. 137)