Continuing the discussion of the last two episodes, there is one more aspect of deep learning that I would love to cover, and that deserves a full episode of its own: parallelising and distributing deep learning on relatively large clusters.
As a matter of fact, computing architectures are changing in ways that encourage parallelism more than ever before, and deep learning is no exception. Despite the great improvements achieved with commodity GPUs (graphics processing units), there is still room for improvement when it comes to speed.
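To make this concrete, here is a minimal sketch of synchronous data-parallel training, the most common way to spread deep learning across the GPUs of a cluster. It uses PyTorch's DistributedDataParallel; the model, the synthetic data, and all hyperparameters are placeholders of mine, not something prescribed in the episode.

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def train(rank: int, world_size: int):
    # One process per GPU. After every backward pass, gradients are
    # all-reduced (averaged) across processes, so all replicas stay in sync.
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = torch.nn.Linear(128, 10).cuda(rank)        # placeholder model
    model = DDP(model, device_ids=[rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for _ in range(100):                               # placeholder loop, synthetic batch
        inputs = torch.randn(32, 128, device=f"cuda:{rank}")
        targets = torch.randint(0, 10, (32,), device=f"cuda:{rank}")
        loss = torch.nn.functional.cross_entropy(model(inputs), targets)
        optimizer.zero_grad()
        loss.backward()                                # gradient all-reduce happens here
        optimizer.step()

    dist.destroy_process_group()
```

In practice such a script is launched with a tool like torchrun, which spawns one process per GPU and sets the rendezvous environment variables; the point is simply that each worker trains on its own shard of the data while the framework keeps the model replicas synchronised.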
Together with the last two episodes, this one completes the picture of deep learning at scale. Indeed, as I mentioned in the previous episode, How to master optimisation in deep learning, the optimiser is the horsepower of deep learning and of neural networks in general. A slow and inaccurate optimisation method leads to networks that converge slowly to unreliable results.
In another episode, titled “Additional strategies for optimizing deep learning”, I explained some ways to improve function minimisation and model tuning in order to obtain better parameters in less time. So feel free to listen to those episodes again, share them with your friends, even re-broadcast them, or download them for your commute.
While the methods I have explained so far are a good starting point for prototyping a network, when you need to move to a production environment or take advantage of the most recent and advanced hardware capabilities of your GPU... in all those cases, you would like to do something more.
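As one example of that “something more”, recent GPUs can run large parts of a network in half precision. Below is a hedged sketch of automatic mixed precision with torch.cuda.amp; again, the network and the data are placeholders of mine, not anything the episode specifies.

```python
import torch

model = torch.nn.Linear(128, 10).cuda()           # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()              # rescales the loss to avoid fp16 underflow

for _ in range(100):                              # placeholder loop, synthetic batch
    inputs = torch.randn(32, 128, device="cuda")
    targets = torch.randint(0, 10, (32,), device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():               # forward pass in mixed precision
        loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    scaler.scale(loss).backward()                 # backward on the scaled loss
    scaler.step(optimizer)                        # unscales gradients, then steps
    scaler.update()
```

The gradient scaler exists because fp16 gradients can underflow to zero: scaling the loss up before the backward pass and unscaling before the optimiser step keeps small gradients representable.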