Practical AI: Machine Learning, Data Science
You might have heard a lot about AI code generation tools, but could LLMs and generative AI make our existing code better? In this episode, we sit down with Mike from TurinTech to hear about practical code optimization using AI "translation" of slow code into fast code. We learn about their process for accomplishing this, along with the impressive results they see when automated code optimization is run on existing open source projects.
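To make the idea concrete, here is a toy sketch (not TurinTech's actual method) of the kind of slow-to-fast rewrite an LLM-based optimizer might propose: same behavior, better algorithmic complexity.

```python
# Toy illustration of a "slow to fast" code translation.
# The function names are hypothetical, chosen for this example.

def common_items_slow(a, b):
    # O(n*m): scans the whole list b for every element of a
    return [x for x in a if x in b]

def common_items_fast(a, b):
    # O(n+m): membership checks against a set are constant time
    b_set = set(b)
    return [x for x in a if x in b_set]
```

Both functions return identical results; an automated optimizer would verify that equivalence (e.g. with the project's test suite) before accepting the faster version.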
Changelog++ members save 2 minutes on this episode because they made the ads disappear. Join today!
Show Notes:
Something missing or broken? PRs welcome!
Timestamps:
(00:07) - Welcome to Practical AI
(00:43) - Code optimizing with Mike Basios
(03:19) - Solving code
(07:24) - The AI code ecosystem
(10:41) - Other targets
(12:58) - AI rephrasing?
(15:28) - Sponsor: Changelog News
(16:40) - State of current models
(20:31) - Improvements to devs
(22:31) - Managing your AI intern
(25:09) - Custom LLM models
(29:49) - Biggest challenges
(33:19) - Hallucination & optimization
(35:42) - Test chaining?
(39:09) - LLM workflow
(41:25) - Most exciting developments
(43:40) - Looking forward to faster code
(44:14) - Outro
AI in the U.S. Congress
First impressions of GPT-4o
Full-stack approach for effective AI agents
Autonomous fighter jets?!
Private, open source chat UIs
Mamba & Jamba
Udio & the age of multi-modal AI
RAG continues to rise
Should kids still learn to code?
AI vs software devs
Prompting the future
Generating the future of art & entertainment
YOLOv9: Computer vision is alive and well
Representation Engineering (Activation Hacking)
Leading the charge on AI in National Security
Gemini vs OpenAI
Data synthesis for SOTA LLMs
Large Action Models (LAMs) & Rabbits 🐇
Collaboration & evaluation for LLM apps
Advent of GenAI Hackathon recap