Practical AI: Machine Learning, Data Science
Technology
Small changes in prompts can create large changes in the output behavior of generative AI models. Add to that the uncertainty around properly evaluating LLM applications, and you have a recipe for confusion and frustration. Raza and the Humanloop team have been digging into these problems, and in this episode, Raza helps us understand how non-technical prompt engineers can productively collaborate with technical software engineers while building AI-driven apps.
AI in the U.S. Congress
First impressions of GPT-4o
Full-stack approach for effective AI agents
Autonomous fighter jets?!
Private, open source chat UIs
Mamba & Jamba
Udio & the age of multi-modal AI
RAG continues to rise
Should kids still learn to code?
AI vs software devs
Prompting the future
Generating the future of art & entertainment
YOLOv9: Computer vision is alive and well
Representation Engineering (Activation Hacking)
Leading the charge on AI in National Security
Gemini vs OpenAI
Data synthesis for SOTA LLMs
Large Action Models (LAMs) & Rabbits 🐇
Advent of GenAI Hackathon recap