637: SEGA Christmas Special 25
Mike's Year End Post
Mike on LinkedIn
Mike's Blog
Show on Discord
Alice Promo

Dreamcast assorted references:
Dreamcast overview https://sega.fandom.com/wiki/Dreamcast
History of Dreamcast development https://segaretro.org/History_of_the_Sega_Dreamcast/Development
The Rise and Fall of the Dreamcast: A Legend Gone Too Soon (Simon Jenner) https://sabukaru.online/articles/he-rise-and-fall-of-the-dreamcast-a-legend-gone-too-soon
The Legacy of the Sega Dreamcast | 20 Years Later https://medium.com/@Amerinofu/the-legacy-of-the-sega-dreamcast-20-years-later-d6f3d2f7351c

Socials & Plugs
The R Podcast https://r-podcast.org/
R Weekly Highlights https://serve.podhome.fm/r-weekly-highlights
Shiny Developer Series https://shinydevseries.com/
Eric on Bluesky https://bsky.app/profile/rpodcast.bsky.social
Eric on Mastodon https://podcastindex.social/@rpodcast
Eric on LinkedIn https://www.linkedin.com/in/eric-nantz-6621617/
636: Red Hat's James Huang
Links
James on LinkedIn
Mike on LinkedIn
Mike's Blog
Show on Discord
Alice Promo

AI on Red Hat Enterprise Linux (RHEL)
Trust and Stability: RHEL provides the mission-critical foundation needed for workloads where security and reliability cannot be compromised.
Predictive vs. Generative: Acknowledging the hype of GenAI while maintaining support for traditional machine learning algorithms.
Determinism: The challenge of bringing consistency and security to emerging AI technologies in production environments.

RamaLama & Containerization
Developer Simplicity: RamaLama helps developers run local LLMs easily without being "locked in" to specific engines; it supports Podman, Docker, and various inference engines like Llama.cpp and Whisper.cpp.
Production Path: The tool is designed to "fade away" after helping package the model and stack into a container that can be deployed directly to Kubernetes.
Behind the Firewall: Addressing the needs of industries (like aircraft maintenance) that require AI to stay strictly on-premises.

Enterprise AI Infrastructure
Red Hat AI: A commercial product offering tools for model customization, including pre-training, fine-tuning, and RAG (Retrieval-Augmented Generation).
Inference Engines: James highlights the difference between Llama.cpp (for smaller/edge hardware) and vLLM, which has become the enterprise standard for multi-GPU data center inferencing.
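The local-LLM workflow discussed in this episode can be sketched from the command line. This is a minimal sketch, assuming a current RamaLama install and a model name that resolves from its default registries; exact subcommands and flags may vary by version, so check `ramalama --help` before relying on them:

```shell
# Run a local model interactively; RamaLama detects a container
# runtime (Podman or Docker) and selects an inference engine,
# so the developer is not tied to one engine.
ramalama run tinyllama

# Serve the same model behind a local HTTP endpoint instead of
# an interactive session, for apps to call on-premises.
ramalama serve tinyllama
```

This matches the "fade away" idea: once the model and inference stack are packaged as a container image, that image can be deployed to Kubernetes with ordinary manifests, and the tool itself drops out of the production path.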
635: Tabnine's Eran Yahav
Tabnine
Eran on LinkedIn
Alice for Snowflake
Mike on X
Coder on X
Show Discord
Alice & Custom Dev
634: MongoDB's Frank Pachot
Frank on LinkedIn
MongoDB
Alice for Snowflake
Mike on X
Coder on X
Show Discord
Alice & Custom Dev
Mike's Recent Omakub Blog Post
633: Hotwire Native with Joe Masilotti
Joe on LinkedIn
Joe's Blog
Joe on X
Alice for Snowflake
Mike on X
Mike on Bluesky
Coder on X
Show Discord
Alice & Custom Dev
Mike's Recent Omakub Blog Post