Arrested DevOps

https://www.arresteddevops.com/episode/index.xml
409 Followers · 205 Episodes
Arrested DevOps is the podcast that helps you achieve understanding, develop good practices, and operate your team and organization for maximum DevOps awesomeness.

Episode List

How AI Is Changing the SDLC With Hannah Foxwell and Robert Werner

Oct 1st, 2025 1:36 PM

The Trust Problem Returns

Hannah Foxwell, who has spent over a decade in DevOps and platform engineering, draws a striking parallel to earlier transformations: “It used to be that testers didn’t trust developers and ops didn’t trust testers and there were all these silos. Now we’re putting AI agents in the mix. Can we trust them? Should we trust them?” This isn’t just déjà vu; it’s a fundamental challenge that resurfaces with every major shift in how we build software. As Robert Werner points out, management had to give up control and push trust to the edges of organizations during the agile transformation. With cloud adoption came self-service and automation. Now, with AI, we’re dealing with non-deterministic black boxes that we need to trust to be “right often enough.”

The Fluency Gap

One of the biggest challenges isn’t the technology itself; it’s the lack of shared understanding. Hannah launched “AI for the Rest of Us,” a community now with over 1,000 members, after realizing that AI fluency is essential for making good decisions about where and how to use these tools. “I went to a talk at a conference thinking I’d learn about AI in one talk and become an expert by tomorrow,” Hannah recalls. “It just didn’t happen like that. There’s a whole new domain with new vocabulary, new concepts, new techniques.” The community focuses on making AI accessible without dumbing it down, providing talks and content that explain complex concepts in simple language so more people can participate in the conversation about AI’s role in software development.

The Speed-Responsibility Paradox

The technology is evolving so rapidly that best practices barely have time to solidify before they’re obsolete. Robert describes how hiring strategies at startups are changing every few weeks as new capabilities emerge. “Things that weren’t feasible last week are suddenly possible,” he notes. But this speed creates a dangerous tension. Organizations are pushing hard for AI adoption while the guardrails, workflows, and cultural practices needed to use it safely are still being figured out. As Matty observes, this leads to perverse incentives: developers required to “use AI” find ways to tick the box without actually deriving value, just like teams that once added meaningless tests to meet sprint requirements.

Who Owns the Code?

A critical question emerges: if AI generates the code, who owns it? Who’s responsible when something goes wrong? Hannah frames it in familiar DevOps terms: “Does anybody really want to own a service if they didn’t write it and they don’t understand how it works? It’s the ops challenge again: AI throwing code over the wall to us.” Robert’s answer is pragmatic and honest: humans will need to take responsibility for validating AI-generated code, even if it’s tedious work most developers won’t enjoy. His company, Leap, is building tools specifically to make that verification process as convenient and enjoyable as possible, because he believes there’s simply no other way to do it safely.

The Documentation Double-Bind

There’s an ironic twist in how AI agents work best: they need excellent documentation. Organizations improving their documentation to support AI-powered development are inadvertently following DevOps best practices that benefit human developers too. But as Matty discovered building his own project, AI-generated documentation can be dangerously unreliable. The tools will confidently document features that don’t exist, pulling from incomplete PRDs or speculative notes in the codebase. Great documentation trains better agents, but agents shouldn’t write that documentation, a tension that requires human judgment and oversight.

Lessons from Past Transformations

The parallels to earlier shifts are instructive. Hannah remembers enterprise clients who insisted continuous delivery would “never work here.” Now it’s common practice. The same resistance appeared with cloud adoption and agile methodologies. What worked then still matters now:

  • Guardrails enable freedom: Constraints and safety nets let people explore confidently.
  • Make the right way the easy way: Transformations succeed when good practices are more convenient than bad ones.
  • Community and shared learning: Success stories and failures shared openly help everyone navigate change faster.
  • Start with good practices: Teams with solid engineering fundamentals (blue-green deployments, A/B testing, safe-to-fail production environments) are better positioned to benefit from AI-assisted development.

Practical Advice for Explorers

For developers and teams trying to navigate this transformation, Hannah and Robert offer grounded guidance:

  • Keep your eyes open: Watch for patterns of success and failure. Who’s making this work, and what do they have in common?
  • Build community: Find or create spaces where people can share honestly about what’s working and what isn’t, without the pressure to pretend everything’s perfect.
  • Be selective about information sources: With so much noise and hype, focus on quality outlets. Ignore things for a few weeks, and if they keep coming up, that’s when to invest your time.
  • Practice regularly: The technology evolves so fast that hands-on experience goes stale quickly. Even if it’s not your main job, refresh your skills every few months.
  • Be specific and constrained: AI coding assistants work best with clear, narrow requests. Frustration comes from asking too much or being too vague.

The Future We’re Building

We’re in the Nokia phone stage of AI-assisted development, as Robert puts it: the technology will look completely different in just a few years. But rather than waiting passively for that future to arrive, developers and teams are actively creating it through the choices they make today about how to integrate these tools. The question isn’t whether AI will transform software development; it already is. The question is whether we’ll learn from past transformations to build better practices, stronger safety nets, and more trustworthy systems, or whether we’ll repeat old mistakes at unprecedented speed. As Hannah emphasizes, having more people with AI fluency means better conversations and better decisions at a pivotal moment in history. The rollercoaster is moving whether we’re ready or not. The best approach is to keep your eyes open, stay connected to community, and remain thoughtfully critical about what works and what doesn’t.

Learn more about Hannah’s work at AI for the Rest of Us. Use code ADO20 for 20% off tickets to their London conference on October 15-16, 2025.

Digging Into Security With Kat Cosgrove

Aug 25th, 2025 5:22 PM

Security: the one topic that’s guaranteed to turn any DevOps conversation into a mix of fear, eye rolls, and nervous laughter. In this episode of Arrested DevOps, Matty welcomes back Kat Cosgrove to talk about the “never not hot” world of security and why it’s always lurking just over your shoulder (like that one compliance auditor who swears they’re just “observing”).

Kat and Matty cover:

  • Why vulnerabilities never seem to stop showing up in your containers (spoiler: they don’t).
  • How teams can respond without spiraling into full-blown panic.
  • The realities of securing Kubernetes and containerized environments (without pretending there’s a magic “easy button”).
  • Why security culture matters as much as the tools you’re using.

Along the way, expect the usual mix of snark, sarcasm, and the occasional tangent about how everything in tech eventually becomes a security problem. If you’ve ever patched the same vulnerability three times in a week, or found yourself yelling at a CVE like it’s a personal enemy, this one’s for you.

AI, Ethics, and Empathy With Kat Morgan

Jun 3rd, 2025 10:18 AM

We’ve all been there: burning out on volatile tech jobs, tangled in impossible systems, and wondering what our work actually means. On this episode of Arrested DevOps, Matty Stratton sits down with Kat Morgan for a heartfelt, funny, and sharply observant conversation about AI: what it helps with, what it hurts, and how we navigate all of that as humans in tech. They dive deep into how large language models (LLMs) both assist and frustrate us, the ethics of working with machines trained on the labor of others, and why staying kind, to the robots and to ourselves, might be one of the most important practices we have.

“We actually have to respect our own presence enough to appreciate that what we put out in the world will also change ourselves.” – Kat Morgan

Topics

  • Why strong opinions about AI often miss the nuance
  • Using LLMs to support neurodivergent workflows (executive function as a service!)
  • Treating agents like colleagues and the surprising benefits of that mindset
  • Code hygiene, documentation, and collaborating with AI in GitHub issues
  • Building private, local dev environments to reduce risk and improve trust
  • Ethical tensions: intellectual property, environmental impact, and the AI value chain
  • Why we should be polite to our agents, and what that says about how we treat people

Key Takeaways

  • AI isn’t magic, but it can be a helpful colleague. Kat shares how she uses LLMs to stay on task, avoid executive dysfunction, and manage complex projects with greater ease.
  • Good context design matters. When working with AI, things like encapsulated code, clean interfaces, and checklists aren’t just best practices. They’re vital for productive collaboration.
  • Skepticism is healthy. Kat reminds us that while AI can be useful, it also messes up. A lot. And without guardrails and critical thinking, it can become more of a liability than a partner.
  • Build humane systems. From privacy risks to climate concerns, this episode underscores that responsible AI use requires ethical intent, which starts with practitioners.

Open Communities With Andrew Zigler

Feb 1st, 2024 5:18 PM

Andrew Zigler (Mattermost) delves into the world of open-source development and the unique challenges faced by an "open-first" developer community. Andrew shares his deep insights into fostering collaboration, building trust, and navigating the intricate dynamics of open-source projects.

Machine Learning Ops With Chelsea Troy

Jan 18th, 2024 3:15 PM

Jessitron is joined by Chelsea Troy, Staff Data Engineer at Mozilla, and one of the all-around most interesting people in software today, to discuss staff engineering, machine learning operations, and maybe also surfing.
