TestGuild Automation Podcast

https://testtalks.libsyn.com/rss
185 Followers · 589 Episodes
TestGuild Automation Podcast (formerly Test Talks) is a weekly podcast hosted by Joe Colantonio that geeks out on all things software test automation. TestGuild Automation covers news from the testing space, reviews books about automation, and interviews some of today's most successful and inspiring software engineers and test automation thought leaders.

Episode List

Scaling Quality Engineering: How to Deliver Faster Across Global Teams with Sunita McCoy

Apr 7th, 2026 4:05 PM

AI is changing how we build and test software, but most teams are struggling to turn that promise into real results. In this episode, we break down what it actually takes to scale quality engineering across global teams without creating bottlenecks, burnout, or broken processes.

You'll learn:
  • Why most test automation and transformation initiatives fail
  • How to separate AI hype from reality
  • What high-performing teams are doing differently to ship faster with confidence

Today's expert, Sunita McCoy, a Global Engineering Leader and Transformation Specialist, shares practical insights from leading large-scale engineering transformations, including:
  • How to build a culture that supports AI adoption
  • Why "quality as a phase" is dead
  • How to shift toward treating quality as a product

If you're a QA leader, automation engineer, or DevOps professional trying to improve reliability, reduce risk, and future-proof your skills in the age of AI, this episode gives you a clear path forward.

Mobile Test Automation is Broken. Here's How QApilot Fixes It with Aditya Challa

Mar 31st, 2026 3:29 PM

Mobile test automation is still one of the biggest bottlenecks in modern software delivery. In this interview, QApilot's Co-founder Aditya Challa explains why most AI testing approaches fail and how to fix them.

Learn more about QApilot: https://links.testguild.com/flutterqa

If your mobile tests are flaky, slow, or hard to trust, you're not alone. Most teams are trying to apply LLM-based AI to problems that actually require deterministic reliability, and that's where things break down.

In this video, you'll learn:
  • Why mobile test automation breaks at scale
  • The real issue with "99% accurate" AI in testing
  • LLMs vs deterministic AI (and why it matters for mobile apps)
  • How flaky tests destroy confidence in your pipeline
  • How QApilot approaches mobile testing differently
  • What reliable, scalable mobile automation should look like

What this means for you: fewer false positives, faster releases, and mobile tests you can actually trust.

Chapters:
  00:00 Why Mobile Test Automation Is Still Broken
  01:10 QApilot Overview
  01:51 Why Mobile Testing Tools Fail
  03:13 Why Appium Isn't Enough
  05:09 QApilot's Approach to Mobile Testing
  07:10 Scaling Mobile Testing Across Devices
  08:02 Autonomous Testing + Human in the Loop
  10:55 How QApilot Works (Architecture + Agents)
  13:45 Real Example: Mobile App Crawling in Action
  16:31 Finding Bugs Automatically (Performance + Accessibility)
  18:52 Device Farms & Real Device Testing
  21:50 Future of Mobile Testing (SRE + AI + Quality Layer)
  27:06 Real Customer Results & Case Study
  31:02 Why QApilot Focuses Only on Mobile
  34:04 Where QApilot Fits in CI/CD
  36:00 How to Try QApilot + Final Advice

AI Testing: How Solo Testers Stay Confident in Releases with Christine Pinto

Mar 25th, 2026 7:31 AM

Are you the only tester on your team, and expected to ensure quality across everything? In this episode, we break down the growing challenge of solo QA testing in the age of AI-driven development, where code is generated faster than ever but confidence hasn't caught up. Christine Pinto shares real-world insights from her experience as a solo tester and now as a founder building tools designed to help testers reduce risk, collaborate better, and make smarter release decisions.

You'll learn:
  • Why "all tests passing" doesn't mean your product is safe
  • The hidden risks of AI-generated code and test automation
  • How to shift from test coverage to risk-based testing
  • Practical ways solo testers can avoid burnout and isolation
  • How to bring collaboration back into QA, even if you're the only tester
  • Why better requirements still matter more than better AI

AI Testing from Production Logs: Generate Smarter Regression Tests with Tanvi Mittal

Mar 17th, 2026 9:56 PM

What if your production logs could automatically generate new test cases? In this episode, Joe Colantonio sits down with Tanvi Mittal to break down how AI-powered log mining is changing the way teams approach software testing, quality engineering, and DevOps.

Most teams ignore production logs or use them only for debugging. But those logs contain real user behavior, real failures, and real edge cases, the exact scenarios your test suite is probably missing.

Learn how to:
  • Convert production logs into automated regression tests
  • Use AI to detect real-world failure patterns
  • Apply shift-right testing to catch bugs earlier (and smarter)
  • Handle the challenge of testing non-deterministic AI systems
  • Reduce flaky tests and automation debt with real data

If you're working with Playwright, Selenium, Cypress, or AI-driven testing tools, this episode will give you a completely new way to think about test coverage.

AI Testing: How to Ensure Quality in Non-Deterministic Systems with Adam Sandman

Mar 10th, 2026 10:33 PM

How do you ensure software quality when the system you're testing doesn't give the same output twice? That's the core challenge facing every QA team building or testing AI-powered applications today, and it's breaking all the rules we've relied on for decades. In this episode of the TestGuild Automation Podcast, I sit down with Adam Sandman, co-founder of Inflectra, to dig into what non-deterministic AI testing actually means in practice, why traditional pass/fail testing no longer cuts it, and what quality professionals need to do differently right now.

We cover:
  • Why AI-generated code is raising the stakes for QA teams while budgets stay flat
  • The fundamental difference between deterministic and non-deterministic systems, and why it changes everything about how you test
  • How to set acceptable risk thresholds for AI systems (hint: it depends on whether you're building an e-commerce chatbot or an air traffic control system)
  • Why testers who embrace AI as a tool, not a threat, will be the ones leading their organizations forward
  • How a live demo failure at a conference inspired Inflectra's new non-deterministic testing tool, SureWire

If you're a tester, QA manager, or automation engineer trying to figure out how to keep up with AI-driven development without losing your mind (or your job), this one's for you.
