Absolute AppSec

Absolute AppSec

https://absoluteappsec.com/rss.xml
23 Followers 317 Episodes Claim Ownership
A weekly podcast of all things application security related. Hosted by Ken Johnson and Seth Law.

Episode List

Episode 317 - (Post-RSAC/BSidesSF), Supply Chain Security, Future of SDLC

Mar 31st, 2026 6:00 PM

Ken Johnson and Seth Law reflect on the 2026 RSA Conference and BSidesSF, noting an industry-wide "awakening" regarding the high costs and engineering complexities of operationalizing AI security tools. A major focus is the recent "supply chain attack hell," specifically the compromise of the Axios HTTP client through dual-account breaches that allowed attackers to bypass legitimate OIDC deploy setups via a misconfigured NPM CLI. The malware used was particularly evasive, deleting itself and replacing its package.json with a clean version post-execution. The hosts also discuss the emergence of the "Agentic Development Lifecycle" (ADLC), where engineering teams are increasingly "committing on time" rather than features, creating a volume of code that traditional security gates cannot manage. They debate Thomas Ptacek’s thesis that AI agents will soon "supplant" human vulnerability research for common bug classes, shifting the human role toward high-level governance and "context infusion". Economically, they highlight how Anthropic's security announcements contributed to nearly half a trillion dollars in market value loss for traditional security firms, as investors increasingly bet on frontier models to consume established security domains.

Episode 316 - w/Coffee, Chaos, and ProdSec - Agentic Development Lifecycle

Mar 17th, 2026 6:00 PM

In episode 316 of Absolute AppSec, hosts Ken Johnson and Seth Law participate in a crossover with Kurt Hendle and Cameron Walters from the Coffee, Chaos, and ProdSec podcast to discuss the radical transformation of security roles in an AI-driven landscape. The guests share origin stories rooted in gaming and "mischievous" curiosity, which evolved into deep careers in security architecture and engineering. The primary discussion centers on the industry's shift toward an "Agentic Development Lifecycle" (ADLC), where the sheer volume of AI-generated code renders traditional manual review gates obsolete. This acceleration risks a "rubber stamp" culture where developers approve fixes in seconds rather than minutes, potentially leading to a mountain of technical debt. Consequently, the role of security is shifting from manual bug finding to high-level governance and "context infusion," requiring practitioners to manage AI agents that automate complex tasks. Economically, the group highlights how frontier model announcements have caused massive market volatility, wiping billions from traditional security stocks. Ultimately, they conclude that while older "primitive" tools are failing, professionals who lean into AI as a "superpower" for governance and oversight will be essential for navigating this new, non-deterministic reality.

Episode 315 - Risks of "AI-Native" Security Products, Rapid Software Development

Mar 3rd, 2026 6:00 PM

In episode 315 of Absolute AppSec, Ken Johnson and Seth Law discuss the rapidly evolving challenges of securing software in an era of AI-assisted development. The hosts provide updates on their "Harnessing LLMs for Application Security" training, noting that the field is changing so fast that they must constantly update their exercises to include new agents and advanced tools like Claude Code. A primary concern raised is the "naivete" of many new security tools, where prompts are often automatically generated by AI rather than expertly crafted, causing a loss of essential nuance. The hosts also warn against AI companies building security products without specialized expertise, citing a zero-click exploit in the "Comet" AI browser that could exfiltrate sensitive secrets via calendar summaries. As development teams now ship code at "AI speed," the hosts argue that traditional AppSec methods are too slow, necessitating a strategic pivot toward automated design reviews, governance, and observability rather than just chasing individual vulnerabilities. Despite the inherent risks and the ongoing difficulty of managing AI reasoning drift, they remain optimistic that these tools can eventually unlock more efficient, hands-off AppSec workflows if managed with proper guardrails and deterministic oversight.

Episode 314 - LLM AppSec Disruption, Limitations of AI in Security, AppSec Oversight

Feb 24th, 2026 6:00 PM

In this episode, the hosts discuss the seismic shift in the application security landscape triggered by the rise of Large Language Models (LLMs) and Anthropic’s "Claude Code". They highlight the massive economic repercussions of these AI advancements, noting that billions in market value were wiped from traditional cybersecurity stocks as investors begin to believe frontier models might eventually write perfectly secure code. The hosts critique the industry's historical reliance on "checkbox" compliance tools like SAST, DAST, and SCA, arguing that these "archaic" methods are being replaced by AI-native strategies capable of reasoning through complex logic flaws. While they acknowledge that AI can suffer from "reasoning drift" and still requires deterministic validation to avoid false positives, they emphasize that security professionals must adapt by building custom "skills" and focusing on governance and observability. The discussion concludes that as developers move to "AI speed," the traditional role of the AppSec professional is evolving into a "Jarvis-like" orchestrator who manages automated workflows and infuses institutional knowledge into AI agents to maintain oversight without slowing down production.

Episode 313 - AppSec Role Evolution, AI Skills & Risks, Phishing AI Agents

Feb 17th, 2026 6:00 PM

Ken Johnson and Seth Law examine the intensifying pressure on security practitioners as AI-driven development causes an unprecedented acceleration in industry velocity. A primary theme is the emergence of "shadow AI," where developers utilize unauthorized AI coding assistants and personal agents, introducing significant data classification risks and supply chain vulnerabilities. The discussion dives into technical concepts like AI agent "skills"—markdown files providing specialized directions—and the corresponding security risks found in new skill registries, such as malicious tools designed to exfiltrate credentials and crypto assets. The hosts also review 1Password’s SCAM (Security Comprehension Awareness Measure), highlighting broad performance gaps in an AI's ability to detect phishing, with some models failing up to 65% of the time. To manage these unpredictable systems, the hosts advocate for a shift toward high-level validation roles, emphasizing the need for Subject Matter Expertise to combat "reasoning drift" and maintain safety through test-driven development and periodic "checkpoints". Ultimately, they conclude that while AI can simulate expertise, human oversight remains vital to secure the probabilistic nature of modern agentic workflows.

Get this podcast on your phone, Free

Create Your Podcast In Minutes

  • Full-featured podcast site
  • Unlimited storage and bandwidth
  • Comprehensive podcast stats
  • Distribute to Apple Podcasts, Spotify, and more
  • Make money with your podcast
Get Started
It is Free