Zero Trust, Real Talk: A Conversation with Dr. Chase Cunningham
How do you know your cybersecurity investments are actually making you safer? That’s the question at the heart of the latest TechSpective Podcast episode, where Dr. Chase Cunningham—better known to many as “Dr. Zero Trust”—joins me for an unfiltered, candid conversation about the state of modern cybersecurity. And no, this isn’t a puff piece on policy frameworks or the latest silver bullet tool. If you’ve read Chase’s recent LinkedIn post “Misaligned Zero Trust Spend = 1999 Firewall FOMO, But Worse,” you already know where this is going: straight into the hard truths about how organizations are still getting Zero Trust fundamentally wrong.

In his post, Chase makes a blunt observation that became the foundation for our discussion: too many companies treat Zero Trust like a shopping list—buying products instead of outcomes. “If your ‘Zero Trust’ line items don’t move incident frequency, blast radius, or time to contain, you’re not buying security—you’re buying feelings.” That line stood out to me and was part of why I reached out to invite Chase to join me on the podcast.

No Silver Bullets, Just Smarter Questions

This isn’t an episode full of buzzwords or vendor shout-outs. It’s a reminder that there’s no shortcut around the work. Whether we’re talking about identity-anchored access control, microsegmentation, or reducing dwell time through automation, Chase repeatedly returns to a central theme: strategy over spectacle.

He compares some security spending habits to crash diets and “cyber fat pills”—quick fixes that sound great in a pitch deck but collapse under scrutiny. Just like with fitness, real security gains come from consistency, not gimmicks.

We also explore the often-overlooked relationship between breach economics and stock price behavior—another area where Chase has done deep research. The myth that a breach will destroy a brand? It’s more complicated than that. Sometimes (pro tip: most of the time) the dip is a buying opportunity, not a death sentence.
Why You Should Listen

If you’re a CISO, security architect, board member—or just someone trying to make sense of your security stack—this conversation will challenge your assumptions in all the right ways. It’s part therapy session, part strategy clinic, and entirely grounded in real-world experience. Check out the full episode:
Algorithms, Thought Leadership, and the Future of Digital Influence
It’s getting harder to have a “normal” conversation about content, social media, or visibility anymore—mostly because the rules keep changing while you're still mid-sentence. Just a few years ago, you could create a blog post, optimize it for SEO, promote it on Twitter (back when it was still Twitter and not a dumpster fire of right-wing conspiracy lunacy rebranded as X), and expect a decent number of eyeballs to land on it. That’s not the game anymore. Now we’re living in a world of algorithmic gatekeeping, AI-generated content slop, and platforms that are slowly morphing into echo chambers of their own making.

And as someone who spends a lot of time thinking, writing, and talking about tech, marketing, and cybersecurity, I wanted to have an actual conversation about what this means—beyond the usual recycled talking points. So, I invited Evan Kirstel onto the TechSpective Podcast to dig in.

If you’re not familiar with Evan, you should be. He’s one of the more influential voices in B2B tech media—part content creator, part live streamer, part analyst, part TV host, depending on the day. He’s also been doing this for a while, and more importantly, doing it well. That makes him a great sounding board for the increasingly murky topic of digital thought leadership.

One of the first things we talked about was the rise of formulaic, AI-generated content. You know the kind—it reads like it was built from a checklist of “engagement best practices,” and while it may technically be “on brand,” it’s rarely interesting. The irony, of course, is that the platforms boosting this kind of content are simultaneously rewarding quantity over quality, while drowning users in sameness.

From there, we explored how visibility really works in 2025. Hint: it’s no longer about who you know—it’s about which large language model knows you. If you’re not showing up in ChatGPT summaries or Google’s new generative answers, you’re basically invisible to a big chunk of your potential audience.
Which raises the question: how do you actually earn mindshare in a world where traditional SEO has been replaced by AI synthesis? We didn’t land on a one-size-fits-all answer—but we did agree on a few things. First, content that sounds like content for content’s sake? It’s dead. Thought leadership that merely echoes what 20 other people are already saying? Also dead. What works now is originality, consistency, and credibility—backed by actual lived experience.

Another key theme we unpacked: platforms. Everyone likes to say “meet your audience where they are,” but it’s harder than it sounds when the audience is splintered across LinkedIn, Reddit, YouTube, TikTok, and a dozen other niche platforms—each with its own expectations and formats. Evan shared how he tailors his content for each platform without diluting the message, and why companies that try to be “cool” without context usually fall flat.

I’ll also say this—this episode reminded me that high-quality conversations are still one of the most underutilized forms of content out there. When it’s not scripted or polished within an inch of its life, a good conversation can cut through the noise and resonate on a level most polished op-eds or templated videos never will.

So if you’re feeling stuck, wondering why your content isn’t landing like it used to, or trying to figure out how to show up where it matters—this episode is worth your time. Check out my conversation with Evan Kirstel on the TechSpective Podcast. And yes, we get into Gary Vaynerchuk, TikTok, zero-click search, and why it might be time to completely rethink your content strategy.
Shadow AI, Cybersecurity, and the Evolving Threat Landscape
The cybersecurity landscape never sits still—and neither do the conversations I aim to have on the TechSpective Podcast. In the latest episode, I sit down with Etay Maor, Chief Security Strategist at Cato Networks and a founding member of Cato CTRL, the company’s cyber threats research lab. Etay brings a rare mix of technical depth and practical perspective—something increasingly necessary as we navigate the murky waters of modern cyber threats.

This time, the conversation centers on the rise of Shadow AI—a topic gaining urgency but still underappreciated in many organizations. If Shadow IT was the quiet rule-breaker of the past decade, Shadow AI is its unpredictable, algorithmically supercharged cousin. It’s showing up in boardrooms, workflows, and marketing departments—often without security teams even knowing it’s there.

Here’s the thing: banning AI tools or blocking access doesn’t work. People find a way around it. We’ve seen this play out with cloud storage, collaboration tools, and other “unsanctioned” technologies. The same logic applies here. Etay and I explore why organizations need to move beyond a binary yes/no mindset and instead think in terms of guardrails, visibility, and enablement.

We also get into the tension between innovation and risk—how fear-based decision-making can put companies at a disadvantage, and why the bigger threat might be not using AI at all. That may sound counterintuitive coming from two people steeped in cybersecurity, but context matters. The risk of falling behind could be greater than the risk of exposure—if companies don’t take a strategic approach.

Naturally, the conversation expands into how threat actors are adapting AI for offensive purposes—crafting more convincing phishing emails, automating reconnaissance, and even gaming defensive AI tools. Etay shares sharp insights into how attackers use our own tools against us and what that means for the future of cybersecurity.
There’s also a philosophical thread woven throughout—questions about whether AI can truly be “original,” how human creativity intersects with machine learning, and what kind of ethical or regulatory frameworks might be needed (if any) to keep things from going off the rails. Etay brings both technical fluency and historical perspective to the discussion, making it a conversation that’s as grounded as it is thought-provoking.

This episode doesn’t veer into fear-mongering or hype. It stays real—examining where we are, where we’re headed, and how to make better decisions as the ground keeps shifting. Whether you’re in security, tech leadership, policy, or just curious about how AI is reshaping the digital battleground, this one’s worth your time.

Tune in to the latest TechSpective Podcast—now streaming on all major platforms. Share your thoughts in the comments below.
Agentic AI and the Art of Asking Better Questions
I’ve had a lot of conversations about AI over the past couple of years—some insightful, some overhyped, and a few that left me questioning whether we’re even talking about the same technology. But every now and then, I get the opportunity to sit down with someone who not only understands the technology but also sees its broader implications with clarity and honesty. This episode of the TechSpective Podcast is one of those moments.

Jeetu Patel, President and Chief Product Officer at Cisco, joins me for an unscripted, unfiltered conversation that covers more ground than I could have outlined in a set of pre-written questions. Actually, I did draft a set of pre-written questions. We just never used them.

Jeetu and I have known each other for a while, and this episode reflects the kind of conversation you only get with someone who’s deeply immersed in both the strategic and human sides of tech. It’s thoughtful. It’s philosophical. And it doesn’t pull punches.

At the center of our discussion is the concept of “agentic AI”—a term that’s being used more frequently, sometimes without much clarity. We unpack what it actually means, what it can realistically do, and how it differs from the wave of chatbots and content generators that came before it. More importantly, we talk about how these AI agents might change not just the tasks we automate, but how we think about work itself.

Of course, with any conversation about AI and the future of work comes the inevitable tension: what gets lost, what gets reimagined, and what still requires distinctly human judgment. Jeetu brings a nuanced take to this, rooted in his experience leading product innovation at one of the world’s largest tech companies. It’s not a conversation filled with predictions so much as it is a reframing of the questions we should be asking.

What stood out to me is how quickly we normalize the extraordinary. A technology that felt magical two years ago is now embedded in our daily workflows.
That speed of adoption changes the stakes. It means we need to be more deliberate—not just about what AI can do, but what we want it to do, and what we risk offloading too quickly.

We also touch on the philosophical implications. If AI agents really can handle more of the cognitive heavy lifting, what’s our role in the loop? Do we become editors? Overseers? Explorers of new frontiers? And how do we prepare for jobs that don’t exist yet, using tools that are evolving faster than we can document them?

I think this episode will resonate with anyone trying to navigate this moment—whether you’re in product development, policy, marketing, or just someone who likes to think a few moves ahead. It’s about more than AI. It’s about how we adapt, how we define value, and what we choose to hold onto as the landscape shifts.

Give it a listen. And as always, I’d love to hear your thoughts.
Building Security for a World That’s Already Changed
There’s a question I’ve been sitting with lately: Are we prepared for what AI is about to expose in our organizations—not just technically, but operationally?

In this episode of the TechSpective Podcast, I sit down with Kavitha Mariappan, Rubrik’s Chief Transformation Officer, to unpack some of the less flashy but arguably more urgent questions about enterprise security, AI readiness, and business continuity. If your organization is still treating identity as a login issue or AI as a future-state conversation, you might be missing the bigger picture.

Kavitha doesn’t speak in clichés. She’s been in the trenches—engineering, scaling go-to-market teams, and now helping steer one of the fastest-evolving players in the data security space. Her perspective is shaped by decades of experience, but her focus is very much on the now: how to operationalize resilience at a time when every system, process, and even person has become a potential attack vector.

One of the threads we pull on is the idea that resilience isn’t a fallback plan anymore—it’s the front line. And identity? That’s not just a security issue. It’s a dependency. If you can’t log in, you can’t recover. You can’t operate. You can’t pivot. The conversation touches on what it really means to build for resilience in a landscape where downtime isn’t just costly—it’s existential.

We also explore what I’ll loosely call “AI exposure therapy”—not in the sense of experimenting with new models or shiny tools, but in understanding how AI is forcing companies to confront their structural weaknesses. What used to be considered internal inefficiencies are now potential vectors of attack. Technical debt isn’t just a performance issue—it’s a risk multiplier.

Kavitha brings data to the table too—sharing insight from Rubrik Zero Labs on the alarming surge in identity-based attacks and why the majority of companies are still playing catch-up when it comes to securing what they can’t always see.
It’s a wake-up call, but not a hopeless one. What made this conversation stand out to me wasn’t just the subject matter, but the way Kavitha frames the questions we should be asking: How do we architect for a world that’s already in flux? How do we define AI transformation when most businesses are still digesting digital transformation? And perhaps most critically, what needs to change inside the organization before the tech can even do its job?

I won’t give away the full arc of the discussion, but here’s my pitch: If you’re leading, advising, or building for a company that handles sensitive data (hint: that’s all of us), this episode will challenge you to think differently about where resilience really begins—and what it’s going to take to build it into the DNA of your org.

Listen to or watch the full episode here: