TechSpective Podcast

https://techspective.net/feed/podcast
186 Episodes
The TechSpective Podcast brings together top minds in cybersecurity, enterprise tech, AI, and beyond to share unique perspectives on technology—unpacking breakthrough trends like zero trust, threat intelligence, AI-enabled security, ransomware’s geopolitical ties, and more. Whether you’re an IT pro, security exec, or simply tech‑curious, each episode blends expert insight with real-world context—from microsegmentation strategies to the human side of cyber ethics. But we also keep it fun…

Episode List

Why Ransomware Should Be Getting Your Attention Again

Mar 25th, 2026 12:56 PM

Ransomware has been a persistent headline topic for years now, to the point where a lot of people have probably gotten numb to it. I know I had. It starts to feel like background noise — another attack, another breach, another company paying out. So when I sat down with Derek Manky, Chief Security Strategist and Global VP of Threat Intelligence at Fortinet, and he started walking through the numbers from Fortinet's latest Global Threat Landscape Report, it got my attention again. The data isn't background noise. It's a pretty clear signal that things are getting more serious, not less.

Derek has been tracking the threat landscape for over 25 years, 22 of them at Fortinet, where he leads the FortiGuard Labs threat intelligence team. That kind of tenure is rare in this industry, and it gives him a long view that's useful when you're trying to understand whether a trend is real or just noise. In this case, the ransomware numbers are real — and the reasons behind them are more interesting than the headlines usually get into.

Part of what we talked about is how the economics and tactics of cybercrime have shifted. It's not just that there are more attacks. It's that the attacks are more targeted, more deliberate, and increasingly supported by tools that make sophisticated operations accessible to a much wider pool of threat actors. The AI angle here is real, and Derek gets specific about what that actually looks like in practice — not in a theoretical sense, but in terms of tools that exist right now and what they cost.

There's also a metric from the report that I think should probably get more attention than it does. It has to do with how fast attackers move once a vulnerability becomes public knowledge. The window has gotten tight enough that some of the conventional wisdom around patching and response timelines doesn't really hold up anymore. We talked through what that means for defenders and what a more realistic approach looks like.

One thing I appreciated about the conversation is that Derek didn't make it all sound hopeless. There's a practical framework for thinking about defense that he walks through — one that accepts the reality that you're never going to eliminate all your risk, and focuses instead on identifying and closing the exposures that actually matter most. That's a more useful starting point for most organizations than trying to chase everything at once.

We also got into some of the work Fortinet does that goes beyond building security products — specifically around disrupting cybercriminal infrastructure and working with law enforcement and international partners to hold threat actors accountable. Derek mentioned something toward the end of the conversation that I hadn't heard before, a new initiative that takes a pretty different approach to gathering intelligence on cybercrime networks. Worth listening to.

And because it's the TechSpective Podcast, we did eventually go off-script. There was a brief Star Trek tangent. There were house plants. That's just how these go. The full episode is below. If you work in security or are responsible for making decisions about security at your organization, it's worth the time.

The Agentic AI Hype Is Real — But So Is the Confusion

Mar 23rd, 2026 12:05 PM

Everyone is talking about agentic AI. And that's part of the problem. Over the last couple of years, the term has gone the way of every other buzzword in tech — slapped onto products and platforms regardless of whether it actually applies. Marketing departments are busy, as Adi Kuruganti, Chief AI and Development Officer at Automation Anywhere, put it when we sat down to record the latest TechSpective podcast episode. And when marketing departments get busy, clarity tends to suffer.

Automation Anywhere has been in the automation space for over a decade. They helped create the Robotic Process Automation category, so Adi has a longer view on this than most. He knows what automation looked like before the AI wave hit, and he has a pretty specific definition of what an agent actually is — one that rules out a lot of what's currently being marketed as agentic AI.

That distinction has real consequences. When you're automating routine, low-stakes tasks, some ambiguity is tolerable. But when you're talking about healthcare workflows, financial processes, or anything touching sensitive customer data, the difference between a rules-based automation and a probabilistic AI agent matters. Getting that wrong isn't just a technical problem. It can be a compliance problem, a liability problem, or worse.

We also get into accountability. When an AI agent takes an action — reads a document, makes a decision, updates a record — who's responsible for that outcome? It's a question a lot of organizations are still working through, and the answer is more nuanced than it first appears. Adi has a clear perspective on this, shaped by what Automation Anywhere sees across its customer base of more than 5,000 enterprises.

Data privacy comes up, too. Giving an AI agent access to the context it needs to actually be useful means sharing information with it. But in regulated industries, that creates real constraints. How do you give an agent enough to work with without exposing data it shouldn't touch? It's a real problem for a lot of enterprises right now, and we talk through how organizations are navigating it.

And then there's the question of trust — specifically, how much autonomy you give an agent before a human needs to review what it's doing. The answer isn't as straightforward as "always have a human check the work." Adi makes a point here that I think a lot of people in the AI SOC space would recognize immediately.

If you've been following the agentic AI conversation and wondering how much of it is real versus noise, this episode is worth your time. Adi doesn't oversell where the technology is. He's direct about what still needs to mature before agentic process automation can scale the way people expect it to. And he knows the difference between a real shift and a rebranding exercise.

The TechSpective podcast is available on all major podcast platforms. You can also watch the full episode on YouTube.

Why Cloud Still Feels Harder Than It Should Be

Mar 18th, 2026 3:36 PM

Cloud was supposed to make everything easier. In some ways, it absolutely has. You can spin up infrastructure in minutes, scale on demand, and deploy globally without ever touching a piece of hardware. That’s a massive shift from the days when a “deployment” meant racking servers and hoping you sized things correctly six months in advance.

But if you’ve actually been in the trenches—even recently—you know the reality is a little messier. That’s where this episode of the TechSpective Podcast starts. I sat down with Harshit Omar, co-founder and CTO of FluidCloud, and we ended up digging into something that doesn’t get talked about enough: the gap between what cloud is supposed to be and what it actually looks like day to day.

Because “the cloud” isn’t really a thing anymore. It’s a collection of platforms that all do roughly the same things… just differently enough to make your life harder. AWS, Azure, Google Cloud—they all check the same boxes at a high level. Compute, storage, networking, databases. But once you get past that surface layer, the differences start to matter. The way services are structured, the way they’re configured, the way they behave—it’s not interchangeable. Not even close.

And that becomes a problem the moment you try to do anything beyond a single-cloud deployment. Multi-cloud sounds great in theory. Flexibility. Resilience. Avoiding vendor lock-in. All good things. But in practice, it usually means you’re juggling multiple sets of tools, multiple skill sets, and multiple ways of solving the same problem. Most teams don’t have deep expertise across multiple clouds. They might be strong in one. Maybe decent in a second. Beyond that, it gets thin fast. And even if you do have the talent, you’re still dealing with the reality that everything moves—constantly. New services, updated APIs, shifting best practices. What you knew a year ago doesn’t always map cleanly to what you’re doing today.

We also got into something I’ve seen play out over and over again—the tension between speed and control. Developers want to move fast. That’s their job. The cloud makes it easy to spin things up, try things out, and iterate quickly. But someone still has to manage cost, enforce security, and keep everything from turning into chaos. That responsibility doesn’t go away just because the infrastructure is abstracted.

The old world had its limitations, but it was predictable. You knew what you had because you could point to it. Now, your environment can change in real time, and not always in ways you expect. That’s powerful, but it’s also a little dangerous if you don’t have the right visibility and controls in place.

One of the more interesting parts of the conversation was looking ahead a bit—not in some five-year crystal ball way, but just where things seem to be heading. Right now, cloud providers still benefit from a certain amount of friction. Once you’re in, you’re kind of in. Moving workloads somewhere else is possible, but it’s not trivial. That friction keeps customers sticky.

But what happens if that changes? What happens if moving between clouds becomes easy enough that it’s just… a choice? That’s not a small shift. If organizations can move workloads without a ton of overhead, it forces cloud providers to compete differently. It’s no longer about who you’re locked into. It’s about who actually delivers the best experience, performance, and cost. We’re not fully there yet, but you can see the direction things are going.

This episode doesn’t try to wrap that up with a neat conclusion, because there isn’t one. It’s a real conversation about what’s working, what isn’t, and where things might be headed next. If you spend any time dealing with cloud, DevOps, or security, this will probably sound familiar.

Most Companies Are Still Just Playing With AI

Mar 13th, 2026 4:17 PM

Everybody's got an AI strategy. Every platform claims to be AI-powered. Every vendor deck has a slide about how their product uses machine learning to deliver transformative outcomes. Most of it is still theater.

I had a conversation with David DeSanto, CEO at Anaconda, recently for the TechSpective Podcast, and what struck me most was how honest he was about where enterprise AI actually stands. Not where vendors want it to be. Not where the headlines say it is. Where it actually is.

A lot of organizations have run pilots. Some have solid proof-of-concept projects. A handful have built internal tools that genuinely save teams time. But very few have moved AI into real production across the business. There's a big difference between "we're experimenting" and "this is how we work now." That gap is where most companies are stuck — and it's not because the technology doesn't work. The demo almost always looks good. A model produces useful output. A prototype saves someone a few hours. The problem shows up when you try to scale that across an enterprise environment. Suddenly you're dealing with data governance questions, security concerns, reliability issues, and a fundamental trust problem: can we actually rely on what this thing produces? Those issues don't show up in the demo.

Open source plays an interesting role here. It's always been central to the data science world, and that hasn't changed. Developers and data scientists are still experimenting constantly — new models, new frameworks, new workflows. Open ecosystems make that possible. But they also create real headaches for organizations trying to manage dependencies, maintain security, and keep things consistent across teams. Innovation versus governance. That's the tension nobody has fully figured out yet.

Something else worth noting: AI is changing what technical expertise actually means. Tasks that required specialized skills a few years ago can now be partially automated. That sounds like it should reduce the need for expertise — but it mostly just moves where that expertise matters. Technical teams spend less time writing code from scratch and more time framing problems, evaluating outputs, and validating results. Knowing how to ask the right question — or spot when an AI's answer is subtly wrong — can matter more than generating the answer in the first place. That's a real shift in how those jobs work, and most organizations are still figuring out how to adapt.

Trust is the underlying issue running through all of this. Organizations can't treat AI like a magic box that produces correct answers. They need to understand how models work, how their data is being used, and how outputs are generated. Without that visibility, it's hard to rely on AI for anything that actually matters.

And the challenge isn't really technical. The technology works well enough. What's hard is building the infrastructure, governance, and culture around it — getting security teams, data scientists, developers, and business leaders to actually work together instead of operating in separate lanes. That collaboration doesn't happen naturally. It has to be built deliberately.

AI also tends to change the process, not just speed it up. Teams aren't just doing the same work faster — they're working differently, exploring problems differently, testing ideas differently. Machines are becoming collaborators in that process rather than just tools. Adapting to that takes time.

The organizations that figure it out won't be the ones with the most advanced AI technology. They'll be the ones that put in the unglamorous work — governance frameworks, cross-team alignment, careful validation of what the AI actually produces. That's less exciting than the vendor pitch. But it's closer to what real progress looks like.

Rethinking Cybersecurity For A World Of AI And Machine Identities

Mar 10th, 2026 4:45 PM

I spend a lot of time talking with people in cybersecurity. Founders, analysts, CISOs, researchers. One thing that comes up again and again is that the problem space keeps getting bigger. Not just more threats—more complexity. That’s really the thread running through my recent TechSpective Podcast conversation with Clarence Chio, co-founder and CEO of Coverbase.

Security used to be easier to conceptualize. Not easier to solve, necessarily—but easier to frame. You had networks, endpoints, users, and a perimeter. Protect the edge. Monitor what’s inside. Respond when something goes wrong. That model doesn’t really exist anymore.

Today, most organizations operate in environments that span multiple clouds, dozens or hundreds of SaaS applications, APIs everywhere, and automated workflows connecting everything together. Identities are everywhere too—human users, service accounts, machine identities, AI agents. The number of things acting inside a system has exploded. And every one of those things represents potential risk.

Clarence and I spent a good part of the conversation talking about how that shift changes the nature of cybersecurity. It’s less about building walls and more about understanding behavior. Who is doing what? What systems are interacting? What’s normal, and what isn’t? That sounds simple, but it’s actually one of the hardest problems in security right now. The environment changes constantly. New tools get deployed. Developers spin up services. AI models start interacting with data pipelines and APIs. Keeping track of it all is a challenge.

Then there’s the AI angle. AI is showing up everywhere right now—on both sides of the security equation. Security vendors are embedding AI into their platforms to analyze data faster and automate responses. At the same time, attackers are experimenting with AI to generate malware, improve phishing, and automate reconnaissance.

But one thing Clarence pointed out—and I agree—is that AI doesn’t magically solve security problems. If anything, it tends to amplify whatever processes already exist. If your visibility is poor, AI doesn’t fix that. If your governance is weak, automation can actually make the problem worse. Technology alone rarely fixes systemic problems.

Another part of the discussion that stood out to me was the human side of security. It’s easy to focus on tools because that’s what vendors sell. But effective security programs depend heavily on the people running them. Security professionals need to understand the technology, obviously. But they also need context and judgment. They need to know how systems interact and how changes ripple across an environment. And maybe most important, they need the freedom to question assumptions.

That’s something Clarence emphasized during the conversation. In fast-moving technology environments, curiosity and critical thinking matter. Security teams can’t just follow checklists. They have to understand how systems behave and be able to spot when something doesn’t look right.

Which brings us back to complexity. The attack surface keeps growing. Infrastructure is more distributed. AI and automation are adding new layers of capability—and new layers of risk. There’s no single tool that solves that. What organizations can do is build better visibility, invest in people, and develop security programs that are designed to adapt rather than assume the environment will stay stable. That’s easier said than done, but it’s the direction things are moving.

If you’re working in security—or just trying to make sense of how AI and modern infrastructure are reshaping risk—I think you’ll find the conversation interesting. Clarence brings a thoughtful perspective, and we cover a lot of ground without getting lost in buzzwords. You can listen to the full episode of the TechSpective Podcast or watch the discussion on YouTube.
