Securing the AI Frontier: Unmasking LLM and RAG Vulnerabilities
CISO Insights: Voices in Cybersecurity

Securing the AI Frontier: Unmasking LLM and RAG Vulnerabilities

2025-05-27
Large language models present new security challenges, especially when they leverage external data sources through Retrieval Augmented Generation (RAG) architectures . This podcast explores the unique attack techniques that exploit these systems, including indirect prompt injection and RAG poisoning. We delve into how offensive testing methods like AI red teaming are crucial for identifying and addressing these critical vulnerabilities in the evolving AI landscape. www.securitycareers.help/navigating-the-ai-frontier-a-cisos-perspective-on-securing-generative-ai/
View more
Comments (3)

More Episodes

All Episodes>>

Get this podcast on your phone, Free

Create Your Podcast In Minutes

  • Full-featured podcast site
  • Unlimited storage and bandwidth
  • Comprehensive podcast stats
  • Distribute to Apple Podcasts, Spotify, and more
  • Make money with your podcast
Get Started
It is Free