Large language models present new security challenges, especially when they leverage external data sources through Retrieval Augmented Generation (RAG) architectures . This podcast explores the unique attack techniques that exploit these systems, including indirect prompt injection and RAG poisoning. We delve into how offensive testing methods like AI red teaming are crucial for identifying and addressing these critical vulnerabilities in the evolving AI landscape.
www.securitycareers.help/navigating-the-ai-frontier-a-cisos-perspective-on-securing-generative-ai/
www.hackernoob.tips/the-new-frontier-how-were-bending-generative-ai-to-our-will