Can your AI assistant become a silent data leak? In this episode of Cyberside Chats, Sherri Davidoff and Matt Durrin break down EchoLeak, a zero-click exploit in Microsoft 365 Copilot that shows how attackers can manipulate AI systems using nothing more than an email. No clicks. No downloads. Just a cleverly crafted message that turns your AI into an unintentional insider threat.
They also share a real-world discovery from LMG Security’s pen testing team: how prompt injection was used to extract system prompts and override behavior in a live web application. Wi...
Can your AI assistant become a silent data leak? In this episode of Cyberside Chats, Sherri Davidoff and Matt Durrin break down EchoLeak, a zero-click exploit in Microsoft 365 Copilot that shows how attackers can manipulate AI systems using nothing more than an email. No clicks. No downloads. Just a cleverly crafted message that turns your AI into an unintentional insider threat.
They also share a real-world discovery from LMG Security’s pen testing team: how prompt injection was used to extract system prompts and override behavior in a live web application. With examples ranging from corporate chatbots to real-world misfires at Samsung and Chevrolet, this episode unpacks what happens when AI is left untested—and why your security strategy must adapt.
Key Takeaways
- Limit and review the data sources your LLM can access—ensure it doesn’t blindly ingest untrusted content like inbound email, shared docs, or web links.
- Audit AI integrations for prompt injection risks—treat language inputs like code and include them in standard threat models.
- Add prompt injection testing to every web app and email flow assessment, even if you’re using trusted APIs or cloud-hosted models.
- Red-team your LLM tools using subtle, natural-sounding prompts—not just obvious attack phrases.
- Monitor and restrict outbound links from AI-generated content, and validate any use of CSP-approved domains like Microsoft Teams.
Resources
- EchoLeak technical breakdown by Aim Security
- LMG Security Blog: Prompt Injection in Web Apps
- Chevrolet chatbot tricked into $1 car deal
- Microsoft 365 Copilot Overview
#EchoLeak #Cybersecurity #Cyberaware #CISO #Microsoft #Microsoft365 #Copilot #AI #GenAI #AIsecurity #RiskManagement
View more