YuraScanner: Leveraging LLMs for Task-driven Web App Scanning (god2025)
Web application scanners are popular and effective black-box testing tools, automating the detection of vulnerabilities by exploring and interacting with user interfaces. Despite their effectiveness, these scanners struggle to discover deeper states in modern web applications due to their limited understanding of workflows. This study addresses this limitation by introducing YuraScanner, a task-driven web application scanner that leverages large language models (LLMs) to autonomously execute tasks and workflows. YuraScanner operates as a goal-based agent, suggesting actions to achieve predefined objectives by processing webpages to extract semantic information. Unlike traditional methods that rely on user-provided traces, YuraScanner uses LLMs to bridge the semantic gap, making it web-application-agnostic. Using the XSS engine of Black Widow, YuraScanner tests discovered input points for vulnerabilities, improving the comprehensiveness and accuracy of the scanning process. We evaluated YuraScanner on 20 diverse web applications, focusing on task extraction, execution accuracy, and vulnerability detection. The results demonstrate YuraScanner's superiority in discovering new attack surfaces and deeper states, significantly improving vulnerability detection. Notably, YuraScanner identified 12 unique zero-day XSS vulnerabilities, compared to three found by Black Widow. This study highlights YuraScanner's potential to transform web application scanning with its automated, task-driven approach. Licensed to the public under https://creativecommons.org/licenses/by-sa/4.0/ about this event: https://c3voc.de
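The goal-based agent loop the abstract describes (extract semantic information from the page, let the LLM suggest the next action toward a predefined objective, repeat) can be sketched roughly as below. This is a minimal illustration under our own assumptions, not YuraScanner's actual API: all names (`extract_semantics`, `suggest_next_action`, `run_task`) are hypothetical, the "LLM policy" is replaced by a trivial keyword heuristic, and the subsequent XSS-testing step is omitted.

```python
import re
from dataclasses import dataclass

@dataclass
class Action:
    kind: str        # "fill" or "submit" in this toy sketch
    target: str      # form field name or form identifier
    value: str = ""

def extract_semantics(page_html: str) -> list:
    """Crude stand-in for LLM-based semantic extraction:
    just collect the form field names visible on the page."""
    return re.findall(r'name="([^"]+)"', page_html)

def suggest_next_action(goal: str, fields: list, filled: set):
    """Stand-in for the LLM policy: fill any goal-related field
    not yet filled, then submit once nothing is left to fill."""
    for f in fields:
        if f.lower() in goal.lower() and f not in filled:
            return Action("fill", f, "test-input")
    return Action("submit", "form") if fields else None

def run_task(goal: str, page_html: str, max_steps: int = 10) -> list:
    """Goal-based loop: extract semantics, ask the policy for an
    action, and repeat until submission or the step budget runs out."""
    trace, filled = [], set()
    for _ in range(max_steps):
        fields = extract_semantics(page_html)
        action = suggest_next_action(goal, fields, filled)
        if action is None:
            break
        trace.append(action)
        if action.kind == "fill":
            filled.add(action.target)
        if action.kind == "submit":
            break
    return trace
```

In the real system, each executed action would yield a new page state, and every discovered input point would additionally be handed to the XSS engine for testing.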
A CISO's Adventures in AI Wonderland (god2025)
As a CISO (or any other security expert) in the area of AI, you can find yourself in increasingly challenging and sometimes bizarre AI-related situations not unlike Alice's adventures in Wonderland. Depending on whom you speak to, people either have high (inflated?) expectations about the (magic?) benefits of AI for security efforts, or try to explain why "AI security Armageddon" is looming... and that is just the security part of the story. All other areas in your organization are heavily using or experimenting with AI (e.g., vibe coding, automation, decision making, etc.), challenging (or ignoring) established security practices. This talk tells the story of the daily experience of dealing with AI as a CISO in a cloud-application startup. Which experiments failed or were successful, which advice is helpful, what is difficult to apply in practice, which questions are still open... The motivation for this talk is to start a conversation among security experts on how we can shape a secure AI future and not get pushed into the role of being seen as "hindering" AI progress.
The Trust Trap - Security of Coding Assistants (god2025)
Coding assistants such as GitHub Copilot, Cursor, or Claude promise an efficiency boost for software development. But what impact does using these tools have on software security? This talk analyzes the advantages and disadvantages of coding assistants with regard to the security of the resulting code. It gives an overview of the current state of research and the benchmarks for the various models, and discusses the results. Beyond the significance of vulnerabilities introduced into the code itself, risks such as slopsquatting, model poisoning, and rules-file backdoors are explained. Finally, the talk gives recommendations on best practices for using coding assistants: from correct configuration and usage to guidelines for reviewing and testing such code.
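One of the risks named above, slopsquatting, arises when an assistant hallucinates a plausible-sounding package name that an attacker then registers on the public registry. A simple defense is to vet every assistant-suggested dependency against a known-good set before installing. The sketch below is our own illustration, not from the talk: the allowlist is a toy stand-in for a real lockfile or internal mirror, and `vet_dependencies` is a hypothetical helper name.

```python
# Toy allowlist; in practice this would come from your dependency
# lockfile or an internal package mirror, not a hard-coded set.
KNOWN_PACKAGES = {"requests", "numpy", "flask"}

def vet_dependencies(suggested: list, known: set = KNOWN_PACKAGES):
    """Split assistant-suggested package names into approved and
    suspicious. Suspicious names (possible hallucinations, and thus
    slopsquatting targets) should be reviewed manually before any
    `pip install` is run."""
    approved = [p for p in suggested if p.lower() in known]
    suspicious = [p for p in suggested if p.lower() not in known]
    return approved, suspicious
```

A registry lookup (e.g. checking whether the name exists at all, and how recently it was published) can tighten this further, but an allowlist check alone already blocks blind installation of hallucinated names.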
How we hacked Y Combinator companies' AI agents (god2025)
We hacked 7 of the 16 publicly accessible YC X25 AI agents. This allowed us to leak user data, execute code remotely, and take over databases, each within 30 minutes. In this session, we'll walk through the common mistakes these companies made and how you can mitigate these security concerns before your agents put your business at risk.
"I have no idea how to make it safer": Security and Privacy Mindsets of Browser Extension Developers (god2025)
Browser extensions are a powerful part of the Web ecosystem as they extend browser functionality and let users personalize their online experience. But with higher privileges than regular web apps, extensions bring unique security and privacy risks. Much like in web applications, vulnerabilities often creep in, not just through poor implementation, but also through gaps in developer awareness and ecosystem support. In this talk, we share insights from a recent study in which we interviewed and observed 21 extension developers across the world [1] as they worked on security- and privacy-related tasks that we designed based on our prior work and observations [2, 3]. Their live decision-making revealed common misconceptions, unexpected pain points, and systemic obstacles in the extension development lifecycle. Extending beyond our published results, we plan to highlight some of the untold anecdotes, insecure development practices, developers' threat perception, design-level challenges, and the misconceptions around them. The audience will take away the following from the presentation/discussion:
- Common insecure practices in extension development.
- Why security ≠ privacy ≠ store compliance, contrary to how extension developers often perceive it!
- Hidden design gaps and loopholes in the extension architecture that developers can't spot or comprehend.
- Anecdotes on the course of extension development in the era of LLMs.
- Developers, regulations (GDPR/CCPA/CRA), and a few "interesting" opinions.
- And, most importantly, why you should NOT give up on them just yet! :)
References:
[1] Agarwal, Shubham, et al. "I have no idea how to make it safer": Studying Security and Privacy Mindsets of Browser Extension Developers. Proceedings of the 34th USENIX Security Symposium. 2025.
[2] Agarwal, Shubham, Aurore Fass, and Ben Stock. Peeking through the Window: Fingerprinting Browser Extensions through Page-Visible Execution Traces and Interactions. Proceedings of the 31st ACM SIGSAC Conference on Computer and Communications Security. 2024.
[3] Agarwal, Shubham. Helping or Hindering? How Browser Extensions Undermine Security. Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security. 2022.