Super Prompt: Generative AI w/ Tony Wan
Technology
How do you extract prohibited information from ChatGPT? What are the Grandma and DAN exploits? Why do they work? What can Large Language Model (LLM) companies do to protect themselves? Grandma exploits, or hacks, are ways to trick ChatGPT into giving you information that violates company policy. For example, tricking ChatGPT into revealing confidential, dangerous, or inappropriate information. "Jailbreaking" is slang for removing the artificial limitations on iPhones in order to install apps not approved by Apple. It turns out there are ways to jailbreak LLMs, too. The tech companies supplying LLMs as a service want to provide a safe, legally compliant environment. How can they do this without hampering the flexibility and usefulness of creative prompting?
For more information, check out https://www.superprompt.fm, where you can contact me and/or sign up for our newsletter.
Self-Driving Cars | Autonomous Vehicles Part 2 | Episode 5
Self-Driving Cars | Autonomous Vehicles Part 1 | Episode 4
Fooling Big Brother | Facial Recognition | Mass Surveillance | Adversarial Machine Learning | Attack Methods | Episode 3
GPT-3 | ChatGPT Under the Hood | Natural Language Processing | Episode 2
"Hot Dog. Not Hot Dog." AI from Silicon Valley, the TV Series | How to Build and Train AI | Image Classification | Episode 1
Welcome to the Super Prompt podcast!