Computation and Language - CARES: Comprehensive Evaluation of Safety and Adversarial Robustness in Medical LLMs
PaperLedge

2025-05-19
Hey PaperLedge crew, Ernis here, ready to dive into some fascinating research! Today, we're tackling something super relevant: the safety of those AI language models everyone's talking about, especially when they're being used in healthcare. Think about it: these large language models, or LLMs, are getting smarter and are being used more and more in medicine. That's awesome, but it also raises some big questions. Like, how can we be sure they're actually safe? Can they be tricked into giving the...