Security, bias risks are inherent in GenAI black box models
Targeting AI

2024-03-25
From bias to hallucinations, generative AI models are clearly far from perfect and carry real risks. Most recently, tech giants, notably Google, have run into trouble after their models made egregious mistakes that reflect problems inherent in the data sets on which large language models (LLMs) are trained. Microsoft faced criticism when models from its partner OpenAI generated disturbing images of monsters and women. The problem stems from the architecture of the LLMs,...