AI is widely lauded as a way of reducing the burden on human online content moderators. However, to understand whether AI could, and should, replace human moderators, we need to understand its strengths and limitations. In this episode, our hosts speak to researchers Paul Röttger and Bertie Vidgen about how they are tackling online hate speech, in particular through their work on HateCheck, a suite of functional tests for hate speech detection models.
Our AI Futures - Lord Chris Holmes and the AI Bill
Project Bluebird: Revolutionising Air Traffic Control with AI and digital twins
AI for Cyber Defence
Building Digital Tools for Polar Research
Data Science for the Arts and Humanities
Algorithmic Justice
How do we regulate AI?
Diagnosing Dementia with AI
Making the world add up with Tim Harford
The Coffee Pod - Hussein Rappel
The Coffee Pod - Malvika Sharan
The Coffee Pod - Fernando Benitez
The Coffee Pod - Tom Andersson
The Coffee Pod - Domenic DiFrancesco
How to Speak Whale
The Coffee Pod - Ruoyun Hui
AI in the financial sector
The Stats Gap
How much can we limit the rising of the seas?
Where next for self-driving vehicles?