AI is widely lauded as a way of reducing the burden on human online content moderators. However, to understand whether AI could, and should, replace human moderators, we need to understand its strengths and limitations. In this episode our hosts speak to researchers Paul Röttger and Bertie Vidgen about how they are tackling online hate speech, in particular through their work on HateCheck, a suite of tests for hate speech detection models.
Where next for self-driving vehicles?
Footballers on Twitter: What is fair game?
Turing deployment at sea: identifying plankton in real time
Machine Learning for Armed Conflict Mediation
Living with Machines
Data Science for Social Good: Predicting air pollution in a post-COVID world?
The right to privacy
The Turing Podcast asks: Where is Bitcoin headed?
Careers in data science with Accenture
"The problems of AI" with James Geddes
You don’t need anybody’s permission to be a great mathematician
Nicol Turner Lee: Bridging the digital divide
How to communicate science to non-specialists
Tackling the Infodemic
How can AI help us understand breast cancer
Palaeoanalytics: Using Data Science and Machine Learning to answer questions about Human Evolution
Optimizing Policy for Sustainable Development
Covid lockdowns: which policies worked best?
In conversation with Sue Black