In this podcast from the Carnegie Mellon University Software Engineering Institute, Carol Smith, a senior research scientist in human-machine interaction, and Jonathan Spring, a senior vulnerability researcher, discuss the hidden sources of bias in artificial intelligence (AI) systems and how systems developers can raise their awareness of bias, mitigate consequences, and reduce risks.
Women in Software and Cybersecurity: Eileen Wrubel
Managing Technical Debt: A Focus on Automation, Design, and Architecture
Women in Software and Cybersecurity: Grace Lewis
Women in Software and Cybersecurity: Bobbie Stempfley
Women in Software and Cybersecurity: Dr. Lorrie Cranor
Leading in the Age of Artificial Intelligence
Applying Best Practices in Network Traffic Analysis
10 Types of Application Security Testing Tools and How to Use Them
Using Test Suites for Static Analysis Alert Classifiers
Blockchain at CMU and Beyond
Deep Learning in Depth: The Future of Deep Learning
Deep Learning in Depth: Adversarial Machine Learning
System Architecture Virtual Integration: ROI on Early Discovery of Defects
Deep Learning in Depth: The Importance of Diverse Perspectives
A Technical Strategy for Cybersecurity
Best Practices for Security in Cloud Computing
Risks, Threats, and Vulnerabilities in Moving to the Cloud
Deep Learning in Depth: IARPA's Functional Map of the World Challenge
Deep Learning in Depth: Deep Learning versus Machine Learning