The ability of artificial intelligence (AI) to partner with the software engineer, doctor, or warfighter depends on whether these end users trust the AI system to work effectively with them and deliver the outcomes promised. To build appropriate levels of trust, expectations must be managed for what AI can realistically deliver. In this podcast from the SEI's AI Division, Carol Smith, a senior research scientist specializing in human-machine interaction, joins design researchers Katherine-Marie Robinson and Alex Steiner to discuss how to measure the trustworthiness of an AI system, as well as the questions organizations should ask before deciding whether to adopt a new AI technology.
10 Types of Application Security Testing Tools and How to Use Them
Using Test Suites for Static Analysis Alert Classifiers
Blockchain at CMU and Beyond
Leading in the Age of Artificial Intelligence
Deep Learning in Depth: The Future of Deep Learning
Deep Learning in Depth: Adversarial Machine Learning
System Architecture Virtual Integration: ROI on Early Discovery of Defects
Deep Learning in Depth: The Importance of Diverse Perspectives
A Technical Strategy for Cybersecurity
Best Practices for Security in Cloud Computing
Risks, Threats, and Vulnerabilities in Moving to the Cloud
Deep Learning in Depth: IARPA's Functional Map of the World Challenge
Deep Learning in Depth: Deep Learning versus Machine Learning
How to Be a Network Traffic Analyst
Workplace Violence and Insider Threat
Why Does Software Cost So Much?
Cybersecurity Engineering & Software Assurance: Opportunities & Risks
Software Sustainment and Product Lines
Best Practices in Cyber Intelligence
Deep Learning in Depth: The Good, the Bad, and the Future