The ability of artificial intelligence (AI) to partner with the software engineer, doctor, or warfighter depends on whether these end users trust the AI system to partner effectively with them and deliver the outcomes promised. To build appropriate levels of trust, expectations must be managed for what AI can realistically deliver. In this podcast from the SEI's AI Division, Carol Smith, a senior research scientist specializing in human-machine interaction, joins design researchers Katherine-Marie Robinson and Alex Steiner to discuss how to measure the trustworthiness of an AI system and the questions organizations should ask before deciding whether to adopt a new AI technology.
Deep Learning in Depth: The Good, the Bad, and the Future
The Evolving Role of the Chief Risk Officer
Obsidian: A Safer Blockchain Programming Language
Agile DevOps
Kicking Butt in Computer Science: Women in Computing at Carnegie Mellon University
Is Software Spoiling Us? Technical Innovations in the Department of Defense
Is Software Spoiling Us? Innovations in Daily Life from Software
How Risk Management Fits into Agile & DevOps in Government
5 Best Practices for Preventing and Responding to Insider Threat
Pharos Binary Static Analysis: An Update
Positive Incentives for Reducing Insider Threat
Mission-Practical Biometrics
At Risk Emerging Technology Domains
DNS Blocking to Disrupt Malware
Best Practices: Network Border Protection
Verifying Software Assurance with IBM’s Watson
The CERT Software Assurance Framework
Scaling Agile Methods
Ransomware: Best Practices for Prevention and Response
Integrating Security in DevOps