The ability of artificial intelligence (AI) to partner with the software engineer, doctor, or warfighter depends on whether these end users trust the AI system to partner effectively with them and deliver the outcome promised. To build appropriate levels of trust, expectations must be managed for what AI can realistically deliver. In this podcast from the SEI's AI Division, Carol Smith, a senior research scientist specializing in human-machine interaction, joins design researchers Katherine-Marie Robinson and Alex Steiner to discuss how to measure the trustworthiness of an AI system, as well as questions that organizations should ask before deciding whether to adopt a new AI technology.
SEI Fellows Series: Peter Feiler
NTP Best Practices
Establishing Trust in Disconnected Environments
Distributed Artificial Intelligence in Space
Verifying Distributed Adaptive Real-Time Systems
10 At-Risk Emerging Technologies
Technical Debt as a Core Software Engineering Practice
DNS Best Practices
Three Roles and Three Failure Patterns of Software Architects
Security Modeling Tools
Best Practices for Preventing and Responding to Distributed Denial of Service (DDoS) Attacks
Cyber Security Engineering for Software and Systems Assurance
Moving Target Defense
Improving Cybersecurity Through Cyber Intelligence
A Requirement Specification Language for AADL
Becoming a CISO: Formal and Informal Requirements
Predicting Quality Assurance with Software Metrics and Security Methods
Network Flow and Beyond
A Community College Curriculum for Secure Software Development
Security and the Internet of Things