The ability of artificial intelligence (AI) to partner with the software engineer, doctor, or warfighter depends on whether these end users trust the AI system to partner effectively with them and deliver the outcome promised. To build appropriate levels of trust, expectations must be managed for what AI can realistically deliver. In this podcast from the SEI's AI Division, Carol Smith, a senior research scientist specializing in human-machine interaction, joins design researchers Katherine-Marie Robinson and Alex Steiner to discuss how to measure the trustworthiness of an AI system, as well as the questions organizations should ask before deciding whether to adopt a new AI technology.
Improving the Common Vulnerability Scoring System
Why Software Architects Must Be Involved in the Earliest Systems Engineering Activities
Selecting Metrics for Software Assurance
AI in Humanitarian Assistance and Disaster Response
The AADL Error Library: 4 Families of Systems Errors
Women in Software and Cybersecurity: Suzanne Miller
Privacy in the Blockchain Era
Cyber Intelligence: Best Practices and Biggest Challenges
Assessing Cybersecurity Training
DevOps in Highly Regulated Environments
Women in Software and Cybersecurity: Dr. Ipek Ozkaya
The Role of the Software Factory in Acquisition and Sustainment
Defending Your Organization Against Business Email Compromise
My Story in Computing with Dr. Eliezer Kanal
Women in Software and Cybersecurity: Eileen Wrubel
Managing Technical Debt: A Focus on Automation, Design, and Architecture
Women in Software and Cybersecurity: Grace Lewis
Women in Software and Cybersecurity: Bobbie Stempfley
Women in Software and Cybersecurity: Dr. Lorrie Cranor
Leading in the Age of Artificial Intelligence