The ability of artificial intelligence (AI) to partner with the software engineer, doctor, or warfighter depends on whether these end users trust the AI system to partner effectively with them and deliver the outcomes promised. To build appropriate levels of trust, expectations must be managed for what AI can realistically deliver. In this podcast from the SEI's AI Division, Carol Smith, a senior research scientist specializing in human-machine interaction, joins design researchers Katherine-Marie Robinson and Alex Steiner to discuss how to measure the trustworthiness of an AI system, as well as questions organizations should ask before deciding whether to employ a new AI technology.