The ability of artificial intelligence (AI) to partner with the software engineer, doctor, or warfighter depends on whether these end users trust the AI system to partner effectively with them and deliver the outcomes promised. To build appropriate levels of trust, expectations must be managed for what AI can realistically deliver. In this podcast from the SEI's AI Division, Carol Smith, a senior research scientist specializing in human-machine interaction, joins design researchers Katherine-Marie Robinson and Alex Steiner to discuss how to measure the trustworthiness of an AI system, as well as questions organizations should ask before determining whether to employ a new AI technology.