The ability of artificial intelligence (AI) to partner with the software engineer, doctor, or warfighter depends on whether these end users trust the AI system to partner effectively with them and deliver the outcome promised. To build appropriate levels of trust, expectations must be managed for what AI can realistically deliver. In this podcast from the SEI's AI Division, Carol Smith, a senior research scientist specializing in human-machine interaction, joins design researchers Katherine-Marie Robinson and Alex Steiner to discuss how to measure the trustworthiness of an AI system, as well as the questions an organization should ask before deciding whether to adopt a new AI technology.
My Story in Computing with Sam Procter
Developing and Using a Software Bill of Materials Framework
The Importance of Diversity in Cybersecurity: Carol Ware
The Importance of Diversity in Software Engineering: Suzanne Miller
The Importance of Diversity in Artificial Intelligence: Violet Turri
Using Large Language Models in the National Security Realm
Atypical Applications of Agile and DevSecOps Principles
When Agile and Earned Value Management Collide: 7 Considerations for Successful Interaction
The Impact of Architecture on Cyber-Physical Systems Safety
ChatGPT and the Evolution of Large Language Models: A Deep Dive into 4 Transformative Case Studies
The Cybersecurity of Quantum Computing: 6 Areas of Research
User-Centric Metrics for Agile
The Product Manager’s Evolving Role in Software and Systems Development
Actionable Data in the DevSecOps Pipeline
Insider Risk Management in the Post-Pandemic Workplace
An Agile Approach to Independent Verification and Validation
Zero Trust Architecture: Best Practices Observed in Industry
Automating Infrastructure as Code with Ansible and Molecule
Identifying and Preventing the Next SolarWinds