In this SEI Podcast, Dr. Eric Heim, a senior machine learning research scientist at Carnegie Mellon University's Software Engineering Institute (SEI), discusses quantifying uncertainty in machine-learning (ML) systems. ML systems can make wrong predictions and give inaccurate estimates of how uncertain those predictions are, and it is often difficult to anticipate when they will be wrong. Heim also discusses new techniques to quantify uncertainty, identify its causes, and efficiently update ML models to reduce the uncertainty in their predictions. The work of Heim and his colleagues at the SEI Emerging Technology Center closes the gap between scientific and mathematical advances from the ML research community and the practitioners who use ML systems in real-life contexts, such as software engineers, software developers, data scientists, and system developers.
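The episode does not include code, but one common baseline for the kind of uncertainty quantification it describes is predictive entropy over an ensemble of model outputs (e.g., several stochastic forward passes or independently trained models). The sketch below is illustrative only, not the specific techniques Heim discusses; all names and numbers are made up for the example.

```python
import numpy as np

def softmax(logits):
    """Convert raw model scores to class probabilities (numerically stable)."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(probs):
    """Shannon entropy (in nats) of a predictive distribution.

    Higher entropy means the model is less certain which class is correct.
    """
    return -np.sum(probs * np.log(probs + 1e-12), axis=-1)

# Hypothetical logits from two ensemble members for two inputs, three classes.
logits = np.array([
    [[2.0, 0.1, 0.1], [0.3, 0.2, 0.4]],  # member 1
    [[1.8, 0.2, 0.0], [0.1, 0.5, 0.3]],  # member 2
])

# Average the ensemble's probabilities, then measure entropy per input.
mean_probs = softmax(logits).mean(axis=0)
uncertainty = predictive_entropy(mean_probs)

# The first input (peaked logits) yields low entropy; the second
# (near-uniform logits) yields high entropy, flagging it for review.
```

In practice, inputs with high predictive entropy are the ones a practitioner might route to a human reviewer or use to prioritize model updates.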