In this SEI Podcast, Dr. Eric Heim, a senior machine learning research scientist at Carnegie Mellon University's Software Engineering Institute (SEI), discusses the quantification of uncertainty in machine-learning (ML) systems. ML systems can make wrong predictions and give inaccurate estimates of the uncertainty of those predictions, and it is often difficult to know in advance when a prediction will be wrong. Heim also discusses new techniques to quantify uncertainty, identify its causes, and efficiently update ML models to reduce the uncertainty in their predictions. The work of Heim and his colleagues at the SEI Emerging Technology Center closes the gap between the scientific and mathematical advances of the ML research community and the practitioners who use these systems in real-world contexts, such as software engineers, software developers, data scientists, and system developers.
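The podcast does not detail the specific techniques Heim's team developed, but as a generic illustration of the idea of quantifying a model's predictive uncertainty, one common baseline is the Shannon entropy of a classifier's predicted class distribution: the closer the distribution is to uniform, the less certain the model is. A minimal sketch (not a method from the episode):

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (in nats) of a predicted class distribution.

    Higher entropy means the model is less certain about its prediction;
    zero entropy means the model puts all probability on one class.
    """
    return -sum(p * math.log(p) for p in probs if p > 0)

# A confident prediction yields low entropy...
confident = predictive_entropy([0.98, 0.01, 0.01])
# ...while a near-uniform prediction yields high entropy.
uncertain = predictive_entropy([0.34, 0.33, 0.33])
```

In practice, flagging predictions whose entropy exceeds a threshold is one simple way to route uncertain cases to a human reviewer; richer approaches (e.g., ensembles or Bayesian methods) estimate uncertainty more faithfully.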