In this SEI Podcast, Dr. Eric Heim, a senior machine learning research scientist at Carnegie Mellon University's Software Engineering Institute (SEI), discusses quantifying uncertainty in machine-learning (ML) systems. ML systems can make wrong predictions and give inaccurate estimates of how uncertain those predictions are, and it is often difficult to know in advance when a prediction will be wrong. Heim also discusses new techniques to quantify uncertainty, identify its causes, and efficiently update ML models to reduce the uncertainty in their predictions. The work of Heim and his colleagues at the SEI Emerging Technology Center closes the gap between scientific and mathematical advances from the ML research community and the practitioners who use ML systems in real-world contexts, such as software engineers, software developers, data scientists, and system developers.
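The podcast describes uncertainty quantification only at a high level. As one common illustrative measure (not necessarily the technique Heim's team uses), the entropy of a classifier's predicted probability distribution summarizes how uncertain the model is about a single prediction: entropy is near zero when one class dominates and largest when the classes are nearly tied. A minimal sketch:

```python
import math

def softmax(logits):
    """Convert raw model scores (logits) to probabilities."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predictive_entropy(probs):
    """Entropy of the predictive distribution: close to 0 when the
    model is confident, up to log(num_classes) when it is maximally
    uncertain (all classes equally likely)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Hypothetical logits for two predictions from some classifier:
confident = softmax([8.0, 0.5, 0.2])   # one class clearly dominates
uncertain = softmax([1.0, 1.1, 0.9])   # three classes nearly tied

print(predictive_entropy(confident) < predictive_entropy(uncertain))  # True
```

Note that this measures only the uncertainty the model itself reports; as the episode points out, those self-reported estimates can be inaccurate, which is what motivates more careful quantification and calibration techniques.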