In this SEI Podcast, Dr. Eric Heim, a senior machine learning research scientist at Carnegie Mellon University's Software Engineering Institute (SEI), discusses the quantification of uncertainty in machine-learning (ML) systems. ML systems can make wrong predictions and give inaccurate estimates of how uncertain those predictions are, and it is often hard to anticipate when they will fail. Heim also discusses new techniques to quantify uncertainty, identify its causes, and efficiently update ML models to reduce uncertainty in their predictions. The work of Heim and colleagues at the SEI Emerging Technology Center closes the gap between the scientific and mathematical advances of the ML research community and the practitioners who use these systems in real-life contexts, such as software engineers, software developers, data scientists, and system developers.
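The episode itself presents no code, but one common baseline for quantifying a classifier's predictive uncertainty is the entropy of its output distribution: a prediction spread evenly across classes carries high entropy (high uncertainty), while a peaked prediction carries low entropy. The sketch below is an illustrative assumption, not a technique described in the episode; all function names are hypothetical.

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predictive_entropy(probs):
    """Shannon entropy of the predictive distribution, in nats.

    Higher entropy means the model is less certain which class is correct.
    """
    return -sum(p * math.log(p) for p in probs if p > 0)

# A confident prediction: one class score dominates the others.
confident = softmax([8.0, 0.5, 0.2])
# An uncertain prediction: the class scores are nearly tied.
uncertain = softmax([1.1, 1.0, 0.9])

print(predictive_entropy(confident) < predictive_entropy(uncertain))
```

Entropy is only a first-pass measure: a model can be confidently wrong, which is exactly the failure mode calibration and the richer uncertainty-quantification techniques discussed in the episode aim to address.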