In this SEI Podcast, Dr. Eric Heim, a senior machine learning research scientist at Carnegie Mellon University's Software Engineering Institute (SEI), discusses quantifying uncertainty in machine-learning (ML) systems. ML systems can make wrong predictions, and the uncertainty estimates they attach to those predictions can themselves be inaccurate, which makes it hard to know when a prediction should be trusted. Heim also discusses new techniques to quantify uncertainty, identify its causes, and efficiently update ML models to reduce the uncertainty in their predictions. The work of Heim and his colleagues at the SEI Emerging Technology Center closes the gap between scientific and mathematical advances from the ML research community and the practitioners who use ML systems in real-life contexts, such as software engineers, software developers, data scientists, and system developers.
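The episode does not prescribe a particular method, but one common way to quantify predictive uncertainty is ensemble disagreement: average the class probabilities from several independently trained models and measure the entropy of the averaged distribution. The sketch below illustrates that general idea only; the function names and example probabilities are illustrative, not taken from the SEI's work.

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (in nats) of a predictive distribution.
    Higher entropy means the prediction is less certain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def ensemble_uncertainty(member_probs):
    """Average the class probabilities of several ensemble members,
    then score the entropy of the mean. Disagreement among members
    spreads the mean distribution out and raises the entropy."""
    n_classes = len(member_probs[0])
    mean = [sum(m[c] for m in member_probs) / len(member_probs)
            for c in range(n_classes)]
    return mean, predictive_entropy(mean)

# Members agree: low uncertainty
mean1, h1 = ensemble_uncertainty([[0.90, 0.10], [0.95, 0.05], [0.92, 0.08]])
# Members disagree: high uncertainty
mean2, h2 = ensemble_uncertainty([[0.90, 0.10], [0.20, 0.80], [0.50, 0.50]])
print(h1 < h2)  # disagreement yields the higher entropy
```

For a binary classifier the entropy tops out at ln 2 (about 0.69 nats) when the averaged prediction is a 50/50 split, which gives a natural scale for flagging inputs whose predictions should not be trusted.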
Zero Trust Architecture: Best Practices Observed in Industry
Automating Infrastructure as Code with Ansible and Molecule
Identifying and Preventing the Next SolarWinds
A Penetration Testing Findings Repository
Understanding Vulnerabilities in the Rust Programming Language
We Live in Software: Engineering Societal-Scale Systems
Secure by Design, Secure by Default
Key Steps to Integrate Secure by Design into Acquisition and Development
An Exploration of Enterprise Technical Debt
The Messy Middle of Large Language Models
An Infrastructure-Focused Framework for Adopting DevSecOps
Software Security in Rust
Improving Interoperability in Coordinated Vulnerability Disclosure with Vultron
Asking the Right Questions to Coordinate Security in the Supply Chain
Securing Open Source Software in the DoD
A Model-Based Tool for Designing Safety-Critical Systems
Managing Developer Velocity and System Security with DevSecOps
A Method for Assessing Cloud Adoption Risks
Software Architecture Patterns for Deployability
ML-Driven Decision Making in Realistic Cyber Exercises