In the AI Risk Reward podcast, our host Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. (aicrisk.com), interviews guests about balancing the risk and reward of artificial intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.
In this episode, Alec sits down with Peter Mancini, founder of A8A8 and a seasoned data science expert who has worked with AI since 2005. Peter shares his unconventional entry into artificial intelligence and reflects on lessons learned from years of deploying AI and quantum computing in high-stakes environments, including work for the US Army and financial institutions. The conversation explores the importance of trust, metacognition, and continuous risk assessment throughout the AI lifecycle, with practical anecdotes ranging from model uncertainty in banking to emergent cybersecurity vulnerabilities. Peter discusses the implications of AI’s collaborative nature, the ethical dilemmas posed by AI-generated content, and the evolving intersection of AI, quantum computing, and blockchain. The episode concludes with concrete recommendations on transparency, explainability, and incident response, emphasizing the need for vigilance against both known and unforeseen risks, including elusive black swan events.
Summary:
Trust and Verification: Peter emphasizes that over-trusting AI models without robust verification is a primary and often overlooked risk.
Metacognition in Risk Management: He advocates for ongoing critical thinking, group validation, and policy over rigid frameworks to manage AI risks.
AI-Driven Cybersecurity Threats: Real-world examples illustrate how AI can inadvertently expose sensitive associations and aid adversaries, highlighting the need for advanced guardrails.
Quantum Computing Integration: Peter discusses how quantum computing accelerates probabilistic analysis but may also expose encryption vulnerabilities and new risk vectors.
Ethical and Societal Impacts: The episode covers manipulation risks, deepfake challenges, and the essential role of transparency and explainability for both users and developers.
Copyright © 2026 by Artificial Intelligence Risk, Inc.