AI Risk Reward

https://feeds.captivate.fm/ai-risk-reward/
1 Follower · 86 Episodes
I am your host, Alec Crawford, Founder and CEO of Artificial Intelligence Risk, Inc., and this is AI Risk-Reward, a podcast about balancing the risk and reward of using AI personally, professionally, and as a large organization! We will discuss hot topics such as: Will AI take my job or make it better? When I ask ChatGPT work questions, is that even safe? From an ethical perspective, is it enough for big companies to anonymize private data before using it? (Probably not.) I am discussing...

Episode List

Deep Dive: Trust, Quantum Computing, and the Future of AI Risk with Peter Mancini, Founder of A8A8

Mar 24th, 2026 5:00 AM

In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. (aicrisk.com), interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.

In this episode, Alec sits down with Peter Mancini, founder of A8A8 and a seasoned data science expert who has leveraged AI since 2005. Peter shares his unconventional entry into artificial intelligence and reflects on key lessons learned from years of deploying AI and quantum computing in high-stakes environments, including work for the US Army and financial institutions. The conversation explores the critical importance of trust, metacognition, and continuous risk assessment throughout the AI lifecycle, with practical anecdotes ranging from model uncertainty in banking to emergent cybersecurity vulnerabilities. Peter discusses the profound implications of AI’s collaborative nature, the ethical dilemmas posed by AI-generated content, and the evolving intersection of AI, quantum computing, and blockchain. The episode concludes with concrete recommendations for transparency, explainability, and incident response, emphasizing the need for vigilance against both known and unforeseen risks, including elusive black swan events.

Summary:
  • Trust and Verification: Peter emphasizes that over-trusting AI models without robust verification is a primary and often overlooked risk.
  • Metacognition in Risk Management: He advocates for ongoing critical thinking, group validation, and policy over rigid frameworks to manage AI risks.
  • AI-Driven Cybersecurity Threats: Real-world examples illustrate how AI can inadvertently expose sensitive associations and aid adversaries, highlighting the need for advanced guardrails.
  • Quantum Computing Integration: Peter discusses how quantum computing accelerates probabilistic analysis but may also expose encryption vulnerabilities and new risk vectors.
  • Ethical and Societal Impacts: The episode covers manipulation risks, deepfake challenges, and the essential role of transparency and explainability for both users and developers.

Referenced in this episode:
  • Companies/Organizations: A8A8; Artificial Intelligence Risk, Inc.; US Army; Fidelity Investments; Rocket Mortgage; OpenAI; Google; Meta; Microsoft
  • Movies: Blade Runner

Copyright © 2026 by Artificial Intelligence Risk, Inc.

What’s Working in AI Use Cases Now: Lucas Erb, LinkedIn Top Voice & AIexperts.com Founder

Mar 17th, 2026 5:00 AM

In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. (aicrisk.com), interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.

In this episode, Alec welcomes Lucas Erb, Founder of AIexperts.com and seasoned advisor on AI strategy, who shares his journey from early computer science interests to consulting at Deloitte, and ultimately founding his own firm. Lucas discusses the evolution of AI adoption, emphasizing the critical gap in mid-market business AI enablement and describing how his company demystifies automation and agent-based solutions for this segment. Key practical examples are explored, focusing on AI’s real-world impact—particularly in sales automation and productivity—rather than generic tool adoption. The conversation also dives deep into the ethical and social challenges of AI, highlighting the ongoing risks of bias and the necessity for thoughtful, transparent implementation. Alec and Lucas conclude with insights into future workforce implications, AI for good initiatives, and advice for young professionals navigating the rapidly changing technology landscape.

Summary:
  • AI Journey: Lucas Erb recounts his path from early technical curiosity to founding AIexperts.com, highlighting his time at HP and Deloitte.
  • Mid-Market Enablement: He identifies a critical gap in AI adoption for midsize businesses and shares how his firm provides practical, ROI-driven automation.
  • Ethical Challenges: The episode addresses pressing issues around model bias, data selection, and the importance of ongoing evaluation to ensure fairness.
  • Future of Work: Discussion centers on the shifting landscape for new graduates and the need for leaders to shape a responsible AI-driven workforce.
  • AI for Good: Lucas underscores the importance of broad participation in AI ethics and safety, stressing that collective action is necessary to keep pace with innovation.

Referenced in this episode:
  • Companies/Organizations: AIexperts.com; Artificial Intelligence Risk, Inc.; Deloitte; HP; Anthropic; Accenture; McKinsey; Harvard University (AI for Human Flourishing Program); NASDAQ; MIT; Global AI Ethics Institute; Xerox PARC; Apple
  • Movies: Inception; Jurassic Park

Copyright © 2026 by Artificial Intelligence Risk, Inc.

Deep Dive: AI Policy and Risk Governance with Asad Ramzanali, Director of AI and Tech Policy

Mar 10th, 2026 5:00 AM

In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. (aicrisk.com), interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.

In this deep dive episode, Alec welcomes Asad Ramzanali, Director of AI and Tech Policy at the Vanderbilt Policy Accelerator, for a comprehensive discussion on the current landscape of AI policy and risk governance. Asad explains how AI’s broad and general-purpose nature requires sector-specific regulatory strategies, emphasizing that existing frameworks must adapt to both new and exacerbated risks. The conversation covers the challenges of benchmarking and evaluating large models, the balance between federal and state governance, and the ongoing debate over regulation versus innovation. Asad highlights the importance of direct regulatory interventions, robust enforcement mechanisms, and maintaining public trust, particularly as AI adoption accelerates across public and private sectors. The episode closes with reflections on economic disruption, business model risks, and future research priorities in AI policy.

Summary:
  • Defining AI Risk: Asad stresses the need for adaptable, use-case-driven frameworks due to AI’s general-purpose scope.
  • Sectoral Regulation: Different regulators must address AI risks where they specifically arise, especially in finance, health, and national security.
  • Benchmarking Challenges: Evaluating AI models requires independent, evolving methodologies, not just self-reported metrics from companies.
  • Regulation vs. Innovation: The current regulatory environment is far from overreaching, and well-crafted policies can actually foster safer innovation.
  • Accountability and Public Trust: Clear liability, enforcement, and transparency are critical for democratic legitimacy and effective AI risk management.

Referenced in this episode:
  • Companies/Organizations: Vanderbilt Policy Accelerator; Artificial Intelligence Risk, Inc.; Vanderbilt University; FDA (U.S. Food and Drug Administration); FCC (Federal Communications Commission); NIST (National Institute of Standards and Technology); OpenAI; Anthropic; Google; NOAA (National Oceanic and Atmospheric Administration); Hamilton Project (Brookings Institution); Global AI Ethics Institute
  • Movies: Terminator

Copyright © 2026 by Artificial Intelligence Risk, Inc.

Rethinking Risk: Agentic AI, Ethical Insurance, and Tanner Hackett’s Journey with Counterpart

Mar 3rd, 2026 5:00 AM

In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. (aicrisk.com), interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.

In this episode, Alec welcomes Tanner Hackett, CEO and founder of Counterpart, to discuss how AI and agentic technology are transforming the insurance industry. Tanner shares his unique entrepreneurial background, highlighting how data-driven decision-making has been the common thread across his ventures in e-commerce, marketing technology, and now, insurance. He explains how Counterpart leverages agentic AI to streamline underwriting, enhance transparency, and proactively manage risk for commercial clients, while emphasizing the importance of human expertise in high-stakes decisions. The conversation also touches on the ethical and regulatory challenges of integrating AI into insurance, including the need for change management within legacy organizations. Tanner offers candid advice to startup founders and recommends resources for keeping pace with AI innovation, before closing with a spirited lightning round on topics ranging from startup fundraising events to Lord of the Rings.

Summary:
  • Entrepreneurial Journey: Tanner Hackett traces his path from e-commerce to founding Counterpart, focusing on the power of data in reshaping industries.
  • Agentic AI in Insurance: Counterpart uses agentic AI to automate and improve insurance workflows, while maintaining a critical human-in-the-loop for complex risk assessment.
  • Ethics and Regulation: The episode explores the ethical complexities and regulatory lag in insurance AI, with commercial lines facing fewer immediate ethical dilemmas than personal lines.
  • Industry Transformation: Tanner highlights the slow but inevitable modernization of insurance, predicting both efficiency gains and significant workforce changes as AI adoption grows.
  • Startup Insights: Practical advice is given for founders on capital raising, rapid iteration, and the importance of sales and human psychology in entrepreneurial success.

Referenced in this episode:
  • Companies/Organizations: Counterpart; Artificial Intelligence Risk, Inc.; Troutman Street Audio; Lazada; Button; OpenAI; Chubb; Edmund Hillary Fellows (EHF); MIT
  • Movies: Minority Report; Lord of the Rings

Copyright © 2025 by Artificial Intelligence Risk, Inc.

Deep Dive: Trustworthy, Multimodal, and Personalized AI Safety with Dr. Jindong Wang, Assistant Professor at William & Mary

Feb 24th, 2026 5:00 AM

In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. (aicrisk.com), interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.

In this deep dive episode, Alec sits down with Dr. Jindong Wang, Assistant Professor at William & Mary’s Data Science Department and former Microsoft researcher, to explore the nuanced landscape of trustworthy AI, multimodal safety, and personalized AI safety. Dr. Wang details his definition of trustworthy AI, focusing on privacy, robustness, transparency, and user-centric design, and explains why these are foundational for societal trust. The discussion delves into technical strategies such as differential privacy and federated learning, as well as the complex safety challenges arising from multimodal and multi-agent AI systems. Dr. Wang shares insights on emerging research, including benchmarks for risk management, adaptive and context-aware models, and the need for regulatory and ethical advances to keep pace with technological change. The episode concludes with an examination of the future risks of AI, the importance of AI literacy, and broad recommendations for education and governance as AI becomes more deeply woven into the fabric of society.

Summary:
  • Trustworthy AI Principles: Dr. Wang articulates the critical elements of trustworthy AI, emphasizing privacy, interpretability, and ethical safeguards.
  • Technical and Regulatory Strategies: The conversation covers advanced privacy-preserving techniques and the evolving regulatory frameworks needed for effective AI risk management.
  • Multimodal and Multi-Agent Safety: Unique risks in systems combining text, image, audio, and agentic collaboration are discussed, alongside the need for improved benchmarks and alignment.
  • Emergent Behaviors and Human Oversight: Dr. Wang highlights frameworks for detecting and correcting emergent behaviors, and underscores the ongoing necessity of human-in-the-loop governance and AI literacy.
  • Future Risks and Education: The episode closes with reflections on cultural bias, open-source risks, and the urgent need for scalable, personalized AI education.

Referenced in this episode:
  • Companies/Organizations: William & Mary; Artificial Intelligence Risk, Inc.; Microsoft; OpenAI; Google; Nvidia

Copyright © 2025 by Artificial Intelligence Risk, Inc.
