In the AI Risk Reward podcast, our host, Alec Crawford (@alec06830), Founder and CEO of Artificial Intelligence Risk, Inc. (aicrisk.com), interviews guests about balancing the risk and reward of Artificial Intelligence for you, your business, and society as a whole. Podcast production and sound engineering by Troutman Street Audio. You can find them on LinkedIn.
In this deep dive episode, Alec sits down with Dr. Jindong Wang, Assistant Professor in William & Mary’s Data Science Department and former Microsoft researcher, to explore trustworthy AI, multimodal safety, and personalized AI safety. Dr. Wang lays out his definition of trustworthy AI, centered on privacy, robustness, transparency, and user-centric design, and explains why these properties are foundational for societal trust. The discussion delves into technical strategies such as differential privacy and federated learning, as well as the distinct safety challenges that arise in multimodal and multi-agent AI systems. Dr. Wang shares insights on emerging research, including benchmarks for risk management, adaptive and context-aware models, and the regulatory and ethical advances needed to keep pace with technological change. The episode concludes with an examination of the future risks of AI, the importance of AI literacy, and broad recommendations for education and governance as AI becomes more deeply woven into the fabric of society.
Summary:
Trustworthy AI Principles: Dr. Wang articulates the critical elements of trustworthy AI, emphasizing privacy, interpretability, and ethical safeguards.
Technical and Regulatory Strategies: The conversation covers advanced privacy-preserving techniques and the evolving regulatory frameworks needed for effective AI risk management.
Multimodal and Multi-Agent Safety: Unique risks in systems combining text, image, audio, and agentic collaboration are discussed, alongside the need for improved benchmarks and alignment.
Emergent Behaviors and Human Oversight: Dr. Wang highlights frameworks for detecting and correcting emergent behaviors, and underscores the ongoing necessity of human-in-the-loop governance and AI literacy.
Future Risks and Education: The episode closes with reflections on cultural bias, open-source risks, and the urgent need for scalable, personalized AI education.
Referenced in this episode:
Companies/Organizations: Artificial Intelligence Risk, Inc. (aicrisk.com), William & Mary, Microsoft, Troutman Street Audio
Copyright © 2025 by Artificial Intelligence Risk, Inc.