The ancient Chinese concept of Yin and Yang suggests that harmony comes from carefully balancing opposing forces. While today's topic isn't about Chinese philosophy, it resonates with this idea of balance, especially in the context of AI.
In this episode, I'm joined by Diederik Roijers, a senior researcher at the AI Lab at Vrije Universiteit Brussel and a specialist in Multi-Objective Reinforcement Learning (MORL).
Diederik brings a unique perspective on how we can balance competing goals in decision-making, rather than focusing narrowly on optimising just one.
How should we navigate trade-offs, like maximising rewards versus minimising risks? Diederik suggests that pursuing "good enough" outcomes might be more practical, and more ethical, than chasing perfection. His vision is hopeful yet pragmatic: by making deliberate choices about how AI is designed and deployed, we can reap AI's societal benefits. He believes in the possibility of creating AI systems that are innovative, transparent, maintainable, and aligned with societal values. Together, we explore what it might take to ensure AI serves everyone, not just a select few.
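To make the trade-off idea concrete, here is a minimal sketch in Python of the kind of tension MORL studies, assuming a toy two-objective setting (reward versus risk). The candidate outcomes, weights, and thresholds are hypothetical, and linear scalarisation is just one of several approaches the field considers; this is an illustration, not Diederik's method.

```python
# A toy two-objective setting: maximise reward while minimising risk.
# All names and numbers are illustrative.

from dataclasses import dataclass

@dataclass
class Outcome:
    name: str
    reward: float  # objective to maximise
    risk: float    # objective to minimise

def scalarise(o: Outcome, w_reward: float, w_risk: float) -> float:
    """Linear scalarisation: fold two objectives into a single score."""
    return w_reward * o.reward - w_risk * o.risk

def good_enough(o: Outcome, min_reward: float, max_risk: float) -> bool:
    """A satisficing check: accept any outcome meeting both thresholds."""
    return o.reward >= min_reward and o.risk <= max_risk

candidates = [
    Outcome("aggressive", reward=10.0, risk=8.0),
    Outcome("balanced",   reward=7.0,  risk=3.0),
    Outcome("cautious",   reward=4.0,  risk=1.0),
]

# Pure optimisation commits to one weighting and picks a single winner...
best = max(candidates, key=lambda o: scalarise(o, w_reward=1.0, w_risk=1.0))

# ...while satisficing keeps every "good enough" option on the table.
acceptable = [o for o in candidates
              if good_enough(o, min_reward=5.0, max_risk=4.0)]

print("optimal under these weights:", best.name)
print("good enough:", [o.name for o in acceptable])
```

The contrast between `best` and `acceptable` mirrors Diederik's point: a single weighting locks in one view of the trade-off, while a "good enough" threshold admits any outcome that respects the limits we actually care about.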