An Interview with Mert Çuhadaroğlu
In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with Mert Çuhadaroğlu, Program Manager of BABL AI’s AI & Algorithm Auditor Certification Program, for an in-depth conversation about careers in AI governance, responsible AI, and what it really takes to become an AI auditor.

Mert shares his unique professional journey — from banking and finance, to career coaching and publishing, to becoming a leading figure in AI ethics and auditing. Now based in Istanbul, Mert plays a critical role in guiding and evaluating BABL AI certification students, including reviewing capstone projects and supporting professionals from a wide range of backgrounds.

Together, Shea and Mert discuss:

- What makes BABL AI’s AI & Algorithm Auditor Certification different from other AI governance programs
- Whether you need a technical background to succeed in AI auditing
- The real-world demand for AI auditors and AI governance professionals
- Common career paths for certification graduates
- What students actually do in the capstone project (including LLM and generative AI use cases)
- How BABL AI’s certifications compare to other industry credentials
- An overview of BABL AI’s additional certification programs, including EU AI Act Quality Management Systems, AI Governance for Business Professionals, and AI for Legal Professionals

This episode is both a behind-the-scenes look at BABL AI’s training philosophy and a practical guide for anyone considering a career in AI assurance, audit, or governance.

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
Diving into the AI Compliance Officer
What does a Chief AI Compliance Officer actually do—and does your organization secretly need one already? 🤔

In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by co-hosts Jeffery Recker and Bryan Ilg to unpack what it really takes to own AI risk, compliance, and governance inside a modern organization. Drawing on BABL AI’s AI Compliance Officer Program and years of audit work, they break down the real pain points leaders are facing and how to move from confusion to a concrete plan.

Whether you’ve just been handed “AI compliance” on top of your day job, or you’re building AI products and worried about regulations, this one’s for you.

In this episode, they discuss:

- What a Chief AI Compliance Officer role looks like in practice
  – Why it often lands on general counsel, chief compliance officers, or chief AI officers
  – Why this work can’t be owned by one person alone
- The 3-part structure of BABL AI’s AI Compliance Officer Program
  1. AI foundations – Governance, AI management systems, policies, procedures, and documentation
  2. Fractional AI Compliance Officer support – Access to BABL’s research and audit team on an ongoing basis
  3. Continuous monitoring & measurement – Keeping up with self-learning, changing AI systems over time
- How to build an AI system inventory and triage risk (see the sketch below)
  – Simple rubric for identifying high-, medium-, and low-risk AI systems
  – When to treat a system as “high risk” by default
  – Why simplicity is the antidote to feeling overwhelmed
- Key AI risks every organization should know about
  – Data poisoning and how malicious instructions can sneak into your systems
  – Shadow AI (employees using unapproved tools like personal ChatGPT accounts)
  – Model & data drift and why “it worked when we launched it” isn’t good enough
  – How these risks connect to reputation, regulatory exposure, and business strategy
- Why governance, risk & compliance (GRC) is not a “brake” on innovation
  – How good governance actually lets you move faster and more confidently
  – The value of a “SWAT team” style AI compliance function vs. going it alone

Who should watch/listen?

- General counsel, chief compliance officers, chief risk officers
- Chief AI / data / technology leaders
- Product owners building AI-powered tools
- Anyone who’s just been told: “You’re now responsible for AI compliance.” 🫠

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
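As a companion to the inventory-and-triage discussion above, here is a minimal sketch of what an AI system inventory with a high/medium/low rubric might look like in code. The fields, thresholds, and the “high risk by default” trigger are illustrative assumptions, not BABL AI’s actual rubric.

```python
# A minimal, hypothetical AI system inventory with a simple
# high/medium/low triage rubric. Fields and rules are illustrative
# assumptions, not BABL AI's actual methodology.

INVENTORY = [
    {"name": "resume screener", "affects_people": True,
     "regulated_domain": True, "self_learning": False},
    {"name": "marketing copy assistant", "affects_people": False,
     "regulated_domain": False, "self_learning": False},
    {"name": "support chatbot", "affects_people": True,
     "regulated_domain": False, "self_learning": True},
]

def triage(system):
    """Treat systems that affect people in a regulated domain as
    high risk by default; escalate on any single risk signal."""
    if system["affects_people"] and system["regulated_domain"]:
        return "high"
    if system["affects_people"] or system["self_learning"]:
        return "medium"
    return "low"

for s in INVENTORY:
    print(f"{s['name']}: {triage(s)}")
```

A coarse rubric applied consistently to every system in the inventory is exactly the kind of simplicity the episode calls the antidote to feeling overwhelmed.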
Implementing AI into Your Career
In this follow-up to our episode on AI, training, and the job market, BABL AI CEO Dr. Shea Brown is joined again by COO Jeffery Recker and Chief of Staff Emily Brown to get practical about one big question: How do you actually implement AI into your career… without losing yourself (or your job) in the process?

Whether you’re secure in your role, worried about layoffs, or actively changing careers, this episode focuses on tactical, realistic steps you can start taking this week.

🎧 In this episode, we cover:

- How to start using large language models (LLMs) and agents in your day-to-day work
- Concrete examples for roles like lawyers, accountants, marketers, operations, HR, teachers, and journalists
- What to do if your manager or organization is afraid of AI (data leaks, reputation risk, etc.)
- How to avoid “AI slop” and become the person who provides clear, minimal, high-value outputs
- A practical plan if you’ve been laid off or see layoffs coming: dual-track job search + AI pivot
- Using AI ethically for resumes, ATS filters, and video interviews—without fabricating experience
- Why you should make an “AI inventory” of tools already in your life (spoiler: it’s more than you think)
- How to set boundaries with AI so it augments your work, not your identity or mental health
- Mindset shifts for people who don’t feel “technical” but still need to adapt

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
AI, Training & the Job Market
In this latest episode of Lunchtime BABLing, hosted by BABL AI CEO Dr. Shea Brown with COO Jeffery Recker and—making her first appearance—Chief of Staff Emily Brown, we dig into what today’s AI-shaped job market really means for knowledge workers, how to build durable skills, and why “human in the loop” still matters—especially in marketing, ops, and hiring.

🎧 What you’ll learn

- Why AI anxiety is spiking—and how to respond with deliberate upskilling
- The #1 meta-skill: building a strong filter (concise, expert-informed outputs > AI slop)
- How AI literacy translates to any role (marketing, people ops, compliance, product)
- Practical ways to pivot toward Responsible AI / AI assurance / AI auditing
- Why specialization beats chasing every trend (go narrow, go deep, then pivot)
- The value of community: mentorship, peer feedback, and portfolio/capstone work

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
AI and Scheduling Optimization with Leon Ingelse
From lesson-planning to long-haul trucking, good schedules make the world run—literally. In this episode, BABL AI CEO Dr. Shea Brown sits down with Leon Ingelse, writer-researcher at Croatian optimization studio Dots & Lines, to unpack the hidden math, ethics, and human stories behind modern scheduling and routing.

🔑 What we cover

- Hard vs. soft constraints – why “can’t” and “prefer not to” need different math (see the sketch below)
- Digital twins – building a virtual copy of a business before you touch the real one
- Fairness & “karma” scheduling – balancing preferences over weeks, months, years
- Transparency & compliance – explaining a timetable (and the laws baked into it)
- Human-in-the-loop vs. full automation – when you still want a person pressing “publish”
- Optimization ≠ LLMs – where stochastic AI falls short and formal models shine
- The future of Dots & Lines and why bespoke solutions often beat off-the-shelf products

Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
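As referenced in the list above, here is a minimal sketch of why “can’t” and “prefer not to” need different math: hard constraints carve out the feasible set (no schedule that violates one is ever considered), while soft constraints become penalty terms in the objective. The workers, shifts, and weights are illustrative assumptions, not Dots & Lines’ actual model.

```python
# Hard vs. soft constraints in a toy shift-scheduling problem.
# All names and weights here are hypothetical.
from itertools import product

WORKERS = ["ana", "ben"]
SHIFTS = ["mon", "tue"]

# Hard constraint ("can't"): ana is simply unavailable on Tuesday.
UNAVAILABLE = {("ana", "tue")}

# Soft constraint ("prefer not to"): ben would rather not work Monday,
# expressed as a penalty weight instead of a prohibition.
PREFERENCE_PENALTY = {("ben", "mon"): 3}

def feasible(assignment):
    """Hard constraints: any violation makes the schedule invalid."""
    return all((worker, shift) not in UNAVAILABLE
               for shift, worker in assignment.items())

def cost(assignment):
    """Soft constraints: violations are allowed but scored."""
    return sum(PREFERENCE_PENALTY.get((worker, shift), 0)
               for shift, worker in assignment.items())

# Enumerate every one-worker-per-shift schedule (tiny toy search space),
# discard the infeasible ones, and pick the cheapest of the rest.
candidates = [dict(zip(SHIFTS, combo))
              for combo in product(WORKERS, repeat=len(SHIFTS))]
best = min((a for a in candidates if feasible(a)), key=cost)
print(best, "penalty:", cost(best))
```

The asymmetry is the point: no penalty weight, however large, can put ana on a Tuesday shift, while ben’s Monday preference merely tips the objective, which is why the two kinds of constraint need different math.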