S3E15: 'New Certification: Enabling Privacy Engineering in AI Systems' with Amalia Barthel & Eric Lybeck
In this episode, I'm joined by Amalia Barthel, founder of Designing Privacy, a consultancy that helps businesses integrate privacy into their operations, and Eric Lybeck, a seasoned independent privacy engineering consultant with over two decades of experience in cybersecurity and privacy. Eric recently served as Director of Privacy Engineering at Privacy Code. Today, we discuss: the importance of more training for privacy engineers on AI system enablement; why it's not enough for privacy professionals to focus solely on AI governance; and how their new hands-on course, the 'Privacy Engineering in AI Systems Certificate program,' can fill this need. Throughout our conversation, we explore the differences between AI system enablement and AI governance and why Amalia and Eric were inspired to develop this certification program. They share examples of what is covered in the course and outline the key takeaways and practical toolkits that enrollees will receive, including case studies, frameworks, and weekly live sessions throughout the program.

Topics Covered:
- How AI system enablement differs from AI governance, and why we should treat AI as part of privacy engineering
- Why Eric and Amalia designed an AI systems certificate course that bridges the gaps between privacy engineers and privacy attorneys
- The unique ideas and practices presented in this course and what attendees will take away
- Frameworks, cases, and mental models that Eric and Amalia will cover in their course
- How Eric & Amalia structured the Privacy Engineering in AI Systems Certificate program's coursework
- The importance of upskilling for privacy engineers and attorneys

Resources Mentioned:
- Enroll in the 'Privacy Engineering in AI Systems Certificate program' (save $300 with promo code PODCAST300; enter it into the Inquiry Form instead of purchasing the course directly)
- Read: 'The Privacy Engineer's Manifesto'
- Take the European Commission's free course, 'Understanding Law as Code'

Guest Info:
- Connect with Amalia on LinkedIn
- Connect with Eric on LinkedIn
- Learn about Designing Privacy
S3E14: 'Why We Need Fairness Enhancing Technologies Rather Than PETs' with Gianclaudio Malgieri (Brussels Privacy Hub)
Today, I chat with Gianclaudio Malgieri, an expert in privacy, data protection, AI regulation, EU law, and human rights. Gianclaudio is an Associate Professor of Law at Leiden University, Co-director of the Brussels Privacy Hub, Associate Editor of the Computer Law & Security Review, and co-author of the paper 'The Unfair Side of Privacy Enhancing Technologies: Addressing the Trade-offs Between PETs and Fairness.' In our conversation, we explore this paper and why privacy-enhancing technologies (PETs) are essential but not enough on their own to address digital policy challenges.

Gianclaudio explains why PETs alone are insufficient solutions for data protection and discusses the obstacles to achieving fairness in data processing, including bias, discrimination, social injustice, and market power imbalances. We discuss data alteration techniques such as anonymization, pseudonymization, synthetic data, and differential privacy in relation to GDPR compliance. Plus, Gianclaudio highlights the issues of representation for minorities in differential privacy and stresses the importance of involving these groups in identifying bias and assessing AI technologies. We also touch on the need for ongoing research on PETs to address these challenges and share our perspectives on the future of this research.

Topics Covered:
- What inspired Gianclaudio to research fairness and PETs
- How PETs are about power and control
- The legal / GDPR and computer science perspectives on 'fairness'
- How fairness relates to discrimination, social injustices, and market power imbalances
- How data obfuscation techniques relate to AI / ML
- How well the use of anonymization, pseudonymization, and synthetic data techniques addresses data protection challenges under the GDPR
- How the use of differential privacy techniques may lead to unfairness
- Whether the use of encrypted data processing tools and federated and distributed analytics achieves fairness
- The 3 main PET shortcomings and how to overcome them: 1) bias discovery; 2) harms to people belonging to protected groups and to individuals' autonomy; and 3) market imbalances
- Areas that warrant more research and investigation

Resources Mentioned:
- Read: 'The Unfair Side of Privacy Enhancing Technologies: Addressing the Trade-offs Between PETs and Fairness'

Guest Info:
- Connect with Gianclaudio on LinkedIn
- Learn more about the Brussels Privacy Hub
S3E13: 'Building Safe AR / VR / MR / XR Technology' with Spatial Computing Pioneer Avi Bar-Zeev (XR Guild)
In this episode, I had the pleasure of talking with Avi Bar-Zeev, a true tech pioneer and the Founder and President of The XR Guild. With over three decades of experience, Avi has an impressive resume, including launching Disney's Aladdin VR ride, developing Second Life's 3D worlds, co-founding Keyhole (which became Google Earth), co-inventing Microsoft's HoloLens, and contributing to the Amazon Echo Frames. The XR Guild is a nonprofit organization that promotes ethics in extended reality (XR) through mentorship, networking, and educational resources.

Throughout our conversation, we dive into privacy concerns in augmented reality (AR), virtual reality (VR), and the metaverse, highlighting the increased risks of data misuse and manipulation as the technology progresses. Avi shares his insights on how product and development teams can remain innovative while upholding responsible, ethical standards, with clear principles and guidelines to protect users' personal data. Plus, he explains the role of eye-tracking technology and why he advocates classifying its data as health data. We also discuss the challenges of anonymizing biometric data, informed consent, and the need for ethics training across the tech industry.

Topics Covered:
- The top privacy and misinformation issues that Avi has noticed in AR, VR, and metaverse data
- Why Avi advocates for classifying eye-tracking data as health data
- The dangers of unchecked AI manipulation, and why we need to be more aware of and in control of our online presence
- The ethical considerations for experimentation in highly regulated industries
- Whether it is possible to anonymize VR and AR data
- Ways product and development teams can be innovative while maintaining ethics and avoiding harm
- AR risks vs. VR risks
- Advice and privacy principles to keep in mind for technologists who are building AR and VR systems
- Understanding The XR Guild

Resources Mentioned:
- Read: 'The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology'
- Read: 'Our Next Reality'

Guest Info:
- Connect with Avi on LinkedIn
- Check out The XR Guild
- Learn about Avi's consulting services
S3E12: 'How Intentional Experimentation in A/B Testing Supports Privacy' with Matt Gershoff (Conductrics)
Today, I'm joined by Matt Gershoff, Co-founder and CEO of Conductrics, a software company specializing in A/B testing, multi-armed bandit techniques, and customer research and survey software. With a strong background in resource economics and artificial intelligence, Matt brings a unique perspective to the conversation, emphasizing simplicity and intentionality in decision-making and data collection.

In this episode, Matt dives into Conductrics' background, the role of A/B testing and experimentation in privacy, data collection at a specific and granular level, and the details of Conductrics' processes. He emphasizes the importance of intentionally collecting data with a clear purpose to avoid unnecessary data accumulation, and touches on the value of experimentation in conjunction with data minimization strategies. Matt also discusses his upcoming talk at the PEPR Conference and shares his hopes for what privacy engineers will learn from the event.

Topics Covered:
- Matt's background and how he started A/B testing and experimentation at Conductrics
- The major challenges that arise when companies run experiments, and how Conductrics works to solve them
- Breaking down A/B testing
- How being intentional about A/B testing and experimentation supports a high level of privacy
- The process of data collection, testing, and experimentation
- Collecting data while minimizing privacy risks
- The value of attending the USENIX Conference on Privacy Engineering Practice & Respect (PEPR24) and what to expect from Matt's talk

Guest Info:
- Connect with Matt on LinkedIn
- Learn more about Conductrics
- Read about George Box's quote, 'All models are wrong'
- Learn about the PEPR Conference
S3E11: 'Decision-Making Governance & Design: Combating Dark Patterns with Fair Patterns' with Marie Potel-Saville (Amurabi & FairPatterns)
In this episode, Marie Potel-Saville joins me to shed light on the widespread issue of dark patterns in design. With her background in law, Marie founded the 'FairPatterns' project with her award-winning privacy and innovation studio, Amurabi, to detect and fix large-scale dark patterns. Throughout our conversation, we discuss the different types of dark patterns, why it is crucial for businesses to prevent them from being coded into their websites and apps, and how designers can ensure that they are designing fair patterns in their projects.

Dark patterns are interfaces that deceive or manipulate users into unintended actions by exploiting cognitive biases inherent in decision-making processes. Marie explains how dark patterns harm our economic and democratic models, their negative impact on individual agency, and the ways that FairPatterns provides countermeasures and safeguards against the exploitation of people's cognitive biases. She also shares tips for designers and developers on designing and architecting fair patterns.

Topics Covered:
- Why Marie shifted her career path from practicing law to deploying and lecturing on Legal UX design & combating Dark Patterns at Amurabi
- The definition of 'Dark Patterns' and the difference between them and 'deceptive patterns'
- What motivated Marie to found FairPatterns.com, and her science-based methodology to combat dark patterns
- The importance of decision-making governance
- Why execs should care about preventing dark patterns from being coded into their websites, apps, & interfaces
- How dark patterns exploit our cognitive biases to our detriment
- What global laws say about dark patterns
- How dark patterns create structural risks for our economies & democratic models
- How 'Fair Patterns' serve as countermeasures to Dark Patterns
- The 7 categories of Dark Patterns in UX design & associated countermeasures
- Advice for designers & developers to ensure that they design & architect Fair Patterns when building products & features
- How companies can boost sales & gain trust with Fair Patterns
- Resources to learn more about Dark Patterns & countermeasures

Guest Info:
- Connect with Marie on LinkedIn
- Learn more about Amurabi
- Check out FairPatterns.com

Resources Mentioned:
- Learn about the 7 Stages of Action Model
- Take FairPatterns' course: Dark Patterns 101
- Read: Deceptive Design Patterns
- Listen to FairPatterns' Fighting Dar