Tech Law Talks

Technology

AI explained: The EU AI Act, the Colorado AI Act and the EDPB

2025-03-04

Partners Catherine Castaldo, Andy Splittgerber, Thomas Fischl and Tyler Thompson discuss various recent AI acts around the world, including the EU AI Act and the Colorado AI Act, as well as guidance from the European Data Protection Board (EDPB) on AI models and data protection. The team presents an in-depth explanation of the different acts and points out the similarities and differences between the two. What should we do today, even though the Colorado AI Act is not in effect yet? What do these two acts mean for the future of AI?


Transcript:

Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. 

Catherine: Hello, everyone, and thanks again for joining us on Tech Law Talks. We're here with a really good array of colleagues to talk to you about the EU AI Act, the Colorado AI Act, and the EDPB guidance, and we'll explain soon what all those initials mean. But I'm going to let my colleagues introduce themselves. Before I do that, though, I'd like to say: if you like our content, please consider giving us a five-star review wherever you find us. And let's go ahead and first introduce my colleague, Andy. 

Andy: Yeah, hello, everyone. My name is Andy Splittgerber. I'm a partner at Reed Smith in the Emerging Technologies Department based out of Munich in Germany. And looking forward to discussing with you interesting data protection topics. 

Thomas: Hello, everyone. This is Thomas, Thomas Fischl in Munich, Germany. I also focus on digital law and privacy. And I'm really excited to be with you today on this podcast. 

Tyler: Hey everyone, thanks for joining. My name is Tyler Thompson. I'm a partner in the emerging technologies practice at Reed Smith based in the Denver, Colorado office. 

Catherine: And I'm Catherine Castaldo, a partner in the New York office. So thanks to all my colleagues. Let's get started. Andy, can you give us a very brief overview of the EU AI Act? 

Andy: Sure, yeah. It came into force in August 2024, and it is a law mainly about the responsible use of AI. Generally, it is not really focused on data protection matters; rather, it sits next to the world-famous European General Data Protection Regulation. It has a couple of passages where it refers to the GDPR and also sometimes states that certain data protection impact assessments have to be conducted. Other than that, it has its own concept dividing AI systems into different categories: prohibited AI, high-risk AI, and then normal AI systems. And we're just expecting new guidance on how authorities and the Commission interpret what AI systems are, so watch out for that. There are also special rules on generative AI, and then some rules on transparency requirements when organizations use AI towards end customers. Depending on these risk categories, there are certain requirements, and attaching to each of these categories, developers, importers, and also users (as in organizations deploying AI) have to comply with certain obligations around accountability, IT security, documentation, checking, and of course, human intervention and monitoring. This is the basic concept, and the rules started to kick in on February 2nd, 2025, when prohibited AI must no longer be used in Europe. The next bigger wave will be on August 2nd, 2025, when the rules on generative AI kick in. So organizations should be prepared to comply with these rules now and get familiar with this new type of law. It's kind of like a new area of law. 

Catherine: Thanks for that, Andy. Tyler, can you give us a very brief overview of the Colorado AI Act? 

Tyler: Sure, happy to. So the Colorado AI Act is really the first comprehensive AI law in the United States, passed at the end of the 2024 legislative session. It covers developers or deployers that use a high-risk AI system. Now, what is a high-risk AI system? It's a system that makes a consequential decision. What is a consequential decision? These can include things like education decisions, employment opportunities, employment-related decisions, financial lending service decisions, essential government services, healthcare services, housing, insurance, legal services. So that consequential decision piece is fairly broad. The effective date is February 1st of 2026, and the Colorado AG is going to be enforcing it. There's no private right of action here, but violating the Colorado AI Act is considered an unfair and deceptive trade practice under Colorado law. So that's where you get the penalties of the Colorado AI Act: it's tied into the Colorado deceptive trade practice law. 

Catherine: That's an interesting angle. And Tom, let's turn to you for a moment. I understand that the European Data Protection Board, or EDPB, has also recently released some guidance on data protection in connection with artificial intelligence. Can you give us some high-level takeaways from that guidance? 

Thomas: Sure, Catherine, and it's very true that the EDPB has just released a statement; it was actually released in December of last year. And yeah, it is that highly anticipated statement on AI models and data protection. This statement of the EDPB follows a much-discussed paper published by the German Hamburg Data Protection Authority in July of last year, and I also wanted to briefly touch upon this paper, because the Hamburg Authority argued that AI models, especially large language models, are anonymous when considered separately: they do not involve the processing of personal data. To reach this conclusion, the paper decoupled the model itself from, firstly, the prior training of the model, which may involve the collection and further processing of personal data as part of the training data set, and secondly, the subsequent use of the model, where a prompt may contain personal data and output may be used in a way that means it represents personal data. Interestingly, this paper considered only the AI model itself and concluded that the tokens and values that make up the inner processes of a typical AI model do not meaningfully relate to or correspond with information about identifiable individuals. Consequently, the model itself was classified as anonymous, even if personal data is processed during the development and the use of the model. The recent EDPB statement does not follow this relatively simple and secure framework proposed by the German authority. The EDPB statement responds to a request from the Irish Data Protection Commission and gives a kind of framework, particularly with respect to certain aspects. It actually responds to four specific questions. The first question was: under what conditions can AI models be considered anonymous? And the EDPB says, well, yes, they can be considered anonymous, but only in some cases. 
So it must be impossible, with all likely means, to obtain personal data from the model, either through attacks aimed at extracting the original training data or through other interactions with the AI model. The second and third questions relate to the legal basis of the use and the training of AI models, and the EDPB answered those questions in one answer. The statement indicates that the development and use of AI models can generally be based on the legal basis of legitimate interest. The statement then lists a variety of different factors that need to be considered in the assessment scheme according to Article 6 GDPR. So again, it refers to an individual case-by-case analysis that has to be made. And finally, the EDPB addresses the highly practical question of what consequences it has for the use of an AI model if it was developed in violation of data protection regulations. The EDPB says, well, this partly depends on whether the AI model was first anonymized before it was disclosed to the model operator. Otherwise, the model operator may need to assess the legality of the model's development as part of their accountability obligations. So quite an interesting statement. 

Catherine: Thanks, Tom. That's super helpful. But when I read some commentary on this paper, there's a lot of criticism that it's not very concrete and doesn't provide actionable guidance to businesses. Can you expand on that a little bit and give us your thoughts? 

Thomas: Yeah, well, as is sometimes the case with these EDPB statements, which necessarily reflect the consensus opinion of authorities from 27 different member states, the statement does not provide many clear answers. Instead, the EDPB offers kind of indicative guidelines and criteria and calls for case-by-case assessments of AI models to understand whether and how they are affected by the GDPR. And interestingly, someone has actually counted how often the phrase case-by-case appears in the statement: it appears 16 times, and "can" or "could" appears 161 times. Obviously, this is likely to lead to different approaches among data protection authorities, but maybe it's also just an intended strategy of the EDPB. Who knows? 

Catherine: Well, as an American, I would read that as giving me a lot of flexibility. 

Thomas: Yeah, true. 

Catherine: All right, let's turn to Andy for a second. Andy, also in view of the AI Act, what do you now recommend organizations do when they want to use generative AI systems? 

Andy: That's a difficult question after 161 cans and coulds. We always try to give practical advice. If you now look at the AI Act plus this EDPB paper, or generally the GDPR, there are a couple of items where organizations can prepare and need to prepare. First of all, organizations using generative AI must be aware that a lot of the obligations are on the developers. The developers of generative AI definitely have more obligations, especially under the AI Act. For example, they have to create and maintain the model's technical documentation, including the training and testing processes, and monitor the AI system. They must also, and this can be really painful and will be painful, make available a detailed summary of the content that was used for training the model. And this goes very much into copyright topics as well. So there are a lot of obligations, and none of these are on the using side. If organizations use generative AI, they don't have to comply with all of this, but they have to, and that's our recommendation, ensure in their agreements when they license the model or the AI system that they get confirmation from the developer that the developer complies with all of these obligations. That's kind of like supply chain compliance in AI. So that's one of the aspects from the using side: make sure in your agreement that the provider complies with the AI Act. Another item for the agreement when licensing generative AI systems, attaching to what Thomas said, is getting a statement from the developer on whether or not the model itself contains personal data. The ideal answer is no, the model does not contain personal data, because then we don't have the poisonous tree: if the developer was not in compliance with the GDPR or data protection laws when doing the training, there is a cut. If the model does not contain any personal data, then this cannot infect the later use by the using organization. 
So this is a very important statement. We have not seen this in practice very often so far, and it is quite a strong commitment developers are asked to give, but it is something at least to be discussed in the negotiations. So that's the second point. A third point for the agreement with the provider is whether or not the usage data is used for further training, which can create data protection issues and might require using organizations to solicit consent or other justifications from their employees or users. And then, of course, having in place a data processing agreement with the provider or developer of the generative AI system if it runs on someone else's systems. These are all items for the contracts, and we think this is something that needs to be tackled now, because it always takes a while until the contract is negotiated and in place. On top of this, as I said, the AI Act obligations on the using side are rather limited. There are only some transparency obligations for using organizations: for example, to inform their employees that they're using AI, or to inform end users that a certain text or photo or article was created by AI. So like a tag, "this was created by AI," being transparent that AI was used to develop something. And then on top of this, the general GDPR compliance requirements apply, like transparency about what personal data is processed when the AI is used, justification of processing, adding the AI system to your records of processing activities, and also checking whether a data protection impact assessment is required. This will mainly be the case if the AI has an intensive impact on the personal rights of data subjects. So these are the general requirements. Takeaways: check the contracts, check the limited transparency requirements under the AI Act, and comply with what you know already under the GDPR. 

Tyler: It's interesting because there is a lot of overlap between the EU AI Act and the Colorado AI Act. But Colorado does have those robust impact assessment requirements. You know, you've got to provide notification, you have to provide opt-out rights and appeal, and you do have some of that publicly facing notice requirement as well. The one thing that I want to highlight that's a little bit different: we have an AG notification requirement. So if you discover that your artificial intelligence system has been creating an effect that could be considered algorithmic discrimination, you have an affirmative duty to notify the attorney general. So that's something that's a little bit different. But I think overall, there's a lot of overlap between the Colorado AI Act and the EU AI Act. And I like Andy's analogy of the supply chain. Colorado as well: yes, it applies to the developers, but it also applies to deployers. And on the deployer side, it is kind of that supply chain type of analogy: these are things that you as a deployer need to go back and look at with your developer, make sure you have the right documentation, that you've checked the right boxes there and have done the right things. 

Catherine: Thanks for that, Tyler. Do you think we're entering into an area where the U.S. States might produce more AI legislation? 

Tyler: I think so. Virginia has proposed a version of basically the Colorado AI Act. And I honestly think we could see the same thing with these AI bills that we have seen with privacy on the US side, which is kind of a state-specific approach. Some states adopting the same or highly similar versions of the laws of other states, but then maybe a couple states going off on their own and doing something unique. So it would not be surprising to me at all, at least in the short to midterm. We have a patchwork of AI laws throughout the United States just based on individual state law. 

Catherine: Thanks for that. And I'm going to ask a question to both Tyler and Tom and Andy. Either one of you can answer, whoever thinks of this. But we've been reading a lot lately about DeepSeek and all the cyber insecurities, essentially, with utilizing a system like that and some failures on the part of the developers there. Is there any security requirement in either one of the EU or Colorado-based AI acts for deploying or developing a new system? 

Tyler: Yeah, for sure. So where your security requirements are going to come in, I think, is in the impact assessment piece, right? Where, you know, when you have to look at your risks and how this could affect an individual, whether through a discrimination issue or other type of risk to it, you're going to have to address that in the discrimination piece. So while it's not like a specific security provision, there's no way that you're going to get around some of these security requirements because you have to do that very robust impact assessment, right? Part of that analysis under the impact assessment is known or reasonably foreseeable risks. So things like that, you're going to have to, I would say, address via some of the security requirements facing the AI platform. 

Catherine: Great. And what about from the European side? 

Andy: Yes, it's similar from the European side, or perhaps even a bit more: robustness, cybersecurity, and IT security are a major portion of the AI Act. So that's definitely a very, very important obligation and duty that must be complied with. 

Catherine: And I would think too under GDPR, because you have to ensure adequate technical and organizational measures that if you had personal information going into the AI system, you'd have to comply with that requirement as well, since they stand side by side. 

Andy: Exactly, exactly. And then there's under both also notification obligations if something goes wrong. 

Catherine: Well, good to know. All right, well, maybe we'll do a future podcast on the impact of the NIST AI risk management framework and the application to both of these large bodies of law. But I thank all my colleagues for joining us today. We have time for just a quick final thought. Does anyone have one? 

Andy: A thought from me: after the AI Act came into force, as a practical European I'm worried that we're killing the AI industry and innovation in Europe. It's kind of good to see that at least some states in the U.S. follow a bit of a similar approach, even if it's, you know, different. I haven't given up hope for a more global solution; perhaps the AI Act will also be adjusted a bit to come closer to a global solution. 

Tyler: On the U.S. side, I'd say, look, my takeaway is start now, start thinking about some of this stuff now. It can be tempting to say it's just Colorado, we have till February of 2026. But I think a lot of the things that the Colorado AI Act and even the EU AI Act are requiring are arguably things that you should be doing anyway. So I would say start now, especially, as Andy said, on the contract side, if nothing else. Start thinking about doing a deal with a developer or a deployer: What needs to be in that agreement? How do we need to protect ourselves? And how do we need to look at the regulatory space to future-proof this, so that when we come to 2026, we're not amending 30 or 40 contracts? 

Thomas: And maybe a final thought from my side. The EDPB statement actually only answers a few questions. It doesn't touch other very important issues like automated decision-making; there is nothing in the document about that. There is not really anything about the use of sensitive data, and data protection impact assessments are not addressed. So there are a lot of topics that remain unclear; at least there is no guidance yet. 

Catherine: Those are great views and I'm sure really helpful to all of our listeners who have to think of these problems from both sides of the pond. And thank you for joining us again on Tech Law Talks. We look forward to speaking with you again soon. 

Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts. 

Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers. 

All rights reserved. 

Transcript is auto-generated.

