FIR #501: AI and the Rise of the $400K Storyteller
AI isn’t replacing communicators — it’s amplifying the value of communication, especially storytelling and strategic writing. In this short, midweek FIR episode, Neville and Shel explore how the hottest jobs in tech are increasingly about telling stories, not writing code, with Netflix, Microsoft, Adobe, Anthropic, and OpenAI all hiring communications and storytelling teams at salaries ranging from six figures up to $775,000 per year. Even the AI labs themselves are posting compensation packages around $400K for storytelling and communications roles, signaling that they understand the irreplaceable human value of meaning-making in an age of automated content generation. The distinction Neville and Shel highlight between traditional messaging and true storytelling proves critical: conventional communications start with what the brand wants to say, while storytelling starts with what audiences actually care about. The strongest communicators will be those who move beyond prescriptive messaging to tell genuine human stories.

Links from this episode:
The unexpected winners of the AI slop boom: Word nerds
Why OpenAI Is Offering $400K for Storytelling Roles
The Great Communicators Are Human
Human Storytellers Worth $400k+ Amidst AI Boom
Storytelling Wins In The Age Of AI: 3 Valuable Communication Tools
How Storytelling Unlocks Career Pathways In The Age Of AI
The Bionic Storyteller: How AI can amplify HR’s human voice
Businesses hiring storytellers to ‘cut through the AI slop’
Storybrand
Building a Story Brand 2.0 (Book)

The next monthly, long-form episode of FIR will drop on Monday, February 23. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com. Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog. Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients. Raw Transcript: Neville Hobson: Hi everyone and welcome to For Immediate Release. This is episode 501. I’m Neville Hobson. Shel Holtz: I’m Shel Holtz. And here’s some good news for communicators. Artificial intelligence isn’t replacing us, it’s amplifying the value of communication itself, especially storytelling and strategic writing. If you’ve been feeling that AI spells doom for writers and communicators, the labor market is telling a very different story. We’ll tell you that story right after this. Let’s start with something concrete. The hottest jobs in tech right now aren’t about writing code or managing data. They’re about telling clear, compelling human stories. Recent hiring trends show that giants like Netflix, Microsoft, Adobe, Anthropic, and OpenAI are aggressively expanding communications and storytelling teams, with roles offering salaries from six figures up to as much as $775,000 a year for senior leadership positions, without any requirement to write a line of code. Why? Because AI has flooded the internet with cheap automated output, what some observers are calling slopaganda. I love this word, slopaganda. Hadn’t heard it before I read that article. Millions of words get generated every minute. Most of it lacks clarity, insight, context, and meaning, exactly the things that real communicators deliver. Companies are recognizing that the ability to cut through that noise with strategic narrative creates trust, authority, and differentiation in the market. Even the AI labs themselves, including OpenAI and Anthropic, are willing to pay top dollar for storytellers.
One analysis noted that compensation packages of nearly $400,000 are being posted specifically for storytelling and communications roles at these firms, exactly because humans excel at crafting nuanced messages that machines simply can’t. So here’s the underlying shift communicators need to understand. AI automates tasks, but meaning-making remains deeply human. Machines can generate text, but they don’t know which stories matter to whom or why. And we keep hearing communicators and writers venting on LinkedIn about machines lacking judgment, empathy, context, and strategic framing, all those hallmarks of great communication. That’s exactly what they’re looking for. And in an age of automated noise, those abilities create value. Shel Holtz: That’s a theme echoed across industry thinking. A Forbes piece on storytelling in the age of AI highlights that storytelling is one of the most powerful tools we have and one of the most powerful tools leaders have. It helps audiences remember facts wrapped in emotion, connect data to human experience, and anchor organizational vision in something people can feel and act on. Another Forbes analysis argues that storytelling isn’t just about communication, it’s also a career pathway. When individuals and organizations tell clear stories about evolving roles, skills development, and future opportunities, they make the future feel navigable rather than threatening. This matters for internal communication too. HR and people leaders are increasingly using narrative to frame change and build resilience. When employees feel adrift amid all the talk of AI disruption, a coherent story about how the organization is evolving and where people fit in is one of the most effective ways to build trust and engagement. Even the broader hype narrative around AI’s impact on jobs, including viral essays warning of sweeping automation, underscores this point.
Some of the loudest voices talking about disruption are exactly those using storytelling to shape a narrative about the future. But the data so far suggests that the real impact of AI isn’t mass job elimination, it’s task transformation, with humans shifting into roles that emphasize strategy, creativity, judgment, and communication, exactly the space where we storytellers thrive. So for communicators who worry that AI might make them obsolete, here’s the reality. Your craft isn’t threatened, it’s elevated. AI makes routine work easier, but narrative leadership, strategic framing, and contextual clarity are becoming even more essential. The labor market isn’t pulling back its investment in communicators, it’s paying up for them, because the ability to tell a clear human story is now a competitive advantage. With the world drowning in automated content, meaning is scarce. And communicators are the ones who turn noise into narrative, confusion into clarity, and information into influence. That’s not something AI replaces, it’s something only humans can do well. And that’s why, even in an AI era, talented communicators are irreplaceable and more valuable than ever. And by the way, if the tech companies feel the need to cut through the noise created by all that slopaganda, I got to use that word again, other industries will figure out sooner or later that they need to as well. Neville Hobson: Listening to what you’re saying there, Shel, what strikes me is how similar themes are now surfacing here in the UK. The Times ran a piece recently about companies hiring chief storytellers, specifically to cut through what they and everyone else calls AI slop. What’s interesting is that it isn’t framed as anti-AI, it’s framed as a response to saturation. When content becomes easy and abundant, meaning becomes scarce. Recruiters are saying demand for storytelling roles has doubled in the past year, and the way they define storytelling isn’t about clever copy.
It’s about starting with what people care about rather than what the brand wants to say. There’s also a strong internal dimension, storytelling being used to align remote teams, break down silos, and create shared culture. So I’m left wondering whether this chief storyteller trend is something genuinely new or whether we’re simply rediscovering the strategic craft of communication in an AI-saturated environment. And finally, if AI makes it easier to generate content, does that mean communicators need to become curators of meaning rather than producers of material? Shel Holtz: Interesting question. And I think that this is somewhat different. We have been telling stories, but I think you have to define what we mean by storytelling here, because we write stories that aren’t really stories. It’s just a term that we use as a synonym for article. I wrote a story the other day. Was it really a story or was it a communication piece? Shel Holtz: There are so many stories that we could tell in the world of organizational communication that are really just prescriptive or a statement of fact. We’re getting the news out, but we don’t have a beginning, middle, and end. We certainly don’t have a protagonist. We’re not looking at Joseph Campbell’s hero’s journey and trying to figure out how to apply that to the tales that we tell. There is a guy out there, Donald Miller, who’s got this thing called Story Brand, which is fascinating, that is designed to put your customer into the story as the hero and the company as the mentor or the guide who helps the hero achieve their goal through the journey. And I really like it. And there are free tools that you can use to map all this out for your brand or your product. But it gets us away from saying, isn’t this product great, look how great it works, and toward telling a genuine story instead.
And I think this is why narrative and story, rather than communications or public relations, are the labels that are being attached to these job descriptions that are all over LinkedIn. When I saw the story, I went and looked, and there are dozens and dozens of them. And the salaries are jaw-dropping when you consider that the typical, you know, communication manager is making about $108,000 a year, according to one of these articles. You know, $400,000 with all benefits, with three days of remote work, because I read these job descriptions. This is very encouraging for our profession. But if you’re the kind of communicator out there who writes these articles that just say, we have an employee assistance program, it offers the following bulleted services, you should call if you have emotional or financial problems, that’s not what they’re looking for. They’re not looking for you. They’re looking for the guy who wrote that article that I’ve referenced 50 times on this podcast about the employee who was divorced and depressed and started drinking and gained 100 pounds and finally called the EAP when he hit rock bottom. And they worked with him to find something that really excited him, and it turned out to be ballroom dancing, and now he’s a national champion traveling around the world. He’s lost more than a hundred pounds. He’s quit smoking and drinking, and all because of the EAP. Which of those two stories are you more likely to read? It’s absolutely the story of the guy who used the EAP. People can relate to that. People don’t even read the crap that says we have one and here’s what it offers. So I think cutting through the noise with genuine stories that tell the tale of what the organization is trying to convey, that’s what they’re looking for. Neville Hobson: So interesting. So the title Chief Storyteller, that sounds new and fashionable, right? But when you unpack it, much of it looks like what strong communication leaders have always done.
Alignment, translation, cohesion, behavioral framing. That opens up a richer debate, I think. Is this a genuine new C-suite function or a rebranding of strategic communication for an AI era? It sounds a lot like the latter to me. Shel Holtz: It sounds a lot like the latter, but I think there’s a bit of the former as well, because we’re talking about a transition of role. I think communicators who are employed right now need to start telling more stories if they want to keep their jobs, because if all you’re doing is writing the stuff that can be written by AI just by giving it the facts and saying, turn this into an article, I think you’re toast. But if you can tell a genuine story that moves people, then your job is probably secure and you may be qualified to apply for one of these $400,000-a-year jobs. I don’t think they’re going to hire the average communicator who’s doing a pretty good job at their organization, even if they’re at the C-suite level, if they can’t put together the kind of narrative that these companies are looking for. Certainly there are companies that are doing this, and there are communicators in those companies that are doing this, but I don’t think it’s most. I think most are cranking out the typical content that is just conveying the news. And I think basic journalism, the who, what, when, where, why: if I can pop that into Claude or ChatGPT or Gemini, especially if I’ve trained it on my writing style, which I have, by the way, on Gemini, it’ll turn out a passable article that you can then edit in 15 minutes and be done. That’s not what they’re looking for. I think they would argue that that probably is slopaganda. And this is exactly the noise that they’re looking for somebody to help them cut through. Neville Hobson: So one of the strongest lines in The Times piece is the distinction between messaging and meaning. Traditional comms starts with what the brand wants to say, says The Times.
Storytelling starts with what people care about. That’s a strategic pivot, I would say. So messaging is output-driven, meaning is audience-driven. AI is good at output, humans are better at contextual meaning. So is that it? Should we now be looking at this as a shift from messaging to meaning? Shel Holtz: Absolutely. I think that’s exactly what we’re talking about here. And the focus on the audience. And again, this is exactly what Donald Miller’s Story Brand does, and he has paid us no consideration for the reference here. He puts the customer at the center of the company or the brand’s story. And I think that’s what’s different. I think that’s the transition or the pivot that communicators need to make. I don’t think it’s difficult. And if you haven’t written fiction, I would suggest that you read about Joseph Campbell’s Hero’s Journey. There’s a wonderful book, I can’t remember the author’s name, called The Writer’s Journey; I’ve read it twice. He’s focused on writing fiction, but he talks about how you apply the hero’s journey to things like Star Wars and The Wizard of Oz. And he has these tropes that everybody is familiar with, that he uses to explain how to write this way. And he tells you that every successful film in particular, and novels as well, uses this formula. And I read it twice because I really had to unpack it in a way that worked in organizational communication rather than novel and film writing. But it does, it works. And then I found Donald Miller and his Story Brand and I said, there it is right there: fill in the boxes, who is the mentor, who are the other characters that appear in this formula. It’s well worth taking a look at, and his book is worth reading as well. I’ll have a link to Story Brand in the show notes. Neville Hobson: Yeah. So I’m just going through my mind thinking about where this conversation we’re having here is heading.
And if we look at it, which to me makes complete sense, the shift is definitely from messaging to meaning, something we’ve talked about quite a bit, and the Times article and the Business Insider piece, I think, support this. The Times piece says the problem isn’t noise as such, it’s indistinguishable noise. And that makes sense; that kind of metaphorical phrase reminds me of conversations we’ve had before about what a communicator can be, using artificial intelligence to enhance their abilities. So I’m just trying to see what the path ahead looks like for this. It seems to me that AI is going to play an even more significant role in the future for communicators who are shifting from messaging to meaning. And I must admit, I don’t believe the scenario you painted earlier, the communications person who has been doing the same stuff for years saying, that’s fine, keep doing that because there’s a market for it. I don’t think that’s true. I think AI is the threat for those people. So if AI is good at output, according to the concluding points in The Times piece, humans are better at contextual meaning. That surely is what people are looking for when they pay half a million bucks, or whatever the salary is. Shel Holtz: Yeah, I agree with that. Neville Hobson: …to a chief storyteller. I think there’s huge confusion here, and inserting the phrase chief storyteller into the picture, where it’s just a fancy job title basically, doesn’t help with this, it seems to me. It’s inevitable, I suppose, that you’re going to get that. As you said, I’ve seen it all over LinkedIn, that chief storyteller is an executive function. Yeah, but that’s not the right interpretation of that, I don’t believe. So it doesn’t help clarify what the picture is here. Shel Holtz: I don’t know.
I would be very curious to look at the org charts of the companies that are seeking these positions to see if it is separate and distinct from the public relations or communications function. We talked several weeks ago about the proposed new definition of public relations, and it goes way beyond this. I’m thinking, and I don’t know this for a fact, Shel Holtz: But I’m thinking that what these companies are doing is creating a new function that will live alongside, and presumably under the same umbrella as, the PR or corporate communications department, which is building relationships with key stakeholders. But the storytellers are out there creating the content that’s going to cut through the slop, aimed at particular audiences who are ripe for this kind of storytelling. I was about to say messaging; they’re trying to get away from messaging. And the PR department will continue to do the earnings releases and the thought leadership and the negotiations with critics and all of the stuff that PR typically does. I don’t get the impression that these jobs sit in the public relations department. Neville Hobson: No, I would say not, particularly as, for instance, one point that The Times made is that there’s a significant element of team building and so forth, so an internal focus in organizations for this sort of role. It’s not just a public relations external function by any means. It’s interesting you mentioned the definition. I published a post on my blog this morning about that actually, looking at what the PRCA has done. It’s only one professional body, and I’m thinking this isn’t going to fly unless everyone gets behind it. So that’s a different topic from what we’re talking about, though it sort of fits in there, because the role of… I just have a problem with this chief storyteller title, frankly. It doesn’t really fit what this role actually is.
And I do believe, and you’ve partly prompted this kind of clarity in my thinking on this, that this is about messaging, it’s not about content production. That’s what AI does. And the interpretation of it, the meaning and significance of it, is what the human does. Now, suppose you can present your skill as something in that area to an organization that’s willing to pay $400,000. Again, I’d be interested to see the job description behind that salary level. I haven’t seen that; I’ve not actually looked, I must admit. But it’d be interesting to see how they’ve described the role they’re willing to pay 400 grand for. So I would imagine they’re absolutely swamped with applications, which is where AI comes into play, to sift out all the no-hopers, basically. Neville Hobson: But it is interesting, it is very interesting. And this could be a great catalyst for the discussion about the role of a communicator in organizations in light of this development. That seems to me to be something good to have. Shel Holtz: I can’t imagine somebody at OpenAI or Anthropic sifting through hundreds of resumes, or probably thousands of resumes. They’re absolutely feeding them all to AI. I’d be shocked if not. And for the record, there are also some of these positions that don’t have storytelling in the title. I saw a couple that had narrative in the title instead. But I think they’re all getting at this notion of telling a powerful story that evokes emotions and pulls audiences in, rather than advertising or traditional marketing speak. That’s what’s going to cut through the, I get to say it again, slopaganda. And that’ll be a 30 for this episode of For Immediate Release. The post FIR #501: AI and the Rise of the $400K Storyteller appeared first on FIR Podcast Network.
ALP 295: Building the ideal agency: wrestling with the tough decisions
David C. Baker recently published a fascinating thought experiment about what he’d do if starting an agency from scratch today—and it’s packed with provocative ideas worth serious consideration. His article offers a comprehensive blueprint covering everything from organizational structure to compensation philosophy, and much of it aligns with how Chip and Gini think about building sustainable agencies. But the most interesting conversations happen when smart people disagree, which is why this episode focuses on the handful of points where Chip and Gini see things differently. Not because Baker’s ideas are bad, but because they expose the tension between aspirational agency management and the messy realities of running a business with real budgets, real people, and real client demands. In this episode, Chip and Gini tackle mandatory one-month sabbaticals for every employee, open-book finances published on your website, 360-degree reviews, and incentive compensation structures. They dig into why ideas that sound compelling in theory often create unintended consequences in practice—like how retention-based bonuses can fuel scope creep, or why forced sabbaticals don’t actually solve the single-point-of-failure problem they’re designed to address. The conversation reveals thoughtful nuance on both sides. Gini shares her brutal experience with anonymous feedback that backfired when presented poorly. Chip explains why he sees most performance measurement systems as “performance theater” while still advocating for more financial transparency with teams. They discuss the logistical nightmares of scheduling multiple month-long absences and why backup systems for unexpected departures matter more than planned time off. Throughout, they return to a central theme: what works brilliantly at one stage of growth can be completely wrong at another. 
The goal isn’t to declare Baker’s ideas right or wrong, but to test assumptions and recognize that even the most well-intentioned frameworks deserve scrutiny before implementation. [read the transcript] The post ALP 295: Building the ideal agency: wrestling with the tough decisions appeared first on FIR Podcast Network.
FIR B2B episode #159: A tale of two newspapers
We are back with this episode after the recent massive layoffs at the Washington Post and the LA Times, the shuttering of the Pittsburgh Post-Gazette, and funding cuts at NPR. Paul and David describe the continuing train wreck of daily news and contrast the Post’s approach with what has been going on at the New York Times’ digital properties. The Times diversified its revenue stream beyond its core newsgathering by purchasing gaming, cooking, and sports-related content. The Post’s owner, Jeff Bezos, didn’t diversify or even preserve the news core. Part of the digital newspaper problem is that the ad revenue model is gone, as search traffic has dried up thanks to AI chatbots. Compounding this, overall monthly visits to the Post’s website are down from 60 million in 2022 to 40 million last year, and subscriptions are dropping too. We contrast the Post’s and the Times’ business models and talk about some signs of success with subscriptions for smaller, more targeted sites, such as 404Media, which shows that a small group of independent journalists can keep quality high and report on significant stories. Also, individual creators (such as Mr. Beast and Mark Rober) can build a brand and attract significant audiences on YouTube and TikTok (Rober has more than 70 million subscribers, for example). It’s well worthwhile to listen to Marty Baron, former executive editor of the Post, talk to Tim Miller about his thoughts on the decline of his former employer. The post FIR B2B episode #159: A tale of two newspapers appeared first on FIR Podcast Network.
FIR #500: When Harassment Policies Meet Deepfakes
AI has shifted from being purely a productivity story to something far more uncomfortable. Not because the technology became malicious, but because it’s now being used in ways that expose old behaviors through entirely new mechanics. An article in HR Director Magazine argues that AI-enabled workplace abuse — particularly deepfakes — should be treated as workplace harm, not dismissed as gossip, humor, or something that happens outside of work. When anyone can generate realistic images or audio of a colleague in minutes and circulate them instantly, the targeted person is left trying to disprove something that never happened, even though it feels documented. That flips the burden of proof in ways most organizations aren’t prepared to handle. What makes this a communication issue — not just an HR or IT issue — is that the harm doesn’t stop with the creator. It spreads through sharing, commentary, laughter, and silence. People watch closely how leaders respond, and what they don’t say can signal tolerance just as loudly as what they do. In this episode, Neville and Shel explore what communicators can do before something happens: helping organizations explicitly name AI-enabled abuse, preparing leaders for that critical first conversation, and reinforcing standards so that, when trust is tested, people already know where the organization stands. Links from this episode: The Emerging Threat of Workplace AI Abuse The next monthly, long-form episode of FIR will drop on Monday, February 23. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog. 
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients. Raw Transcript: Shel Holtz: Hi everybody, and welcome to episode number 500 of For Immediate Release. I’m Shel Holtz. Neville Hobson: And I’m Neville Hobson. Shel Holtz: And this is episode 500. You would think that that would be some kind of milestone that we would celebrate. For those of you who are relatively new to FIR, this show has been around since 2005. We have not recorded only 500 episodes in that time. We started renumbering the shows when we rebranded it. We started as FIR, then we rebranded to the Hobson and Holtz Report because there were so many other FIR shows. Then, for various reasons, we decided to go back to FIR and we started at zero. But I haven’t checked — if I were to put the episodes we did before that rebranding together with the episodes since then, we’re probably at episode 2020, 2025, something like that. Neville Hobson: I would say that’s about right. We also have interviews in there and we used to do things like book reviews. What else did we do? Book reviews, speeches, speeches. Shel Holtz: Speeches — when you and I were out giving talks, we’d record them and make them available. Neville Hobson: Yeah, boy, those were the days. And we did lives, clip times, you know, so we had quite a little network going there. But 500 is good. So we’re not going to change the numbering, are we? It’s going to confuse people even more, I think. Shel Holtz: No, I think we’re going to stick with it the way it is. So what are we talking about on episode 500? Neville Hobson: Well, this episode has got a topic in line with our themes and it’s about AI. We can’t escape it, but this is definitely a thought-provoking topic. It’s about AI abuse in the workplace. So over the past year, AI has shifted from being a productivity story to something that’s sometimes much more uncomfortable. 
Not because the technology itself suddenly became malicious, but because it’s now being used in ways that expose old behaviors through entirely new mechanics. An article in HR Director Magazine here in the UK published earlier this month makes the case that AI-enabled abuse, particularly deepfakes, should be treated as workplace harm, not as gossip, humor, or something that happens outside work. And that distinction really matters. We’ll explore this theme right after this message. What’s different here isn’t intent. Harassment, coercion, and humiliation aren’t new. What is new is speed, scaling, credibility. Anyone can use AI to generate realistic images or audio in minutes, circulate them instantly, and leave the person targeted trying to disprove something that never happened but feels documented. The article argues that when this happens, organizations need to respond quickly, contain harm, investigate fairly, and set a clear standard that using technology to degrade or coerce colleagues is serious misconduct. Not just to protect the individual involved, but to preserve trust across the organization. Because once people see that this kind of harm can happen without consequences, psychological safety collapses. What also struck me reading this, Shel, is that while it’s written for HR leaders, a lot of what determines the outcome doesn’t actually sit in policy or process. It sits in communication. In moments like this, people are watching very closely. They’re listening for what leaders say and just as importantly, what they don’t. Silence, careful wording, or reluctance to name harm can easily be read as uncertainty or worse, tolerance. That puts communicators right in the middle of this issue. There are some things communicators can do before anything happens. First, help the organization be explicit about standards. Name AI-enabled abuse clearly so there’s no ambiguity. 
Second, prepare leaders for that first conversation because tone and language matter long before any investigation starts. And third, reinforce shared expectations early. So when something does go wrong, people already know where the organization stands. This isn’t crisis response, it’s proactive preventative communication. In other words, this isn’t really a story about AI tools, it’s a story about trust — and how organizations communicate when that trust is tested. Shel Holtz: I was fascinated by this. I saw the headline and I thought it was about something else altogether because I’ve seen this phrase, “workplace AI abuse,” before, but it was in the context of things like work slop and some other abuses of AI that generally are more focused on the degradation of the information and content that’s flowing around the organization. So when I saw what this was focused on, it really sent up red flags for me. I serve on the HR leadership team of the organization I work for. I’ll be sharing this article with that team this morning. But I think there’s a lot to talk about here. First of all, I just loved how this article ended. The last line of it says, “AI has changed the mechanics of misconduct, but it hasn’t changed what employees need from their employer.” And I think that’s exactly right. From a crisis communication standpoint, framing it that way matters because it means we don’t have to reinvent values. We don’t have to reinvent principles. We just need to update the protocols we use to respond when something happens. Neville Hobson: Yeah, I agree. And it’s a story that isn’t unique or new even — the role communicators can play in the sense of signaling the standards visibly, not just written down, but communicating them. And I think that’s the first thing that struck me from reading this. It is interesting — you’re quoting that ending. That struck me too. The expectation level must be met. 
The part about not all of it sitting in process and so forth with HR, but with communication — absolutely true. Yet this isn’t a communication issue per se. This is an organizational issue where communication or the communicator works hand in glove with HR to manage this issue in a way that serves the interest of the organization and the employees. So making those standards visible and explaining what the rules are for this kind of thing — you would think it’s pretty common sense to most people, but is it not true that like many things in organizational life, something like this probably isn’t set down well in many organizations? Shel Holtz: It’s probably not set down well for these kinds of situations, even before AI. Where I work, we go through an annual workplace harassment training because we are adamant that that’s not going to happen. It certainly doesn’t cover this stuff yet. I suspect it probably will. But yeah, you’re right. I think many organizations out there don’t have explicit policies around harassment and what the response should be. I think the most insidious part of how deepfakes are affecting all of this is that they flip the burden of proof. A victim has to prove that something didn’t happen, and in the court of workplace opinion, that’s really hard to do. It creates a different kind of reputational harm. Neville Hobson: Yeah. Shel Holtz: From traditional harassment, the kind we learn about in our training — you know, with he said, she said type situations — there’s a certain amount of ambiguity and people are trying to weigh what people said and look at their reputations and their credibility and make judgments based on the limited information available. With deepfakes, there’s evidence. I mean, it’s fabricated, but it’s evidence. And some people seeing that before they hear it’s a deepfake just might believe it and side with the creator of that thing.
The article does make a really critical point though, and that’s that it’s rarely about one bad actor. The person who created this had a malicious intent, but people who share it, people who forward it along and comment on it and laugh about it — that spreads the harm and it makes the whole thing more complex and it creates complicity among the employees who are involved in this, even though they may think it’s innocent behavior that just mirrors what they do on public social media. And from a comms perspective, that means the crisis isn’t just about the perpetrator, right? It’s about organizational culture. If people are circulating this content, that tells you something about your workplace that needs to be addressed that’s bigger than that one individual case. Neville Hobson: Yeah, I agree. Absolutely. And that’s one of the dynamics the article highlights that I found most interesting — about how harm spreads socially through sharing, commentary, laughter, or quiet disengagement. Communicators need to help prevent normalization — this is not acceptable, not normal. They’re often closest to these informal channels and cultural signals. That gives communicators a unique opportunity, the article points out. For example, communicators can challenge the idea that no statement is the safest option when values are being tested. Help leaders understand that internal silence can legitimize behavior just as much as explicit approval and encourage timely, values-anchored communication that says, “this crosses a line,” even if the facts are still being established. It is really difficult though. Separately, I’ve read examples where there’s a deepfake of a female employee that is highly inappropriate in the way it presents her. And yet it is so realistic — incredibly realistic — that everyone believes it’s true. And the denials don’t make much difference. And that’s where I think there’s another avenue that communicators especially need to be involved in.
HR certainly would be involved because that’s the relationship issue. But communicators need to help make the statements that this is not real, that it’s still being investigated, that we believe it’s not real. In other words, support the employee unless you’ve got evidence not to, or there’s some reason — legal perhaps — that you can’t say anything more. But challenge people who imply it’s genuine and carry that narrative forward with others in the organization. So it’s difficult. It doesn’t mean you’ve got to broadcast a lot of details. It means going back to reinforcing those standards in the organization, repeating what they are before harmful behavior becomes part of, as the article mentions, organizational folklore. It’s a tricky, tricky road to walk down. Shel Holtz: And it gets even trickier. There’s another layer of complexity to add to this for HR in particular. And that is an employee sharing one of these deepfakes on a personal text thread or on a personal account on a public social network — sharing it on Instagram, sharing it on Facebook — which might lead someone in the organization to say, “Well, that’s not a workplace issue. That’s something they did on their own private network.” But the deepfake involves a colleague at work, and we have to acknowledge that that becomes a workplace issue. Neville Hobson: Yeah, it actually highlights, Shel, that education is lacking if that takes place, I believe. So you’ve got to have already in place the policies that explicitly address the label “AI abuse.” It’s a workplace harm issue. It’s not a technical or a personal one. And it’s not acceptable or permitted for this to happen in the workplace. And if it does, the perpetrators will be disciplined and face consequences because of this. So that in itself though isn’t enough.
It requires more proactive education to address it — like, for instance, informal communication groups to discuss the issue, not necessarily a particular example, and get everyone involved in discussing why it’s not a good thing. It may well surface opinions — again, depends on how trusted or open people feel — on saying, “I disagree with this. I don’t think it is a workplace issue.” You get a dialogue going. But the company, the employer, in the form of the communicators, has the right people to take this forward, I think. Shel Holtz: But here’s another communication issue that isn’t really addressed in the article, but I think communication needs to be involved. The article outlines a framework for addressing this. They say stabilize, which is support and safety; contain, which is stop the spread; and investigate — and investigate broadly, not just the creator. I mean, who helped spread this thing around? Yeah, that’s pretty good crisis response advice. But what strikes me is the fact that containment is mentioned almost as a technical IT issue when it’s really a communication challenge. Because how do you preserve evidence without further circulating harmful content? This requires clear protocols that everybody needs to understand. So communicators should be involved in helping to develop those protocols, but also making sure that they spread through the organization and are aligned with the values and become part of the culture. Neville Hobson: Okay, so that kind of brings it round to that first thing I mentioned about what communicators can do before anything happens, and that’s to help the organization be explicit about standards. Name AI-enabled abuse clearly so there’s no ambiguity and set out exactly what the organizational position is on something like this.
That will probably mean updating what would be the equivalent of the employee handbook where these kinds of policies and procedures sit, so that no one’s got any doubt about where to find information about this. And then proactive communication about it. I mean, yes, communicators have lots to address in today’s climate. This is just one other thing. I would argue this is actually quite critical. They need to address this because, unaddressed, it’s easy to see where this would gather momentum. Shel Holtz: Yeah. So based on the article, you’ve already shared some of your recommendations for communicators. I think that updating the harassment policies with explicit deepfake examples is important. This is the recommendation I’m going to be making where I work. I think managers need to be trained on that first-hour response protocol. Managers, I think, are pretty poorly trained on this type of thing. And generic e-learning isn’t going to take care of it. So I think there needs to be specific training, particularly out in the field or out on the factory floor, where this is, I think, a little more likely to happen among people who are at that level of the org. I don’t think you’re going to see much of this manager to manager or VP to VP. So I think it’s more front line where you’re likely to see this — where somebody gets upset at somebody else and does a deepfake. So those managers need to be trained. I think you need to have those evidence-handling procedures established and IT completely on board. So that’s a role for communicators. Reviewing and strengthening the reporting routes — who gets told when something like this happens and how does it get escalated? And then what are the protocols for determining what to do about it? And include this scenario in your crisis response planning. It should be part of that larger package of crises that might emerge that you have identified as possible, and make sure that this is one of them.
Yeah, this article really ought to be required reading for every HR professional, every organizational leader, every communication leader, because as we’ve been saying right now, I think most organizations aren’t prepared. What the article said is the technology has outpaced our policies, our training, and our cultural norms. We’re in a gap period where harm is happening and institutions are scrambling to catch up. Time to stop scrambling, time to just catch up, start doing this work. Neville Hobson: Yeah, I would agree. I think the final comment I’d make is kind of the core message that comes out of this whole thing that summarizes all of this. And this is from the employee point of view, it seems to me. So accept that AI has changed how misconduct happens, not what employees need. Fine, we accept that. Employees need confidence that if they are targeted, the organization will do the following: take it seriously, act quickly to contain harm, investigate fairly, and set a clear standard that using technology to degrade or coerce colleagues is serious misconduct. Those four things need to be in place, I believe. Shel Holtz: Yeah. And what the consequences are — you always have to remind people that there are consequences for these things. And that’ll be a 30 for this episode of For Immediate Release. The post FIR #500: When Harassment Policies Meet Deepfakes appeared first on FIR Podcast Network.
ALP 294: Wake up or get left behind: AI is forcing your hand
No more excuses. No more waiting to see how things play out. AI has moved past the experimental phase, and if you’re still treating it like a nice-to-have rather than a fundamental shift in how your agency operates, you’re already falling behind. In this episode, Chip comes out swinging with a wake-up call for the agency community: the ground is shifting faster than most are willing to admit, and the window for meaningful adaptation is closing. Gini backs him up with examples of how AI has progressed from an intern-level tool to something that can genuinely replace mid-level work—if agencies don’t evolve what they’re selling. They dig into the practical reality of training AI tools to work like team members, not just one-off prompt machines. Chip explains how he uses different platforms for different strengths—Claude for writing, Gemini for competitive intelligence, Perplexity for research, and ChatGPT as his strategic baseline. Gini shares how her 12-year-old daughter creates entire anime worlds through conversation with AI, demonstrating the power of treating these tools as collaborators rather than search engines. The conversation covers what clients actually want to pay for in 2026 (hint: it’s not social posts and press releases), how to build AI agents trained on your specific expertise, and why the process of training AI forces valuable clarity about your business. They emphasize that this isn’t about slapping the “AI-powered” label on your services—it’s about fundamentally rethinking what value you deliver and how you deliver it. If you’ve been sitting on the sidelines waiting for the AI dust to settle, this episode is your warning: there is no settling. There’s only evolution or extinction. [read the transcript] The post ALP 294: Wake up or get left behind: AI is forcing your hand appeared first on FIR Podcast Network.