FIR #500: When Harassment Policies Meet Deepfakes
AI has shifted from being purely a productivity story to something far more uncomfortable. Not because the technology became malicious, but because it’s now being used in ways that expose old behaviors through entirely new mechanics. An article in HR Director Magazine argues that AI-enabled workplace abuse — particularly deepfakes — should be treated as workplace harm, not dismissed as gossip, humor, or something that happens outside of work. When anyone can generate realistic images or audio of a colleague in minutes and circulate them instantly, the targeted person is left trying to disprove something that never happened, even though it feels documented. That flips the burden of proof in ways most organizations aren’t prepared to handle.

What makes this a communication issue — not just an HR or IT issue — is that the harm doesn’t stop with the creator. It spreads through sharing, commentary, laughter, and silence. People watch closely how leaders respond, and what they don’t say can signal tolerance just as loudly as what they do. In this episode, Neville and Shel explore what communicators can do before something happens: helping organizations explicitly name AI-enabled abuse, preparing leaders for that critical first conversation, and reinforcing standards so that, when trust is tested, people already know where the organization stands.

Links from this episode:

The Emerging Threat of Workplace AI Abuse

The next monthly, long-form episode of FIR will drop on Monday, February 23. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com.

Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

Raw Transcript:

Shel Holtz: Hi everybody, and welcome to episode number 500 of For Immediate Release. I’m Shel Holtz.

Neville Hobson: And I’m Neville Hobson.

Shel Holtz: And this is episode 500. You would think that would be some kind of milestone we would celebrate. For those of you who are relatively new to FIR, this show has been around since 2005, so we have recorded far more than 500 episodes in that time. We started renumbering the shows when we rebranded. We started as FIR, then we rebranded to the Hobson and Holtz Report because there were so many other FIR shows. Then, for various reasons, we decided to go back to FIR and we started at zero. But I haven’t checked — if I were to put the episodes we did before that rebranding together with the episodes since then, we’re probably at episode 2020, 2025, something like that.

Neville Hobson: I would say that’s about right. We also have interviews in there, and we used to do things like book reviews. What else did we do? Book reviews, speeches.

Shel Holtz: Speeches — when you and I were out giving talks, we’d record them and make them available.

Neville Hobson: Yeah, boy, those were the days. And we did lives, clip times, you know, so we had quite a little network going there. But 500 is good. So we’re not going to change the numbering, are we? It’s going to confuse people even more, I think.

Shel Holtz: No, I think we’re going to stick with it the way it is. So what are we talking about on episode 500?

Neville Hobson: Well, this episode has a topic in line with our themes, and it’s about AI. We can’t escape it, but this is definitely a thought-provoking topic. It’s about AI abuse in the workplace. So over the past year, AI has shifted from being a productivity story to something that’s sometimes much more uncomfortable.
Not because the technology itself suddenly became malicious, but because it’s now being used in ways that expose old behaviors through entirely new mechanics. An article in HR Director Magazine here in the UK, published earlier this month, makes the case that AI-enabled abuse, particularly deepfakes, should be treated as workplace harm, not as gossip, humor, or something that happens outside work. And that distinction really matters. We’ll explore this theme right after this message.

What’s different here isn’t intent. Harassment, coercion, and humiliation aren’t new. What is new is speed, scale, and credibility. Anyone can use AI to generate realistic images or audio in minutes, circulate them instantly, and leave the person targeted trying to disprove something that never happened but feels documented. The article argues that when this happens, organizations need to respond quickly, contain harm, investigate fairly, and set a clear standard that using technology to degrade or coerce colleagues is serious misconduct. Not just to protect the individual involved, but to preserve trust across the organization. Because once people see that this kind of harm can happen without consequences, psychological safety collapses.

What also struck me reading this, Shel, is that while it’s written for HR leaders, a lot of what determines the outcome doesn’t actually sit in policy or process. It sits in communication. In moments like this, people are watching very closely. They’re listening for what leaders say and, just as importantly, what they don’t. Silence, careful wording, or reluctance to name harm can easily be read as uncertainty or, worse, tolerance. That puts communicators right in the middle of this issue.

There are some things communicators can do before anything happens. First, help the organization be explicit about standards. Name AI-enabled abuse clearly so there’s no ambiguity.
Second, prepare leaders for that first conversation, because tone and language matter long before any investigation starts. And third, reinforce shared expectations early, so when something does go wrong, people already know where the organization stands. This isn’t crisis response, it’s proactive, preventative communication. In other words, this isn’t really a story about AI tools, it’s a story about trust — and how organizations communicate when that trust is tested.

Shel Holtz: I was fascinated by this. I saw the headline and thought it was about something else altogether, because I’ve seen this phrase, “workplace AI abuse,” before, but it was in the context of things like workslop and other abuses of AI that are generally more focused on the degradation of the information and content flowing around the organization. So when I saw what this was focused on, it really sent up red flags for me. I serve on the HR leadership team of the organization I work for, and I’ll be sharing this article with that team this morning. But I think there’s a lot to talk about here.

First of all, I just loved how this article ended. The last line says, “AI has changed the mechanics of misconduct, but it hasn’t changed what employees need from their employer.” And I think that’s exactly right. From a crisis communication standpoint, framing it that way matters because it means we don’t have to reinvent values. We don’t have to reinvent principles. We just need to update the protocols we use to respond when something happens.

Neville Hobson: Yeah, I agree. And the role communicators can play here isn’t unique or even new — signaling the standards visibly, not just having them written down, but communicating them. That’s the first thing that struck me from reading this. It is interesting that you quote that ending. It struck me too. The expectation level must be met.
The part about not all of it sitting in process and so forth with HR, but with communication — absolutely true. Yet this isn’t a communication issue per se. This is an organizational issue where the communicator works hand in glove with HR to manage it in a way that serves the interests of the organization and the employees. So, making those standards visible and explaining what the rules are for this kind of thing — you would think it’s pretty common sense to most people, but is it not true that, like many things in organizational life, something like this probably isn’t set down well in many organizations?

Shel Holtz: It probably wasn’t set down well for these kinds of situations even before AI. Where I work, we go through annual workplace harassment training because we are adamant that that’s not going to happen. It certainly doesn’t cover this stuff yet. I suspect it probably will. But yeah, you’re right. I think many organizations out there don’t have explicit policies around harassment and what the response should be.

I think the most insidious part of how deepfakes are affecting all of this is that they flip the burden of proof. A victim has to prove that something didn’t happen, and in the court of workplace opinion, that’s really hard to do. It creates a different kind of reputational harm.

Neville Hobson: Yeah.

Shel Holtz: Different from traditional harassment, the kind we learn about in our training — you know, with he-said, she-said situations — where there’s a certain amount of ambiguity, and people are trying to weigh what was said, look at reputations and credibility, and make judgments based on the limited information available. With deepfakes, there’s evidence. I mean, it’s fabricated, but it’s evidence. And some people, seeing that before they hear it’s a deepfake, might just believe it and side with the creator.
The article does make a really critical point, though, and that’s that it’s rarely about one bad actor. The person who created the deepfake had malicious intent, but the people who share it, forward it along, comment on it, and laugh about it spread the harm. That makes the whole thing more complex and creates complicity among the employees involved, even though they may think it’s innocent behavior that just mirrors what they do on public social media. And from a comms perspective, that means the crisis isn’t just about the perpetrator, right? It’s about organizational culture. If people are circulating this content, that tells you something about your workplace that needs to be addressed — something bigger than that one individual case.

Neville Hobson: Yeah, I agree. Absolutely. And that’s one of the dynamics the article highlights that I found most interesting — how harm spreads socially through sharing, commentary, laughter, or quiet disengagement. Communicators need to help prevent normalization — this is not acceptable, not normal. They’re often closest to these informal channels and cultural signals. That gives communicators a unique opportunity, the article points out. For example, communicators can challenge the idea that no statement is the safest option when values are being tested. They can help leaders understand that internal silence can legitimize behavior just as much as explicit approval, and encourage timely, values-anchored communication that says, “this crosses a line,” even if the facts are still being established.

It is really difficult, though. Separately, I’ve read examples where there’s a deepfake of a female employee that presents her in a highly inappropriate way. And yet it is so realistic — incredibly realistic — that everyone believes it’s true, and the denials don’t make much difference. And that’s another avenue where communicators especially need to be involved.
HR certainly would be involved, because that’s the relationship issue. But communicators need to help make the statements that this is not real, that it’s still being investigated, that we believe it’s not real. In other words, support the employee unless you’ve got evidence not to, or there’s some reason — legal, perhaps — that you can’t say anything more. But challenge people who imply it’s genuine and carry that narrative forward to others in the organization. So it’s difficult. It doesn’t mean you’ve got to broadcast a lot of details. It means going back to reinforcing those standards in the organization, repeating what they are before harmful behavior becomes part of, as the article mentions, organizational folklore. It’s a tricky, tricky road to walk down.

Shel Holtz: And it gets even trickier. There’s another layer of complexity to add to this, for HR in particular. And that is an employee sharing one of these deepfakes on a personal text thread or on a personal account on a public social network — sharing it on Instagram, sharing it on Facebook — which might lead someone in the organization to say, “Well, that’s not a workplace issue. That’s something they did on their own private network.” But the deepfake involves a colleague at work, and we have to acknowledge that it becomes a workplace issue.

Neville Hobson: Yeah, it actually highlights, Shel, that education is lacking if that takes place, I believe. So you’ve got to have policies already in place that explicitly address the label “AI abuse.” It’s a workplace harm issue, not a technical or a personal one. And it’s not acceptable nor permitted for this to happen in the workplace, and if it does, the perpetrators will be disciplined and face consequences. That in itself, though, isn’t enough.
It requires more proactive education to address it — like, for instance, informal communication groups to discuss the issue, not necessarily a particular example, and get everyone involved in discussing why it’s not a good thing. It may well surface opinions — again, depending on how trusted or open people feel — like, “I disagree with this. I don’t think it is a workplace issue.” You get a dialogue going. But the company, the employer, in the form of the communicators, has the right people to take this forward, I think.

Shel Holtz: But here’s another communication issue that isn’t really addressed in the article, but where I think communication needs to be involved. The article outlines a framework for addressing this. They say stabilize, which is support and safety; contain, which is stop the spread; and investigate — and investigate broadly, not just the creator. I mean, who helped spread this thing around? That’s pretty good crisis response advice. But what strikes me is that containment is mentioned almost as a technical IT issue when it’s really a communication challenge. Because how do you preserve evidence without further circulating harmful content? This requires clear protocols that everybody needs to understand. So communicators should be involved in helping to develop those protocols, but also in making sure that they spread through the organization, are aligned with the values, and become part of the culture.

Neville Hobson: Okay, so that kind of brings it round to the first thing I mentioned about what communicators can do before anything happens, and that’s to help the organization be explicit about standards. Name AI-enabled abuse clearly so there’s no ambiguity, and set out exactly what the organizational position is on something like this.
That will probably mean updating the equivalent of the employee handbook, where these kinds of policies and procedures sit, so that no one has any doubt about where to find information on this. And then proactive communication about it. I mean, yes, communicators have lots to address in today’s climate, and this is just one more thing. But I would argue it’s actually quite critical. They need to address it, because left unaddressed, it’s easy to see how this would gather momentum.

Shel Holtz: Yeah. So, based on the article, you’ve already shared some of your recommendations for communicators. I think updating the harassment policies with explicit deepfake examples is important; that’s the recommendation I’m going to be making where I work. I think managers need to be trained on that first-hour response protocol. Managers, I think, are pretty poorly trained on this type of thing, and generic e-learning isn’t going to take care of it. So there needs to be specific training, particularly out in the field or on the factory floor, where I think this is a little more likely to happen among people at that level of the org. I don’t think you’re going to see much of this manager to manager or VP to VP. So it’s more on the front line where you’re likely to see this — where somebody gets upset at somebody else and makes a deepfake. Those managers need to be trained.

I think you need to have evidence-handling procedures established and IT completely on board. That’s a role for communicators. Review and strengthen the reporting routes — who gets told when something like this happens, and how does it get escalated? And then, what are the protocols for determining what to do about it? Finally, include this scenario in your crisis response planning. It should be part of that larger package of possible crises you have identified — make sure this is one of them.
Yeah, this article really ought to be required reading for every HR professional, every organizational leader, and every communication leader, because, as we’ve been saying, I think most organizations aren’t prepared. As the article said, the technology has outpaced our policies, our training, and our cultural norms. We’re in a gap period where harm is happening and institutions are scrambling to catch up. Time to stop scrambling, catch up, and start doing this work.

Neville Hobson: Yeah, I would agree. The final comment I’d make is the core message that comes out of this whole thing and summarizes all of it, from the employee point of view, it seems to me. Accept that AI has changed how misconduct happens, not what employees need. Fine, we accept that. Employees need confidence that if they are targeted, the organization will do the following: take it seriously, act quickly to contain harm, investigate fairly, and set a clear standard that using technology to degrade or coerce colleagues is serious misconduct. Those four things need to be in place, I believe.

Shel Holtz: Yeah. And what the consequences are — you always have to remind people that there are consequences for these things. And that’ll be a 30 for this episode of For Immediate Release.

The post FIR #500: When Harassment Policies Meet Deepfakes appeared first on FIR Podcast Network.
FIR #499: When Saying Nothing Sends the Wrong Message
The Public Relations Society of America (PRSA) responded to member requests for a statement about the federal immigration crackdown in Minnesota with a letter explaining why the organization would remain silent. In this short midweek episode, Neville and Shel outline the key points in the letter, where they disagree, and how they might have responded.

Links from this episode:

An Open Letter to the Public Relations Society of America (PRSA)

The next monthly, long-form episode of FIR will drop on Monday, February 23. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com.

Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

Raw Transcript:

Neville Hobson: Hi everyone, and welcome to For Immediate Release. This is episode 499. I’m Neville Hobson.

Shel Holtz: And I’m Shel Holtz. At its core, this podcast is about organizational communication, which leads us to occasionally talk about the associations that aim to represent the profession. So today, let’s talk about PRSA (the Public Relations Society of America), which recently signaled a move to remain apolitical — retreating into a shell of neutrality when members were clamoring for them to speak up on controversial issues. Specifically, I’m talking about the silence from PRSA regarding ICE (Immigration and Customs Enforcement) operations in Minneapolis. Now, before you roll your eyes and think this is just another partisan squabble, stop right there.
This isn’t about immigration policy; it is about the integrity of public information — the very foundation of our profession. We’ll dive into what PRSA said and how I responded after this.

PRSA leadership, including Chair Heidi Harrell and CEO Matt Marcial, sent a message to members claiming that remaining apolitical protects the organization’s credibility. The letter framed this stance as a means to focus on its core mission. Leadership asserts that while they have commented on sensitive issues in the past, the current “complex environment” demands greater diligence, effectively reserving public advocacy only for matters that directly and significantly impact the technical practice of public relations or its ethical standards. By shifting the burden of advocacy to individual members and requiring chapters to vet local statements through national leadership, the society is attempting to build a “firewall against unintended risks.” In other words, they’re betting that professional neutrality is the best way to maintain trust across a diverse membership, even if it means stepping back from the broader social fray.

Now, I have a different perspective. In fact, I’ve published an open letter to PRSA leadership on LinkedIn, arguing that their own Code of Ethics doesn’t just permit them to speak out — it actually demands it. Consider the “Free Flow of Information” provision in the PRSA Code of Ethics. It states that protecting the flow of accurate and truthful information is essential for a democratic society. In Minneapolis, we have federal officials making public statements about the killings of U.S. citizens — statements that are being credibly disputed by video evidence and eyewitness accounts. When government officials systematically misrepresent facts, that is a professional standards issue. It is not political to distinguish a truth from a lie. It is, quite literally, our job.
PRSA argues that they want to maintain trust across a diverse membership, but let’s be clear: silence is a statement. It’s a message that says our ethical commitments are only applicable when there’s nothing controversial to address. Don’t believe for a minute that neutrality will save your reputation. Silence in the face of documented misinformation erodes trust among the very members who look to the Society to model the courage we’re expected to show our clients every day.

The PRSA Ethics Code mandates a dual obligation: loyalty to clients and service to the public interest. It doesn’t say “serve the public interest only when it’s convenient or not controversial.” When federal agents are accused of violating nearly a hundred court orders and detaining citizens unlawfully, truth in the public interest is eroding fast under the weight of official silence. If PRSA won’t defend the standard of truth when it’s being trampled by powerful federal agencies, who will?

I am not suggesting that PRSA needs to become an immigration advocacy group — I am decidedly not. But I am suggesting a path forward that reaffirms our values without wading into the partisan muck. PRSA could and should issue a statement that affirms the vital importance of truthful government communication. They should issue a call for transparency when official narratives conflict with documented evidence, and they should reaffirm that all communicators have an obligation to accuracy over mere advocacy.

The fact is, our profession depends on a broader democratic society that functions on truthful information. When that foundation is threatened, our standards are implicated, whether we choose to acknowledge it or not. And let’s keep in mind, PRSA has members working in federal agencies that may require them to participate in the distribution of false information. Professional associations aren’t tested during the easy times. They’re tested when standing up for a principle actually costs something.
PRSA’s current diligence looks a lot like retreat. We should be leading the charge for accountability, not languishing in a state of denial. The comments on the LinkedIn article I posted show a membership that is anything but neutral on the need for ethical leadership. I’ll make one more point here: this approach to determining when advocacy is required translates nicely to businesses that have retreated from taking stances on societal issues, despite the Edelman Trust Barometer’s continued demonstration that it’s an expectation of their stakeholders.

Neville Hobson: It’s an interesting one, Shel. I’m reminded of discussions we have had on this podcast previously about the role of businesses in taking a stand on issues that are societal but demand some kind of response in some form. This fits that, I think. Your response on LinkedIn was very good; the path forward you outlined is strong. I did like it when you mentioned the word “courage.” This demands that in the face of fear or apprehension. All those words could apply to the potential minefield PRSA would be wandering into if they stepped away from being “apolitical.” Could there be a response from those federal agencies themselves? Or perhaps a negative reaction from the administration and the White House? That may be a driver behind it.

Yet this sort of situation has arisen before. We’ve talked about the notion of professional bodies taking stands on issues. The way you’ve framed the issue as ethical and professional — it’s hard to argue against that. This is not a partisan thing. I see you’ve got over 120 comments on LinkedIn on your article. Did you hear anything from PRSA directly, or are they silent?

Shel Holtz: No. In fact, a few people who have had issues with PRSA in the past told me they appreciate my posting an open letter, because PRSA has historically ignored those. I’m not necessarily expecting to hear anything from them.
I don’t hold any leadership roles there, so there’s no reason they should think I’m someone special to reach out to. But speaking of professional organizations, related to all of this, we recently had the arrest of two journalists reporting on an activist group that interrupted a church service led by a pastor who also has a role with ICE in Minneapolis. It was arguably an illegal action for the group to do that, but two reporters went in with them to cover it, and they were both arrested on an order from the U.S. Attorney General. The associations that represent journalists were pretty quick with their statements. PRSA talked about making a statement when there is something that is “technically related to the profession.” That would certainly apply in the case of these journalists. But still, the journalism associations were quick, and there was no concern that members might take issue or that the administration might make life miserable for them. They had the courage to take a stance consistent with their codes of ethics.

One member of the PRSA board, whom I know personally, did leave a comment questioning why I singled out PRSA. Why not the Page Society? Why not IABC (the International Association of Business Communicators)? My answer was: they didn’t send me a letter telling me why they’re not saying anything. But I absolutely think every communication association should be advocating for truth in public communications. That’s our job.

Neville Hobson: I think the fear of a strong, negative, almost threatening reaction from the administration and the White House is at the heart of this. They have “form” in ignoring ethics or international agreements — they’ll tear up those bits of paper because they say it’s “fake” or “rubbish.” Maybe that’s behind a lot of this. What you’ve given them is a challenge: will PRSA apply its own ethical framework when doing so carries reputational and political costs?
You mentioned others saying PRSA has a history of ignoring public letters. You see this with other professional bodies who are reluctant to take stands, interpreting “taking a stand” as “advocating for a cause,” which they don’t do. I would argue this is splitting hairs, because the argument is about upholding standards. Enlisting support from other professional bodies might be the safest approach — not asking them to take a stand on a specific political issue, but to reassert the point of truthful communication, transparency, and professional accountability. Someone has to do something to address this. This is an opportunity. I understand the reluctance, but I would counter by saying you need to have courage. You represent communicators across the United States, probably Canada, and elsewhere. Who else will do this if you don’t? What would you like to see happen as a result of this discussion?

Shel Holtz: I would like to see the professional associations have a conversation among their staffs and volunteer boards and decide how they’re going to proceed in a way that conforms with the values they purport to espouse. I understand that PRSA issued the letter because they had been flooded with member requests to do something.

A week or so ago, a letter was issued through the Minnesota Chamber of Commerce by 60 CEOs of Minnesota-based businesses — companies like 3M, Target, and UnitedHealth Group. Some people praised it, but most thought it was weak and “milquetoast.” It called for de-escalation but never named ICE or the immigration issue at all. In the meantime, Target is still under pressure from customers and employees to say something. There was an arrest made by ICE inside one of their stores that traumatized employees who witnessed it, and the company has said nothing. It’s similar to Home Depot, which has had arrests in its parking lots and has remained silent. This disturbs stakeholders.
You don’t need a position on immigration policy to talk about tactics that are affecting your community and your business. That’s fair game. That’s where the framework for a statement has to be focused: what is the impact on your business, and where does this align with your values?

Neville Hobson: It’s interesting. 3M and Cargill are global businesses. That “milquetoast” route was probably the safest way to navigate the tightrope, but it doesn’t really help much other than attracting criticism for being weak. I can equally understand that no one there wants to point fingers in a way that might not advance the discussion, but it doesn’t lead us anywhere.

Shel Holtz: Well, look at organizations like Patagonia, which has actually sued the administration, and their sales and profits are doing just fine. There may be a lot of fear that isn’t backed up by substantial consequences. If you look at the streets of Minneapolis these days, you can see where public sentiment is. It’s fine for business to be on the right side of this.

Neville Hobson: It is a very tricky situation. Every one of these companies has a statement defining their values, and surely what we’re seeing on the streets of Minneapolis would offend those values. No one’s willing to be counted. Maybe it needs a “safer” avenue — redefining or restating values in public and linking them to these events without naming names. But currently, we only have what we see on the news, and it’s not pretty.

Shel Holtz: No, it’s not. The businesses that have a direct connection to this and remain silent are going to be remembered for it. This doesn’t mean every business needs to make a statement — if you’re not based in Minnesota, perhaps it’s unrelated to your standards for public comment. But going back to PRSA: when you have federal officials making false statements to the public, and you have an organization that advocates for ethical communication, I think that demands a position.
That is the framework for businesses and associations to look at: where is the alignment that should lead you to stand up and display that kind of courage? Neville Hobson Where indeed? Shel Holtz That’ll be a 30 for this episode of For Immediate Release. The post FIR #499: When Saying Nothing Sends the Wrong Message appeared first on FIR Podcast Network.
AI risk, trust, and preparedness in a polycrisis era
In this FIR Interview, Neville Hobson and Shel Holtz speak with crisis and risk communication specialist Philippe Borremans about his new Crisis Communication 2026 Trend Report, based on a survey of senior crisis and communication leaders. The conversation explores how crisis communication is evolving in an era defined by polycrisis, declining trust, and accelerating AI-driven risk – and why many organisations remain dangerously underprepared despite growing awareness of these threats. Drawing on real-world examples, including recent AI-amplified reputation crises, Philippe outlines where organisations are falling short and what communicators can do now to close the gap between awareness and action.

Highlights:
- AI is changing crisis dynamics: Organisations recognise risks like AI-driven misinformation and deepfakes, yet few have tested response plans or governance frameworks.
- Most crises are issues gone wrong: Crises often emerge from internal behaviours and poor issue management rather than sudden external shocks.
- Trust isn’t a luxury; it’s measurable: “Building trust” sounds good, but most organisations lack meaningful metrics or strategies to manage it.
- Silos break under stress: Crisis readiness still lives in functional silos — legal, HR, comms, operations — making compound crises harder to handle.
- Testing beats plans alone: Having a crisis plan helps, but regular, realistic testing and muscle memory are what make teams resilient.
- Agility matters more than perfect data: Waiting for complete information can stall responses; communicators must be comfortable acting in the face of uncertainty.

About our Conversation Partner

Philippe Borremans is a leading authority on AI-driven crisis, risk, and emergency communication with over 25 years of experience spanning 30+ countries. As the author of Mastering Crisis Communication with ChatGPT: A Practical Guide, he bridges the critical gap between emerging technologies and high-stakes communication management.
A trusted advisor to global organisations including the World Health Organisation, the European Council, and multinational corporations, Philippe brings deep expertise in public health emergencies, corporate crisis communication, and AI-enhanced communication strategies. He is the creator of the Universal Adaptive Crisis Communication framework (UACC), designed to manage complex, overlapping crises. He publishes Wag The Dog, a weekly newsletter tracking industry innovations and trends. Follow Philippe on LinkedIn: https://www.linkedin.com/in/philippeborremans/

Relevant links:
- https://www.riskcomms.com/
- https://www.wagthedog.io/
- https://www.riskcomms.com/f/the-2026-crisis-emergency-and-risk-communication-trends-report

Transcript

Shel Holtz Hi everybody and welcome to a For Immediate Release interview. I’m Shel Holtz. Neville Hobson I’m Neville Hobson. Shel Holtz And we are here today with Philippe Borremans. We have known Philippe for at least 20 years, going back to the days when he was managing blogging at IBM out of Brussels. He’s located today in Portugal and works as an independent consultant addressing crisis, risk, and emergency communications. Welcome, Philippe. Delighted to have you with us. Philippe Borremans No, thanks for having me, and it’s good to see you both. Shel Holtz And before we jump into our questions, could you tell listeners a little bit about yourself, a little more background than I just offered up? Philippe Borremans Sure. Yeah, as you said, I mean, I started out in PR with Porter Novelli in Brussels, that’s ages ago, and then moved in-house at IBM for 10 years. So that was from 99 to, I think, 2009, working on, as you said, the first blogging guidelines, which then became the social media guidelines. It was a great project; I was responsible for all external comms there. And then I, in fact, moved away from Belgium, lived four years in Morocco, working in public relations at a bit more strategic level.
And since then I’ve been specializing in risk, crisis and emergency comms. So that’s actually the only thing I do. It’s mainly around all the things that could happen to either a private sector organization, a government or a public organization. Shel Holtz And you also produce and distribute a terrific newsletter on all of this. So we’ll ask you later to let people know how to subscribe to that. We thought we would start with a case study, although we are going to get into a survey that you recently wrapped up and released. There was an incident in which an executive at Campbell’s, the company that makes Campbell’s soup, claimed that the company’s products were highly processed food for poor people and that the company used bioengineered meat. He also made some derogatory remarks about employees, and this surfaced and spread around. An analysis found that negative sentiment around the company surged to 70% and page one search results were flooded with these negative narratives. And that included the AI overviews. One analysis said years of marketing and branding were wiped away in an instant. And that same analysis said that one of the biggest risks that AI introduces is an inherent bias toward negative information. What happened with Campbell’s is that coverage spread really fast across social media and traditional news outlets when this email surfaced. That created a flood of new content that AI systems were happy to start ingesting and reinforcing. So when people started searching for 3D printed meat and questions about whether Campbell’s uses real meat, AI didn’t correct those perceptions. It surfaced fragments of context. It pulled language from the company’s own website that referenced mechanically separated chicken. I don’t want to know what that means. And all of this muddied perceptions instead of clarifying things. What should communicators be doing? What didn’t Campbell’s do to protect itself from this?
It really is a new reality in how information is gathered up and then shared back out. Philippe Borremans It makes you wonder sometimes, but it does tell me that the organization probably was not investing enough in the online reputation side of things. I mean, I recently had a discussion with a client; they were asking me, how do we prepare our online information so that it surfaces in AI searches and all these things? And I said, well, maybe you should already start by not publishing the press releases in your newsroom in PDF format. So the basics, most of the time, are not in place. And I think in this case, again, looking at how search engine optimization is changing, how AI is looking at information, that is crucial. It’s basics, because if your online reputation is out there with the information that you have, the bias of AI, I can get it. But if you know that, again, you can work with that. And so I think the organization was simply not looking at the online side of their reputation and information dissemination. Neville Hobson What do you think, Philippe – it’s intriguing, when I read this story originally, that an organization as storied as Campbell’s Soup, one of the leading FMCG companies, with experience galore in communication, made errors such as those highlighted in this report. And it also highlights, I think, the speed with which this evolved and spread so rapidly, catching everyone by surprise. Is this a one-off, do you think? I mean, surely companies can’t be as unprepared as Campbell’s seemed to be, without some kind of system in place, procedures, et cetera, or even going further back than that, the notion that an executive would say such things at all. What’s your thought on that in terms of, literally, the self-inflicted damage they have heaped upon themselves?
Philippe Borremans But I think in many cases, if we would take all the crisis communication cases that are publicly available, you would see that that is a trend. You know, when people talk about crisis communication, they often think about the things that happen from the outside, right? The things that are sudden. But that is not the case. If you look at crisis communication history, the biggest proportion of crises are not sudden. They are smoldering crises that then break out. So that means that it’s first an issue that you can still manage, but that, for one reason or another, you don’t look at, and then it becomes a crisis. So it’s not sudden. You knew it was there at some point. And then the other thing as well: we think it’s external factors. But again, the majority of crises have an origin from within the organization, at least in the private sector. So what it tells me, first of all, is that it’s not new. It’s again the old story: it happened internally, it was probably an issue first, it came out, and it was badly managed. And that tells me that, again, crisis preparedness, reputation management with the big word, is still not ingrained at that top executive level in the private sector. Shel Holtz Philippe, you’ve released a survey and Neville and I have been looking at it. We have questions, but can you give us an overview of the survey before we jump into our questions? Philippe Borremans Sure, yeah. So at the end of last year, I did a survey through my contacts, network, and newsletter readers on crisis communications, looking a bit at what the trends would be. Of course, AI is in there, but other things as well. And I got 102 responses that I can actually use. So I was amazed. I was like, okay, this is something that at least shows some direction, and I’ll just take my notes. Now, one of the things that was interesting to see is that when we talk about AI, for instance, one out of 87 people reported full AI integration.
So that goes in line with other surveys that we see where, yes, there’s a lot of talk about AI in comms and the big changes it can bring and what have you, but we actually see a very, very small amount of implementation, structural implementation of AI. Most of us communicators are still playing around and discovering AI, and this was confirmed as well. Now, the respondents here are senior-level crisis slash communication director types. The adoption levels are low. The top barrier is very interesting. So I asked about the top barrier: why then is AI not integrated? And 23.5% said skills, a huge skills gap. And again, that is in correlation with other surveys that I could see. Budget, okay, and then privacy and security reasons at 14.7%. But the skills gap, that is the one that I’m really worried about, because AI is not new. It’s been three years that we’ve had access to the GenAI tools. We know we can install open source models on our own machines. We can sandbox them in an enterprise environment, and still skills and the actual application are very, very low. So that was for AI. Another one, which I was really afraid of and which unfortunately was confirmed, is exercising. Do organizations actually exercise their plans? They all have a plan somewhere, but we know it’s just a plan, and it’s the first thing that goes out of the window when something really happens. But do we exercise? Do we do crisis simulations, tabletops, large-scale simulations? Only 26.5% of the respondents here test at least annually. 9.8% reported they never tested, and then you’ve got the whole middle who test from time to time, when they feel like it, probably. The public sector was a bit different from the private sector, but still, that is worrying, because I know from experience, having worked in this field now for the last 15 years: good crisis communication, or risk communication, or emergency communication is a muscle, right? If you don’t exercise it, whatever your plan is, it will not work.
You need much more an agile approach, which comes from training and simulation exercises, than a rigid protocol plan. You need a plan, I’m not saying you don’t need it, but what will get you through a crisis is your agile approach, because things change all the time. And getting there is only possible through exercising, and we see that it’s not the case. Another one, linked to AI: when I asked about the biggest risks, AI going wrong was really on top, risks related to AI, so deepfakes and what have you. But only 3.9% said that they had a tested GenAI crisis protocol. And 27.5% said they had no protocol and no plans in place to face an AI-generated crisis. So it’s right on top there. Everybody’s afraid of it. Nobody’s planning for it. Again, an interesting insight I found. Neville Hobson That is interesting. Yeah. Philippe Borremans And then, I mean, respondents said that trust was much more difficult to manage than before. But what I saw in the rest of the survey as well is that, again, the problem is there. What we, and when I say we, it’s communicators and crisis communicators, what we don’t do is prepare, train, and create protocols for different scenarios. Neville Hobson On that topic of trust, timely mention there, Philippe, because that’s one of the questions I was going to come back to a bit later, but this is the right moment to talk about it. The report actually describes a widening trust deficit, and you touch on that, with many organizations struggling to measure trust at all. That surprises me, I have to say, let alone rebuilding it during a crisis. In fact, that applies to the Campbell Soup situation quite well, and it’s a crisis of trust they have now encountered.
It’s timely to talk about this because, in the context of the bigger picture of trust, Edelman’s Trust Barometer, which landed today as we record this, the 20th of January, raises that question of the widening trust deficit in the context of crisis communication. So, I wonder your thoughts on the perennial question about communicators wanting to be taken seriously in the boardroom in particular. How should they rethink trust? And indeed, is that even the right question to ask in this current climate of a widening trust deficit? Trust is already low. How on earth do you lift yourself up from that? How should they rethink it, as I mentioned, not as a value, but as something they can actually measure and manage? What’s your take on it? Philippe Borremans Well, I’ll even go a step further, and I like your question. Is it even one of those concepts that we need to look at? I have a big issue with this. I do a lot of speaking at conferences and do workshops, and every single time, at least at conferences, every single other speaker has one slide that says we need to build trust. Right. And I got so fed up with that. I mean, what is trust, Neville and Shel? All three of us have different cultural backgrounds. Trust, the concept, is a different thing for all three of us. How we relate to government, how we relate to the private sector, how we relate to our community in society, it’s different. There is not one single definition. Of course, there is the broad definition of what it means, but when you look at it from a communications point of view and a relationship point of view between an organization and its publics, you will see that in every different part of the globe it’s a different interpretation. And trust is not the only variable that works or that is important for crisis comms; we have around 12 of them. Peter Sandman, you know, laid the groundwork there with scientific research, but we have 12 to work with. Trust is just one of them.
So already there, I’m very cautious about using trust as the… you know, the mantra or the silver bullet. But once we understand what we’re talking about and agree on it, to me, it’s very simple. It all starts and ends with completely understanding your different audiences. We always talk about stakeholders. Sure, they are important. But from a communications point of view, from a trust-building point of view, and I think, at least that’s my analysis of the Edelman Trust Barometer report as well, they finally talk about segmented audiences. I hope now most communication professionals finally understand that the general public doesn’t exist. We need to segment our audiences. And it’s understanding those through and through. Knowing what their context is. Knowing what their definition of trust is, what their relation is with your organization. Only then can you start building plans, looking at how you would approach this in the context of a crisis. That’s what I think about this. Shel Holtz I want to stick with this issue of trust, even though it’s just one of several variables. Your survey found that nearly 66 percent of practitioners find building trust harder today than it was five years ago. And you reference the idea of this being the era of the permacrisis. It’s always happening. Is this decline in our ability to build trust a failure of communication, or is the external environment just too volatile to manage effectively? Philippe Borremans But as an organization, as communicators, we’ve always worked in an environment that was shifting. Sure, maybe we’re in a peak moment where a lot of things are shifting. But if I just look at different moments in my career at IBM, et cetera, and other organizations, there were always things shifting around. Now, either you look at it from your micro environment, where you actually have something that you can manage, or on a global scale.
But I think it’s much more about the profession as communicators. First of all, understanding the environment. Not a lot of communicators truly understand the polycrisis and permacrisis concepts and how they actually translate into communications. It’s thrown out there, and geopolitics and what have you, but how does that translate to your day-to-day work for your organization? So that’s already, I think, a gap. And then, once you understand that, what can you actually do to minimize that impact from a communications point of view? We only have so much that we can actually work on. That means we need to work with other departments as well, and probably with industry associations, et cetera. We cannot solve everything. But if we actually start knowing what we can do in our corner and understanding the global environment, which is not easy, then already we can take the first steps. I’m always amazed when I work with clients: they all have media and social media monitoring platforms. And they actually think that, for them, that’s intel, that’s the insights they need. Most of the time I tell them, well, yes, you need that part, but you have nothing around predictive analytics. You have nothing on horizon scanning. So there are huge gaps in there. And those are actually the new things that you need in a world which is changing all the time. Shel Holtz I remember reading in an IABC document, somebody said that a crisis is what you get when you fail at issues identification. Philippe Borremans It is. A badly managed issue is of course something that becomes a crisis. On trust, Shel, I think what came out of the report is that the majority of respondents find it much more difficult to manage trust than five years ago. But when I asked, well, how do you actually measure that? Nobody knew. So there, again, it’s an impression they have. It’s a feeling. Shel Holtz It’s a feeling. Philippe Borremans But where is your benchmark?
How are you going to measure the impact that you have or don’t have? How do you work with that if you don’t have the data? That’s a gap. Neville Hobson You mentioned ‘polycrisis’, and indeed your report starts out by saying we work in an era of polycrisis. And you then said communicators need to understand what that means. Well, I’m a communicator. Help me out here, Philippe. What does it mean? Philippe Borremans Well, a polycrisis is an interesting concept. What it actually means is that you have different crises which are interlocked, right? And that can happen in the same crisis window, meaning you could face a climate hazard, let’s say a hurricane, which could result in a blackout, which hits, you know, critical infrastructure, which then could have an impact on your data center, and you suddenly are in a very commercial crisis there, because clients rely on your data center if you’re an infrastructure provider. So it’s that interconnectedness of different types of crisis. And that is an interesting concept. First of all, it’s closer to reality. I’ve seen it here in Portugal. We had our famous blackout for more than 12 hours, but you see how it trickles down and impacts different things. Neville Hobson Yeah. Philippe Borremans Infrastructure, mobile connections, business, et cetera. So that idea of interconnected crises is interesting in the context of crisis communication, because we have previously always been trained on siloed crises. All the plans are written like, okay, if we have a product recall, what do we do? If we have a critical infrastructure breakdown, what do we do? But it’s all separated, it’s not integrated. And of course that changes the game, that changes how you prepare for a crisis. Neville Hobson So that leads nicely into the question I had, which is precisely on that point. One of the strongest themes in the report is that crisis communication works best when it’s integrated across functions.
Yet HR, legal, maybe cybersecurity, certainly comms, are often only loosely connected. So when a real compound crisis hits, where do you most often see integration break down? And what distinguishes organizations that get this right from those that don’t? Philippe Borremans Yeah. Well, a good example was Heathrow. You know, remember the blackout at Heathrow? It was so crazy because at one point I put two screens next to each other. Through my network of crisis communicators, we were all going, like, oh my God, how is this possible? But next to that I actually had a screen with the feed of my connections in the business continuity world, the operational side. And they were all going, like, we got an airport, one of the biggest airports in the world, up and running in less than 24 hours. Job well done. So that’s where it happens in an organization. You have the comms people going crazy and you’ve got the ops people working very hard and doing what they do. But there is no interconnectedness. And then, of course, you have legal, good friends from legal. And other entities, of course: HR. I mean, one of the things I’m still very much amazed by when I work with clients is that internal comms is never at the table, while we all know that your first communication during a crisis is your internal communication. And it’s still not the case. So that’s where it often goes wrong. The big chasm that I see is between comms and operations. Once they get together, you see fabulous things happening, because we can translate what it actually means if they can be up and running in half an hour or in two days. We can translate that to our audiences and our stakeholders and say, look, that’s the situation. Cybersecurity is interesting as well, because there’s a lot of pressure to integrate that now into crisis management teams, not because people think that’s the best way to do it, but simply because it’s becoming the law within the EU.
It needs to be integrated. It’s the law. You have no choice. So there are a couple of things moving, but it’s more under the pressure of law and ISO quality norms and what have you than from actually understanding: yes, we all need to sit around the same table and let us all do our own jobs that we’re good at. We can translate stuff. You do the operational stuff. Shel Holtz It’s interesting. I work for a construction company, and our CEO says the thing that keeps him awake at night the most is cybersecurity, nothing to do with the industry. It’s just cybersecurity issues. Philippe, one of the new insights that came out of your report was a reference to populist politicians undermining science-based policy. How can organizational communicators deal with this landscape where facts are increasingly viewed through a partisan or an ideological lens? Philippe Borremans Well, again, it goes back to understanding why and how this happens. Look at a topic which is often discussed, mis-, dis-, and malinformation; we talk about online and offline, right? It’s understanding what it actually means. I’m running a couple of workshops now specifically on inoculation and prebunking, which are two techniques, and probably the only two techniques, that work to counter this mis-, mal-, and disinformation online. And so it’s understanding the psychology behind it. It’s not only about technology; it’s a lot about human biases and psychology. And, of course, countering the world’s geopolitical narratives, which, you know, have a certain way of going, that is very difficult, but understanding why they happen and how they work and how they can then impact certain audiences and stakeholders that are important to you, I think that is crucial as a communicator. And that comes by studying, just looking at: what does this actually mean? Can we identify it? Can we translate how it could potentially impact what we do? And then how can we counter it?
Unfortunately, if it’s about online mis-, dis-, and malinformation, there are only two techniques that work. And even then, those two alone will just create a small protective layer, because it’s very difficult to counter online. But prebunking and inoculation are the only techniques that work for the moment. Other ones are being talked about, like, let’s increase media literacy. Well, first of all, that’s not up to us. I mean, it’s not our responsibility. I think as communicators we have other things to do. It’s probably the responsibility of governments, institutions, the Ministry of Education, but then we’re off for the next three decades. Neville Hobson I want to go back to the gaps that we touched on earlier. The big gap that struck me from reading the report is how so many leaders see AI-driven misinformation and deepfakes as critical risks, yet most organizations still don’t have documented protocols to deal with them. And you’ve made that point very strongly: no protocols, no plans. I wonder what’s really holding organizations back from moving beyond awareness, that, like, yes, they know, to action. So I guess a simple question, like the takeaway for listeners in this one: if you’re a communicator, what’s the simplest first step you could take to move from awareness to action, to develop a plan? Let’s say in the next 90 days, what would you say to someone with that? Philippe Borremans It starts with sensing, right? You have to listen for these things, because otherwise you’ll just see them when they’re actually out there and you’re in trouble. So it’s actually sensing. I’m a very strong believer in AI-driven predictive analytics. This is different from your standard monitoring. Your standard monitoring looks at brand mentions and CEO mentions, executive mentions, et cetera. That’s not how you’re going to detect deepfakes.
There are actually platforms out there today which do predictive analytics: they look at the activation of bot networks, the spreading of a certain narrative in a certain context, and that will show you something is brewing. I’m making it very simple now. Something is brewing, things are getting organized, we could have something coming towards us, which could be a deepfake and what have you. So first, listening, so that you have your alert system in place. Then on the defense side, it’s actually also having what I call a truth bank. That’s a database or an Excel sheet, whatever, I can’t believe I said Excel sheet, a database where you have actual proof that your communication assets are yours, authentic, and come from you. Because we are getting into an era where, at one point in time, an organization will be questioned. Yes, you can say that press release is yours, but is it actually yours? You can say that that video of your CEO is actually true, but how can you prove that? We’re coming into an era like that. So you actually need to do it. And you know, I’m a big defender and also user of blockchain technology. It’s very simple today. You can actually, you know, prove irrefutably that some pieces of communication are yours. Take the example of a bank in Belgium. For years already, every single press release they send out has been stamped through a blockchain system so that they can actually prove it’s theirs. And they started to do that more than five years ago because they had fake press releases going out. And that wasn’t even AI-driven. That was just someone who got very creative. So first listening, then protecting your assets, making sure that you can prove they’re yours, and then countering. But countering depends on the situation. If it’s a rage farming attack, for instance, there’s no use going against the originators, the people, the bad actors. That’s no use at all.
You need to focus on the… Neville Hobson Can you just explain what rage farming is, Philippe? Philippe Borremans Sorry, yeah, rage farming. Rage farming is a technique where a bad actor, and most of the time it’s about making money, organizes an attack on your brand, and they make money simply through the algorithms on the different platforms, which then bring in sponsors and clicks and what have you. Rage farming is an attack which actually takes your normal, standard comms, your next press release, your next presence at a conference, the next speech of your CEO, takes it out of context, and looks at how it can be repurposed with one single objective: to trigger rage. So a very practical example: imagine that a retail company decides to make unisex uniforms. Men and women dress the same. We don’t make a difference. You could think, wow, great, why not? Taken out of context, that means it could be translated by bad actors into: look, they don’t want women to be women anymore, look, the whole woke context. They would reframe that and then target that message. It’s just out of context, but they target that message proactively at communities online who are much more conservative, who have a much more conservative worldview. They would then be triggered by rage, start to spread it, and then actually you have that whole system. That’s rage farming. And why did we come to rage farming? Lost my… Neville Hobson Yeah, you lost your train of thought. No, we actually moved on from that question, which was the steps to take, and you were going through each of the steps… Philippe Borremans Yeah, so listening for rage farming is one of those things that you need to do. And in the context of rage farming, it’s no use at all to go against the bad actors, because they’re in there to make money. Most of the time they have a whole network. It’s no use at all.
But you need to then focus on the audiences that you can at least still inform. So not, in this example, the more conservative community online, because you will not change their minds. That’s their cultural background; that’s how they think about the world. So you actually need to know very well where you can make a difference or not, which is not always easy. Shel Holtz Let’s stick with this theme of gaps. The respondents to your survey were mostly C-suite, director-level professionals. Is there a generational gap in how senior leadership views risks compared to the lower-level, more junior practitioners? They’re the ones who are monitoring the feeds, and they’re the ones who are going to be tasked with implementing a response. Is there a gap between them and the senior leadership in terms of how they perceive these issues? Philippe Borremans I couldn’t get that out of the survey. I could probably look at it more deeply, but my gut feeling, based on experience, is that you have some senior leaders who definitely see these risks, but on a very strategic level. And there is a gap in translating that into, shall we call it, an operational level. That’s what other responses and other questions tell me. Like, we know it’s difficult to manage trust compared to five years ago. Yeah, but you don’t have the benchmark, so how would you know? It’s just a gut feeling that you have. We know AI-generated crises are a top risk. Yeah, but you don’t have your protocol. So it’s that translation, I think. So there’s a top layer, I think, that actually reads all the reports and meets with senior peers, and they talk about geopolitics and the world changing and polycrisis, what have you, and they understand. But then how do you translate that into actual practical things, into operational stuff? How do you upskill your team, your communications team, today, so that they can actually face all these new issues?
How do you change and adapt your crisis communication preparedness planning? How do you integrate that? Those are the kinds of practical questions that probably don’t trickle down. And of course, if down there you have more junior people, they maybe wouldn’t know the best way to go about it. That’s my feeling. Neville Hobson: I’d like to talk a bit about testing. You can have a crisis communication plan, indeed more than one, which is not much use if you don’t ever test it to see if it all works. From what I’ve heard anecdotally over the years, I suspect that’s a major hurdle for many, who perceive it as, you know, organization-wide: this is a massive project to get through. And yet I’ve often wondered, do you even have to do that? Your report talks about things like embracing micro simulations, which maybe you could talk about a little. But I’m also thinking of something I found quite intriguing: make testing a governance requirement. And I suppose that makes sense in jurisdictions where testing isn’t any kind of legal requirement, so you do this voluntarily. But can you talk a little about embracing micro simulations in particular, and maybe give some examples of how to make testing seem less daunting to a communicator, and easy enough that they can actually implement some kind of testing process? Philippe Borremans: Yeah, and that’s one of the things that comes back when I talk to people who are not specialized, so communications colleagues. I was recently at a conference and someone said, I am so convinced of this, but how do I translate that and make the case to my management, because they see only the costs. Now, I actually have a little AI assistant that I trained to calculate the return on investment of these things. But people think a simulation is this big thing, right? You see ambulances coming, and probably a big war room with big screens and what have you.
It doesn’t have to be that kind of simulation exercise every single time. Organizations can start from the minimum, which is micro simulations. I have a small micro simulation platform that I coded myself, and I do workshops with it. It’s a half-hour exercise, lunch-and-learn time, right? Get people around the table with a sandwich and say, okay, what is the crisis we’re going to role-play today? Half an hour, you get feedback, fine. You can do that every single week. People find it fun, but it trains the muscle, because it’s based on real scenarios and real feedback. Then there are tabletop exercises. You have many different forms and formats: they can range from one hour to three hours, they can be functional exercises, they can be completely invented exercises. And let’s not forget, if communications people have no experience at all, there are actually simulation kits you can pay for, and they’re not expensive: download one, read through the manual, and go through the motions. That also trains you, maybe as a non-specialized communicator, in what it actually means to manage and run good simulations. But the most important thing is that it doesn’t have to be the big thing. You can do micro simulations on a very regular basis and make them fun. You can do tabletop exercises every quarter, hopefully with an executive team, but put it on the agenda. And if you are in certain industries, I would actually say you need a full-scale simulation exercise every year, if you’re in the petrochemical industry and what have you. The point is, you can actually position this not as a cost center but exactly as corporate insurance. We know, based on research and facts, that organizations that train their plan not only get through a crisis much quicker but also rebuild after a crisis much quicker, and that’s where the money goes. If it takes you two years to rebuild, that’s a lot of money. If you can shorten that by half or even more, that is the actual gain.
And that comes from training, training, and training. There is a reason why I was in the Navy. There’s a reason why the captain of the ship did fire exercises every single day. And after the 52nd one, you go, why are we doing this stupid thing? But when you actually have a fire, you know why. Shel Holtz: Ten percent of organizations never test their plans at all, according to your report. What happens to these organizations when they’re confronted with a black swan event? I mean, can you wing it these days? Philippe Borremans: It’s a good question. Can you wing it? Some organizations wing it, and then suddenly they get through it and you’re like, wow, how did you… But that’s more luck than anything else, I think. Now, black swan incidents, of course, are interesting, because those are the ones you cannot plan for. Well, you cannot plan for them, but you can prepare for them. Because if you actually build that agile muscle around crisis management and crisis comms, you are already much better prepared than somebody who doesn’t have that agile muscle, who is strictly following protocol and old-school plans and is then suddenly confronted with a black swan incident. And that’s why I’m a strong believer in working much more agilely. Again, you need plans, you need protocols, fully agree, but you actually need an agile communications team. We know things go very fast, and they come from every single corner. You need that mindset. You need that agility muscle in there. And then teams are actually ready to take what comes and move in the moment. And I do see a link to another thing which is very difficult for communicators of my generation, because we were trained like that; at least I was, at PR school. I was trained that you do not communicate until you have all the facts. It took me a couple of years to switch that. When I worked for the UN agencies during the pandemic and other epidemics, you actually needed to communicate without having all the facts.
And it’s very uncomfortable. It runs completely counter to our training. But everybody in communications today should have that skill, because most of the time you will not have all the facts, and the facts will change day after day after day. So you need that muscle again, that agility. That’s the most important thing today, I think. Neville Hobson: I would agree with that view, Philippe. Shel Holtz: The last question I have before we get to our traditional final question: you’ve got a PR manager in a company who wakes up tomorrow morning, finds your report, reads it, and realizes they’re part of the 77% with no AI protocol. What should they do? What are the first steps they should take to update their crisis plan? Philippe Borremans: I think if they don’t have a protocol now, it means it hasn’t been on the agenda, or not on the radar; they’ve heard a couple of things. So first of all, get informed about what it actually means. What is a deepfake? What are the different things that could happen? Then see, okay, how relevant is that for our organization? And then translate that into a couple of very basic steps: the monitoring, the protocol setup, and the what-if exercises. What if this happens tomorrow? What would we actually do? What would it mean for our audiences, for our executives, for our stakeholders? And how do we translate that, not into a big plan and a long SOP, you know, but into simple steps? Most of the time it will not be a communications team of 25 people; it will be one or two, so maybe just split up the roles. What do we do if a deepfake pops up tomorrow? How will we do the triage? Because you don’t have to react to everything. And if we decide to react, who are the first people we need to inform? Sometimes it’s really about getting the very basics in place. That’s already much more than the 90% of others who are not looking at this at the moment. Neville Hobson: Sound advice.
And of course, we now come to that point of the question we didn’t ask you that you wished we had, or hoped we would. What would that be, if there is one? Philippe Borremans: That would be, “Philippe, when do we have another drink in Brussels?” Shel Holtz: Not soon enough. Neville Hobson: I like that. That’ll do. I like that one, yes. It’s been a long time since we had that drink in Brussels, Philippe, so we ought to. Philippe Borremans: Or when do we meet face to face again? Because that’s been a very long time as well. Neville Hobson: Well, you and I are in Europe, so it’s easy for Shel to come over here. And in fact, going to the States these days isn’t a very attractive proposition, I think, to many of us over here. But it’s been a terrific conversation, and I think you’ve shared some great insights for our listeners. Where can people get hold of you? How would people find you online? Philippe Borremans: Well, the main thing, if they’re interested in the topic, is that it’s maybe a good idea to subscribe: I’ve got a free newsletter every week where I talk about risk, crisis, and emergency comms, and AI. That’s at wagthedog.io. And if they need support before, during, or after a crisis, there’s my corporate website, which is riskcomms.com. Shel Holtz: And you’re also on LinkedIn, I presume, and sharing your insights there as well. Philippe, it has been terrific. Thank you so much for the time. Philippe Borremans: Sure, definitely. No, thank you. It was really great seeing you again, and we definitely have to find an excuse to meet this year. Shel Holtz: I would love that. Neville Hobson: Thank you. The post AI risk, trust, and preparedness in a polycrisis era appeared first on FIR Podcast Network.
FIR #498: Can Business Be a Trust Broker in Today’s Insulated Society?
The 2026 Edelman Trust Barometer focuses squarely on “a crisis of insularity.” The world’s largest independent PR agency suggests only business is in a position to be a trust broker in this environment. While the Trust Barometer’s data offers valuable insights, Neville and Shel suggest it be viewed through the lens of critical thinking. After all, who is better positioned to counsel businesses on how to be a trust broker than a PR agency? Also in this episode: Research shows employee adoption of AI is low, especially in non-tech organizations like retail and manufacturing, and among lower-level employees. CEOs insist that AI is making work more efficient. Do employees agree? Organizations believe deeply in the importance of alignment. So why aren’t employees aligned any more today than they were eight years ago? Mark Zuckerberg changed the name of his company to reflect its commitment to the metaverse. These days, the metaverse doesn’t figure much in Zuckerberg’s thinking. In his Tech Report, Dan York reflects on Wikipedia’s 25th anniversary. Links from this episode: 2026 Edelman Trust Barometer Society Is Becoming More Insular Exclusive: Global trust data finds our shared reality is collapsing Insularity is next trust crisis, according to the 2026 Edelman Trust Barometer Employers are the most trusted institution. That should worry you – Strategic The 2025 Edelman Trust Barometer has landed, and everyone in comms is about to spend the next six months quoting the same statistic HIS THEORY IS LITERALLY: The human beings of the earth don’t like each other, don’t trust each other, won’t talk to each other, won’t listen to each other. Richard Edelman Has No Clothes. (Nobody Does.) Trust amid insularity: the leadership challenge hiding in plain sight Employees say they’re fuzzy on their employers’ AI strategy JP Morgan’s AI adoption hit 50% of employees. The secret?
A connectivity-first architecture How Americans View AI and Its Impact on People and Society Only 14% of workers use GenAI daily despite rising AI optimism: Survey Offering more AI tools can’t guarantee better adoption — so what can? Only 10 Percent of Workers Use AI Daily. Getting Higher Adoption Depends on Leaders Leaders Assume Employees Are Excited About AI. They’re Wrong. Meta is about to start grading workers on their AI skills CEOs are delusional about AI adoption CEOs Say AI Is Making Work More Efficient. Employees Tell a Different Story. The Productivity Gap Nobody Measured. FIR #497: CEOs Wrest Control of AI The Alignment Paradox What Mark Zuckerberg’s metaverse U-turn means for the future of virtual reality Meta Lays Off Thousands of VR Workers as Zuckerberg’s Vision Fails Meta Lays Off 1,500 People in Metaverse Division FIR episodes that featured metaverse discussions Links from Dan York’s Tech Report Celebrating Wikipedia’s 25th Birthday and Reflecting on Being a Wikipedian for 21 Years At 25, Wikipedia faces its biggest threat yet: AI Wikipedia at 25: A Wake-Up Call The next monthly, long-form episode of FIR will drop on Monday, February 23. We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email fircomments@gmail.com. Special thanks to Jay Moonah for the opening and closing music. You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog. Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients. Raw Transcript: Shel Holtz: Hi everybody and welcome to episode number 498 of For Immediate Release. This is our long-form episode for January 2026. I’m Shel Holtz in Concord, California. Neville Hobson: And I’m Neville Hobson, Somerset in the UK.
Shel Holtz: And we have a great episode for you today, lots to talk about. I’m sure you’ll be shocked, completely shocked, that much of it has a focus on artificial intelligence and its place in communication, but there are some other juicy topics as well. We’re going to start with the Edelman Trust Barometer, but we do have some housekeeping to take care of first, and we will start with a rundown of the short midweek episodes that we have shared with you since our December 2025 long-form monthly episode. Neville? Neville Hobson: Indeed. And starting with that episode that we published on the 29th of December, we led with exploring the future of news, including the Washington Post’s ill-advised launch of a personalized AI-generated podcast that failed to meet the newsroom standard for accuracy, and the shift from journalists to information stewards as news sources. Other stories included Martin Sorrell’s belief that PR is dead and Sarah Waddington’s rebuttal in the BBC radio debate. Should communicators do anything about AI slop? And no, you can’t tell when something was written by AI. Reddit, AI, and the new rules of communication was our topic in FIR 495 on the 5th of January, where we discussed Reddit’s growing influence. Big topic, and I’m sure we’ll be talking about that again in the near future. On that day, we also published an extra unnumbered short episode to acknowledge FIR’s 21st birthday. Yes, we started out on the 3rd of January 2005, and that’s a lot of water under the bridge in that time, Shel. And I think we had quite a few bits of feedback on that episode. Shel Holtz: People dropped in and shared their congratulations. There were way too many of them to read, and many of them were very, very similar. Just to share one, this is from Greg Breedenbach, who said, “Congratulations, what a feat. I’ve been listening since 2008 and never got bored because you managed to keep it engaging and relevant.
Thanks for all the hard work.” Neville Hobson: Great comment, Greg, thank you. So for FIR 496 on the 13th of January, we reported on the call by the PRCA, the Public Relations and Communications Association, for a new definition of public relations. We explored the proposal’s emphasis on organizational legitimacy, its explicit inclusion of AI’s role in the information ecosystem, and the ongoing challenge of establishing a unified professional standard that resonates across the global communications industry. That had a few comments. Shel Holtz: That got a few comments. Gloria Walker said, “Attempts have been made from time to time over the decades to define and redefine PR. Until there is a short one that pros and clients and employers can understand, these exercises will continue. Good luck.” And Neville, you replied, you said, “You’re right, Gloria. This debate comes around regularly. One interesting precedent was the Public Relations Society of America led effort in 2011 in a public consultation to redefine PR. That process was deliberately open and received broad support from professional bodies and their members around the world.” And Philippe Borremans out of Portugal had a comment. He said, “Thanks for the mention of my comments. Hope it helps in the definition exercise.” Philippe, of course, wrote a LinkedIn article in response to the definition. There were some other comments on this episode, including one from Marybeth West. You can go find that on LinkedIn. This was a rather lengthy exchange between Marybeth and you that is just too long to include here. Neville Hobson: Great. And then in FIR 497 on the 19th of January, just a week before we recorded this current episode, we unpacked the latest AI Radar report from BCG, formerly known as Boston Consulting Group, that says AI has graduated from a tech-driven experiment to a CEO-owned strategic mandate.
We examined this evolution that places communicators at the center of a high-stakes transition as AI moves from pilot phase into end-to-end organizational transformation. One comment we had to that: Shel Holtz: From our friend Brian Kilgore, who said, “Haven’t read the report yet, but will soon. Sometimes when I read a link first, I can’t get back to the comments.” But he continues to say, “I once took a job that was structured by Boston Consulting Group. My employer used the BCG report as the basis for the job description. It worked out well.” Neville Hobson: Excellent. So that’s where we’re at. Some good stuff since the last episode. And of course, now we’re about to get into the current. Shel Holtz: And yesterday I published the most recent Circle of Fellows, the monthly panel discussion with members of the class of IABC Fellows. This one was on mentoring. It was a fascinating conversation featuring Amanda Hamilton-Atwell, Brent Carey, Andrea Greenhouse, and Russell Grossman. The next Circle of Fellows—mark it in your calendar because this one’s going to be very interesting and maybe even controversial—this is going to be at noon Eastern time on Thursday, February 26th and it’s all about communicating in the age of grievance. This will feature Priya Bates, Alice Brink, Jane Mitchell, and Jennifer Waugh. Neville Hobson: You’re such a tease, Shel, with that intro, I have to say. So yeah, go sign up for it, folks. I’d also like to mention that in December, IABC announced the formation of a new shared interest group, or SIG, that Sylvia Cambier and I are leading. It’s called the AI Leadership and Communication SIG. And I’m delighted that we have attracted 70 members so far. I’m also delighted to share that our first two live events are scheduled for February. On the 11th of February, we’re hosting a webinar for IABC members to introduce the SIG, explain why we formed it, what it stands for, and how it approaches AI through a leadership and communication lens. 
Then on the 25th of February, as part of IABC Ethics Month, we’re hosting a webinar on AI ethics and the responsibility of communicators. This is a public event open to members and non-members that explores the challenges and responsibilities communicators face when introducing AI, including transparency and trust, stakeholder accountability, and human oversight. We’ve included links in the show notes so you can learn more about these events and sign up as well if you’d like to. Shel Holtz: Sounds great, I’m planning to attend those, schedule permitting. And that wraps up our housekeeping. Hooray! It’s time to get into our content, but first you have to listen to this. Neville Hobson: Our lead discussion this month is the 2026 Edelman Trust Barometer, which landed last week at the World Economic Forum in Davos, Switzerland, with a stark framing: trust amid insularity. But before we get into the findings, a quick word on what the Edelman Trust Barometer actually is. Many of you may not know why this is significant. The Edelman PR firm has published the Trust Barometer every year since 2001, making this its 26th edition. It’s based on a large-scale annual survey across 28 countries, tracking levels of trust in four core institutions: business, government, media, and NGOs, alongside attitudes to leadership, societal change, and emerging issues. Over time, it has become one of the most widely cited longitudinal studies of trust globally, not because it predicts events, but because it captures how public sentiment shifts year by year. After more than two decades of tracking trust globally, Edelman’s core finding this year is that we are no longer just living in a polarized world, but one where people are increasingly turning inward. That’s that word “insularity” I mentioned earlier. The report suggests that sustained pressure from economic anxiety, geopolitical tension, misinformation, and rapid technological change is reshaping how trust works.
Rather than engaging with difference, many people are narrowing their circles of trust, placing greater confidence in those who feel familiar, local, and aligned with their own values, and withdrawing trust from institutions or people perceived as “other.” At a headline level, overall trust is broadly stable year on year. The global trust index edges up slightly, but that masks important differences. Trust continues to be significantly higher in developing markets than in developed ones, where trust levels remain flat or fragile. As in recent years, employers and business are the most trusted institutions globally, while government and media continue to struggle for confidence in many countries. What is notably sharper this year is the distribution of trust. The income-based trust gap has widened further, with high-income groups significantly more trusting than low-income groups. Edelman also finds growing anxiety about the future. Fewer people believe the next generation will be better off, and worries about job security, recession, trade conflicts, and disinformation are at or near record highs. A defining theme running through the report is what Edelman calls insularity. Seven in 10 respondents globally say they’re hesitant or unwilling to trust someone who differs from them, whether in values, beliefs, sources of information, or cultural background. Exposure to opposing viewpoints is declining in many countries, and trust is increasingly shifting away from national or global institutions towards local personal networks: family, friends, colleagues, and employers. Compared with last year’s focus on grievance and polarization, the 2026 report suggests a further step from division into retreat. The concern is not just disagreement, but disengagement—a world where people are less willing to cross lines of difference at all. 
In response, Edelman positions trust brokering as a necessary answer to this environment—the idea that organizations and leaders should actively bridge divides by facilitating understanding across difference rather than trying to persuade or convert. This concept sits at the center of the second half of the report. It’s also worth noting that Edelman’s framing, particularly around trust brokering and the role of institutions, has attracted a number of critical responses. We’ll highlight some of those critiques in our discussion alongside our own perspectives and what this year’s findings mean in practice. Taken together, the 2026 Trust Barometer paints a picture of a world where trust hasn’t collapsed, but it has narrowed, becoming more conditional, more local, and more shaped by fear and familiarity than by shared institutions or common ground. That raises important questions about leadership, communication, and the role organizations are being asked to play in society. So let’s unpack what Edelman is telling us this year. What stands out in the data where it feels like a continuation of recent trends and where this idea of insularity marks something more fundamental in how trust is changing? Shel? Shel Holtz: Well, we would be remiss if we didn’t acknowledge that this annual ritual has attracted a torrent of criticism over the years. Criticism raises some uncomfortable questions about what we’re actually measuring and, more importantly, whose interests the barometer serves. Now, none of this minimizes the value of the data that has been collected. For the eight years that I have been working for my employer, I have extracted points that I think are relevant and share these with our leadership. I’ve already undertaken that exercise this year. So what I’m about to share with you is a critique, but I don’t want anyone thinking this means you should ignore the report. It just means you should apply some critical thinking as you go over this information. 
And let’s start with the most fundamental critique: the methodology and sample selection. Clean Creatives, which is a climate advocacy organization, has documented how Edelman’s country selection appears strategically aligned with the firm’s client base. The United Arab Emirates, for instance, was only added to the trust barometer in 2011, conveniently right after they became an Edelman client in 2010. And wouldn’t you know it, the barometer regularly finds that trust in the UAE government remains among the highest in the world. And by the way, that’s a quote, “remains among the highest in the world,” findings that are then dutifully promoted by state media. Consider the question: is the trust barometer measuring trust or is it manufacturing it for the C-suite? The issue gets even more problematic when you look at the top of the leaderboard. Six of the highest-ranked governments in recent editions—China, the United Arab Emirates, Saudi Arabia, Indonesia, India, and Singapore—are rated by Freedom House as either not free or partly free. Researchers studying authoritarian regimes have identified what they call autocratic trust bias. It’s a phenomenon economist Timur Kuran calls “preference falsification.” In other words, people don’t exactly feel free to reveal their true opinions when they might face some sort of prosecution for indicating that they don’t trust their government. And here’s where David Murray’s recent critique hits the nail on the head. David is a friend of mine. He’s a friend of the show and he has been an FIR interview guest. And he published a takedown of what he calls this wearying annual ritual. David points out the sheer absurdity of Edelman’s latest focus: insularity. The 2026 report claims that seven in 10 people are insular, as you mentioned Neville, retreating into familiar circles. Edelman’s solution, as you mentioned again, is the trust brokers. 
And of course, the report finds that employers are the ones best positioned to scale this trust brokering skill set. But as Murray observes, there’s something deeply hollow about a global PR machine using AI and always-on monitoring to lecture us on the human skill of listening without judgment. It’s a case of “human hires machine to reassure self he is human.” Now consider Edelman is a $986 million global PR firm whose stated purpose is to evolve, promote, and protect their clients’ reputations. So when the research concludes year after year that business must lead and that my employer should be the primary trust broker, you have to ask: is this research or is this a pitch deck? Is Edelman documenting a phenomenon or are they selling a solution that just happens to require companies to hire more communications consultants to teach conflict resolution training? There’s also the question of academic rigor. Despite its massive influence, Edelman hasn’t made the full data set available to independent researchers. When their 2023 findings about polarization were criticized for lumping democratic and authoritarian countries together, they produced a reanalysis, but only after removing data from China, Saudi Arabia, and the UAE. And, surprise, the core finding—that business must lead—remained intact. The conflict of interest concerns extend even further. Edelman has been documented working with fossil fuel giants like Shell, Chevron, and Exxon Mobil. They were one of the largest vendors to the Charles Koch Foundation, yet the barometer presents findings about climate change and business ethics without disclosing these relationships. Peer-reviewed research found Edelman was engaged by coal and gas clients more than any other PR firm between 1989 and 2020. When a firm with that client roster tells us that business is the only institution that is both ethical and competent, we should probably raise an eyebrow. 
Look, I’m not saying the underlying trends—polarization, information chaos, erosion of truth—aren’t real. These are very serious shifts in our reality, but we need to be critical observers of the research. We need to ask who benefits from the conclusion that employers should step into the void left by failing democratic institutions and who profits from the narrative that CEOs, not citizens, should lead societal change? The Edelman Trust Barometer has become the ultimate gathering of elites at Davos telling each other what they want to hear. It provides a veneer of data-driven legitimacy to corporate overconfidence. But if we’re serious about rebuilding trust, we might want to start by questioning the research that so conveniently serves the interests of those who are producing it. Neville Hobson: Yeah, that’s quite a scathing analysis. I read David Murray’s blog post, really, really good, entertaining read in his inimitable style. One that actually mentions some points that are really right up there with some of the critiques you raised from your narrative—a post by Sharon O’Day. Sharon’s a digital communication consultant based in Amsterdam. I think she’s on the button with most of what she writes. I read her content on LinkedIn frequently. She’s got about 82,000 followers on LinkedIn, so she’s got some credentials and credibility. She talked about this, where her headline is the one that kind of sets the scene for what she writes in an article for Strategic Global. She says, “Employers are the most trusted institution—that should worry you,” says Sharon. She goes into a description of what the report is and what the big finding is about “my employer is now the most trusted institution.” She warns before internal communicators rush to embrace “trust brokering”—Edelman’s proposed solution to all this—we should ask what kind of trust are we actually talking about? She goes on to summarize what, in her view, Edelman gets right about this. 
The trust barometer lands strongly, she says, because it tells people what they already suspect, but with graphs. I did like that little bit there. So she talked a bit about the seductive appeal of trust brokering. And I thought this was a sharp analysis. Edelman’s solution is trust brokering: help people work across difference, acknowledge disagreement, translate perspectives, surface shared interests. Employers as the most trusted institution should facilitate this. You can see why this resonates, she says; it offers organizations a constructive role without being overtly political. For internal communicators, it suggests evolution from message delivery to dialogue facilitation. It fits our existing narratives nicely, she says. But the problem isn’t that this is wrong. It’s that it treats trust as primarily a relational challenge, when in most organizations it’s fundamentally structural. The core weakness, says Sharon, is assuming trust is an emotional state that can be rehydrated with better listening. She says trust is a systems problem, in fact. Workplace mistrust is often entirely rational, she says. People distrust organizations because they’ve watched restructures framed as “growth,” AI introduced without safeguards, workloads expand as headcount contracts, risk pushed downwards while control stays at the top. That’s a pretty keen assessment, I think, of reality in most organizations. And she notes being asked to engage openly feels less like inclusion and more like exposure. Frame trust as sentiment and the solution defaults to messaging. Understand trust as system behavior and the role shifts towards making systems legible: how decisions are made, where constraints sit, what won’t change. And she then talks about when insularity becomes moral judgment, reminding us this now applies to 70% of people globally, according to the trust barometer. The danger: this subtly relocates responsibility. 
If trust is low because people are insular, help them become more open. But what if mistrust is entirely rational? And she warns again that trust isn’t a moral virtue; it’s a calculation people update based on what organizations do, not what they say. Trust in an employer is not the same as trust in a democratic institution. It’s shaped by dependency as much as belief. Your employer controls your income, your professional identity, and often your healthcare and visa status. That changes the dynamic. So she winds up talking about the hard truth. The most worrying thing about people trusting their employer more than anything else is that they may not have anywhere else left to put that trust. That’s not a mandate to become society’s repair shop, says Sharon. It’s a warning about what happens when you’re the last institution standing and you cock it up. For communicators, the task isn’t to become trust brokers. It’s to tell the truth about the system people are inside: how it works, where constraint sits, what won’t change and why. Trust collapses when people stop expecting honesty about how decisions get made and who benefits. I think, though, that last bit in particular is a hard dose of reality. I suspect that in a sense she’s saying—I’m interpreting her words here—that communicators are part of the game, let’s say. They are not telling the truth about the system people are inside. And that’s quite an indictment to slam down on the table in the midst of this. Yet I think it’s a valid point to raise for discussion, whether you agree or disagree. It’s worth considering what she says. Are those of us who work in large organizations in particular—communicating what the organization is doing, what the leaders are saying, what’s happening—simply regurgitating a top-down perspective that isn’t the truth? Maybe that’s one way of putting it. So it adds to the questioning of Edelman’s motives and their responsibilities.
I think the people you noted, like David Murray, have done a pretty good job at that. I’m not questioning that aspect of it all. I have found, largely, what Edelman talks about to be valid, notwithstanding those questions about their motives and often undisclosed relationships. Because after all, they interview each year 20,000-plus people in God knows how many countries. And these aren’t folks who have axes to grind themselves in the same way that, let’s say, it’s alleged Edelman does. So I think it has credibility in that regard. I’m equally aware of a lot of the criticism that questions that credibility; I just don’t share it the way others do. I have found, and indeed the same with this current report, value in the information Edelman have put together and are sharing. So it’s useful to get a sense of this, particularly the annual changes in sentiment, which we’ve reported on in For Immediate Release throughout the years. I can remember actually being at the very first Edelman Trust Barometer when Richard Edelman was in London—that was in 2000, I think, or 2001. Beginning of the century, 26 years ago anyway. So it is interesting, Shel. And I think the criticisms are worthy of debate, not dismissal—unless you are quite clear you’ve got something else to say. The report is a dense document. It’s quite detailed. I found a good place to start to get a sense of what it’s all about is the top 10 findings, the snapshot views of each of the top points under the heading “Trust Amid Insularity.” So it’s definitely worth paying attention to, putting it in the context of what the critics say. Shel Holtz: And frankly, the longitudinal nature of this research—what David Murray called “wearying”—is actually where much of the value comes from: the ability to track change over time is valuable in any research. I mean, you look at engagement studies that companies do among their employees.
If you couldn’t see how any element of that survey has improved or declined over time—if all you got was one snapshot in time from a single survey—it would be of far less value. So there’s great value there. And like I say, I think there’s great value in a lot of the data in this survey. I mean, the fact that the focus is on insularity should not be any surprise. We’re seeing this every day. It’s interesting that… I think it had to be 35, 40 years ago, IABC’s Research Foundation—the lamented, long-gone IABC Research Foundation—did a study on trust. And I remember the definition they gave for trust. We were talking earlier about the definition of PR; the definition of trust is pretty fixed. It’s the belief that the party in question is going to do the right thing. It’s that simple. And the question becomes: what is the right thing? Among people who are inside their bubbles, that insularity, what do they believe the right thing is? And that is probably very different from people who are in a different bubble. And this, I think, is where that “trust brokering” idea has some legitimacy, even if it may not be presented in the best way. I think telling the truth is… well, that alone is not going to address this. If we’re not telling the truth, then we simply have no stake in this game. You can’t go anywhere from there. But if you’re telling the truth, how do you get that into the heads of the people who are not paying attention to you? They’re listening to people who say you can’t trust them. And I think that comes through engagement, not through publication, not through telling. To some extent through listening—you must do that to find out what their issues are, what they do believe. But at some point you have to start engaging with people. I mean, the profession is called Public Relations, not Public Content Distribution. And those relations have to have some give and take, some two-way exchange.
So if you have people who don’t trust you and are misinterpreting you, or are listening to false information delivered by people who have an interest in taking your organization or your institution down, you need to reach out to those people and start to engage them. And I absolutely agree with whoever it was who said that this is the direction we need to be heading in. I think they were talking about internal communications being more dialogue, but I think that’s true of the external side too. Reaching people who are in bubbles is extremely difficult. I’ll tell you, I was having a conversation with a friend of mine who, I have learned, is on the opposite end of the political spectrum from me. And I told him, “You know, I watch Fox News on a fairly regular basis. I find it important to know what the people on the other side of the political spectrum are hearing, what they believe, what they think, so it can inform my view of things.” It doesn’t change it, but it certainly informs it when I’m having conversations or I’m considering how to reach somebody. I said to him, “You ought to be doing the same. You ought to be watching some of the media that presents views contrary to your own and understand them.” And his answer to me was, “Stop watching Fox News.” He felt that I should stay in my bubble. So this is a pretty entrenched perception that people have. And it’s become very ingrained in the cultures of these insular regions, if you want to call them that. How do you reach people? I think that’s the challenge for people in communications right now: how do you reach the people who just are not interested in hearing what you have to say? They want to hear what your critics have to say, and that’s all they’re listening to. Neville Hobson: Yeah. That makes sense.
And indeed, I think that supports one of the key elements of this latest report. Traditionally in organizational communication, part of your goal is to get everyone lined up with the same message—we’re all singing from the same sheet, it’s all unified, and we go forward. This is a change. It’s not about aligning people who are different; it is about understanding the differences and still being able to engage with them, recognizing their differences. And that makes complete sense to me in the current geopolitical environment, because I believe that what we’ve seen over the past few years—and the driver for this unquestionably is what’s happening in the States since Donald Trump became president for the second term—is, as Mark Carney said in his speech at Davos at the World Economic Forum, that this isn’t a transition; we’re going through a “rupture.” It was a very good speech. I’m not so sure it is that—maybe it is a transition; it doesn’t matter what you call it—but the reality is that people are afraid in many countries. Just watch the TV news and you’ll be scared most days, particularly when you see things you couldn’t imagine happening in some of the countries where they are happening, notably the US, with crackdowns in various parts of society. It’s truly extraordinary. I think that is a big influence on this insularity, people withdrawing. Yet where the report talks about people wanting to engage with people with similar views and beliefs rather than different ones… I seem to remember a few years back—I’ve forgotten which year it was—the Edelman Trust Barometer of that particular year published something quite radical: the most trusted person in the world, if you will, is “someone like me.” I remember that. This is that, is it not? It’s “someone like me,” except the dynamics are very, very different from what they were back then.
And one of the things I feel we should really pay close attention to—aligned with what you said about engaging with people outside our individual bubbles—is that recognition of difference. People need pushing in the right way. And again, this comes back perhaps to Sharon O’Day’s critique that we’re not telling the truth. That we need to tell a different version of the truth, if that doesn’t sound kind of weird. There is always more than one version of the truth. And I think: which one do you trust? That’s a big challenge for communicators, because it surely would be easy for a senior-level communicator, particularly if they’re an advisor to the C-suite, to see when the messaging coming out of the C-suite is simply not the right messaging. I’m not saying they’re not telling the truth, far from it. What they believe to be the truth may not actually reflect what is happening. And that’s where listening really becomes key. So it means, I suppose, that communicators can rethink this whole structure in light of what Edelman’s saying, but not exclusively because of this. Take a look at one of the key findings, the first one Edelman mentions: “insularity undermines trust.” That’s something I grabbed from this when I wrote my blog post about it—a kind of reflective post I wrote a few days ago—about how, when people withdraw into themselves and stop engaging with others who hold different views, they can undermine the authority of what leaders in an organization are trying to do: by not cooperating, by simply not doing it, or even by actively dissing it, whatever it might be. Is that a new thing? Maybe it’s not, but it certainly has a mass impact if you see that sort of thing going on. “Mass-class divide deepens” is another one they talk about—the gap between high- and low-income groups.
So these are the bigger-picture issues in our society. And yet we’re seeing things going on because of these changes in geopolitics, I suppose, and this is not a good thing. Institutions are falling short; the four big institutions I mentioned at the start are falling well short on addressing it. The phrase “trust brokering”—I really don’t like that, to be honest, Shel. It sounds gimmicky. It sounds like a catchphrase someone’s come up with, which I suspect is what’s prompted a lot of the criticisms of it. I’ve even seen some people say, “Wait a minute, trust broker… isn’t that what communicators have been doing for years?” Now we’re calling it trust brokering. So we need to get past this kind of labeling confusion, I think, and look at what we must do to help leaders in particular do the right thing in their organizations and in how they’re communicating, and enable—empower, if you like—communicators to take all this forward. But there’s lots to pick from this report, I think, Shel. Shel Holtz: Yeah, I wonder how many PR agencies are going to announce soon that they are launching their “trust brokering units,” now available to engage with your organization. I’m going to invoke the IABC Research Foundation one more time. Their seminal work was the Excellence Study—Excellence in Public Relations and Communications Management—an outstanding effort. And the primary work that came out of that was a review of the literature on all of this. So a lot of academic stuff. It’s a rather lengthy book. I’ve read it; I still have it; I still refer to it. But one of the things I learned when I was reading it, way back when it came out, is this notion of “boundary spanning.” It’s a term from the academic PR literature.
And it suggests that public relations people really need to understand the perceptions and perspectives of the opposition so well that when they talk about it in the organization, people are going to be suspicious that the PR people have switched sides. You understand it so well that you can basically talk like the opposition does and convey their concerns and their critiques as if you were one of them. I don’t know how many public relations people are doing that these days. Given the results of this research, it seems to me that boundary spanning is becoming a necessary tactic for public relations practitioners. If that’s not something you have looked into, and this is the work you do as a communicator, it’s something to pay attention to. Neville Hobson: Yeah, I would agree with that. So there’s lots to absorb in this. We’ve touched on the prominent points, but there’s one that struck me as interesting on Edelman’s list of the top 10 issues—the tenth and last on their list: “Trusted voices on social media open closed doors.” And I thought that’s an interesting take. They say people who trust influencers say they would trust, or consider trusting, a company they currently distrust if it were vouched for by someone they already trust. Think about that. That’s interesting, because, separately from the Trust Barometer, we’re seeing influencers as a group—broadly speaking—under threat for lack of credibility in many cases. Some of the face-palming things I’ve read about influencers doing or saying in recent months have been, you know, slap-your-hand-on-your-head stuff. But this rings true; it makes sense to me. And maybe that is an easy way for communicators to engage with people, maybe in slightly more open ways than they have in the past, to enable that kind of thing.
So again, it’s a thought point, if you like, that’s worth considering, even though it’s not high up on Edelman’s top 10 list—it’s the 10th. Worth paying attention to, though, I think. Shel Holtz: Absolutely. And you see the opinion polls showing a shift in support, or lack of support, for one thing or another based on what some of the prominent influencers are saying when they change their view. Look at the “bro-verse” in the podcast world—people like Joe Rogan, for example—who were very supportive of Donald Trump when he was running for president; the independent vote was very supportive of Donald Trump too. And the bro-verse has shifted with what’s going on in Minneapolis and some other cities. You’re hearing Joe Rogan say, “What is this? The Gestapo in the streets now?” And now you’re starting to see that shift in opinion among independent voters away from Trump. Now, this is a correlation, not a causation. But still, it’s interesting, and it seems to validate that 10th point among those top 10 from Edelman. Neville Hobson: Agree. So lots to unpack here. We’ve scratched the surface, basically, and shared some opinions of our own. There’ll be links to the report and some other content in the show notes if you want to dive into it. Shel Holtz: And we’re going to switch gears now and talk about artificial intelligence for at least the next two reports. These are very complementary reports—the one I’m about to share, then, after Dan York’s report, Neville, your story. Very, very complementary. So let’s get started. There is a striking disconnect happening in corporate America right now, and it comes down to a gap in perception. Leaders think their AI rollouts are going great, while the view from the cubicle is “not so much.” Let’s start with the numbers. A Gallup survey of over 23,000 workers found that 45% of American employees have used AI at work at least a few times. Sounds encouraging, doesn’t it? But wait—only 10% use it every day.
Even frequent use sits at just 23%. So despite a year of breathless hype and massive corporate investment, actual day-to-day adoption remains marginal. And here’s what may be the most telling statistic: 23% of workers, including 16% of managers, don’t even know if their company has formally adopted AI tools at all. Now, think about that. Nearly a quarter of your workforce is so disconnected from the organization’s strategy that they can’t say whether one even exists. This gap suggests the shadow IT problem—employees using personal tools like ChatGPT while remaining completely unaware of their employer’s official path forward—is probably what we’re seeing in a lot of organizations. The adoption pattern breaks down along predictable and frankly troubling lines. Usage is concentrated where you would expect: technology organizations (76% of employees are using AI) and finance companies (58%). But in retail and manufacturing, those numbers crater: 33% in retail and 38% in manufacturing. AI is taking hold in the same place it always does—among the people already closest to the technology. Now, contrast this with JPMorgan Chase, which has become the poster child for successful enterprise AI adoption. When they launched their internal LLM suite, adoption went viral. Today, more than 60% of their workforce uses it daily. That’s six times the national average. What did JPMorgan do differently? Their chief analytics officer, Derek Waldron, says they took a “connectivity-first” approach. Instead of giving employees a login to a generic chatbot and calling it a day, they built AI that actually connects to the bank’s internal systems—their customer relationship management package, their HR software, their document repositories. An investment banker can now generate a presentation in 30 seconds by pulling real internal business data. The bank also understood the Kano model of satisfaction. They made the tools genuinely useful and voluntary.
They didn’t mandate usage. They bet that if the tool solved a problem, word would spread organically. They also ditched generic literacy training for segmented training—that is, teaching people how to use AI for their specific work. Now here’s where things get a little uncomfortable. JPMorgan has been candid about the consequences. Operations staff are projected to decline by 10%. While new roles like context engineers are emerging, the bank hasn’t promised that everyone will keep their job. Meanwhile, at most other organizations, we’re hitting a “silicon ceiling.” BCG, the Boston Consulting Group, found that while three-quarters of leaders use generative AI weekly, use among frontline employees has stalled at 51%. The problem is a leadership vacuum. Only 37% of employees say their organization has adopted AI to improve productivity. A separate Gallup study found that even where AI is implemented, only 53% of employees feel their managers actively support its use. Then there’s the trust issue. Nine in 10 workers use AI, but three in four have abandoned tasks due to poor outputs. The issue here isn’t access; the issue is execution. People don’t know how to prompt or critically evaluate the results. Worse, 72% of managers report paying out of pocket for the tools they need to do their work using AI. In response, some companies are taking a hard line. Meta has announced that starting in 2026, performance reviews will assess AI-driven impact. In other words, AI use is no longer optional at Meta. So where does this leave us? We have bullish leaders making massive investments while their workers are either unaware of the strategy or worried that using AI makes them look replaceable. The fundamental problem is that companies are deploying AI as if it’s just another software rollout. And it is not. It requires rethinking workflows, investing in specific training, and building tools that connect to real business data.
The gap between AI hype and actual adoption isn’t going to close until organizations figure that out. Neville Hobson: There’s a lot in there, Shel, that is interesting, I have to say. I think JPMorgan is a use case that’s definitely worth studying. I’m reading the article that appeared in VentureBeat about it. It talks about “ubiquitous connectivity”—great, two words put together—plugged into highly sophisticated systems of record. You mentioned how integrated this was with all their internal systems. You can see things there that you don’t hear other companies explaining in that way. The forward-looking approach… they’ve got leaders who are treating this the right way. As you said, they didn’t just enable this and then say, “here you go.” They literally developed it as an ongoing thing in conjunction with employees, which is really good. I think, though, that the alarm bells ring in the first part of your report, where you were talking about how employees say they’re fuzzy on their employer’s AI strategy, with many not knowing whether their employer has one or not. I’d like to think that’s not the majority, but I fear that view may be misplaced, because the ones that don’t leave employees in the dark—in other words, the ones that do it the right way—are the ones reaping the benefits. And there are simple lessons to learn from that. Workers who use AI tend to be most likely to use it to generate ideas and consolidate information, Gallup says in introducing their survey report. That makes sense, doesn’t it? So you’ve got to enable that in an organization. There’s more we’ll say about this when we get to the report you mentioned, after Dan’s report, which expands on this quite significantly. But there are some lessons to be learned from some of the things we’ve discussed on this podcast in recent episodes.
You mentioned Boston Consulting Group; we’ll talk a bit more about the survey they did that paints a very different picture on this. Still, I have to say I’ve seen other reporting, including some of what you shared here, that talks about the huge gap between the views of leaders in organizations and the opinions of employees in those organizations on the state of AI and the benefits it’s supposed to bring. There’s the Harvard Business Review report you shared as well—there’ll be links to that in the show notes—which says, “Leaders assume employees are excited about AI; they’re wrong.” And they’ve got some really credible data to back up that claim. The higher you sit in the organization, the rosier your view. Is that not true of many things in an organization, I wonder—that you’re insulated from some of the reality? Is there something communicators can do to alleviate that little problem? I suspect so. These are disconnects that do not help the organization if you really do have blind spots like that. So it’s good to see this. The HBR talks about a survey they did of 1,400 US-based employees. 76% of execs reported their employees feel enthusiastic about AI adoption. But the view from those employees was not that at all—just 31% of them expressed enthusiasm. That’s a bit different from what the execs are saying. So I wonder how we get to that reality. Add that to the climate of trust we discussed with the Edelman Trust Barometer, and the landscape is looking like a very tricky one for communicators in a wide range of areas. Add this to that list of concerns. Shel Holtz: Yeah, this report has really been focused on adoption among employees. You’re going to take a different spin on this after Dan’s report, around the perception gap between executives and employees. But I think it comes down to mismanagement of the rollout of AI in, I would have to say, most organizations.
And I think a lot of different factors contribute to this, but leaders need to be paying more attention to what they want from AI. I mean, is it really just evaluating tools that have AI baked into them that we can bring into the organization? Or is it rethinking the organization writ large based on what AI can do, in a more organic way? I love the point out of JPMorgan that an analyst can now create a deck in 30 seconds because the AI has access to all the internal data. That’s valuable. An employee can say, “That is something that is worthwhile to me.” Whereas if you give them access to Copilot because you have an Office 365 contract and everybody in your organization has access to it, you say, “Here’s Office 365, godspeed.” And you provide basic training to everybody that says, “Here’s how you write a prompt and here’s how you look for hallucinations and blah, blah, blah.” But it doesn’t tell somebody in a particular role what this can do for them. They’re going to leave that saying, “Okay, I think I can craft a good prompt now. But why would I want to do that? What would I prompt for?” I think this requires much more attention on the part of leadership and much more commitment to viewing this as a change initiative that has to be led from the top. Neville Hobson: Yeah, the point you mentioned earlier was about adoption and rollout as opposed to perception… but they’re both connected, according to Harvard’s report anyway. They say: “When organizations see AI adoption as a way to make work better for employees and communicate that, as opposed to a pursuit of efficiencies and productivity, AI efforts gain traction.” And that’s repeated in many of the surveys we could talk about. They communicate a shared purpose, involve employees in shaping the journey, and move people from resistance to enthusiasm. Makes total sense to me. The Harvard report also talks about employee-centric firms.
I thought every firm was an employee-centric firm, but maybe I got that wrong. In those firms, employees on average are 92% more likely to say they are well informed about their company’s AI strategy and 81% more likely to say that their perspectives are considered in AI-related decisions. Those are huge percentages, I have to say. They’re also 70% more likely to feel enthusiastic and optimistic about AI adoption, reporting emotions such as empowerment, excitement, and hope rather than resistance, fear, or distrust. Communication and execution—that’s the pathway, I suppose, or execution and communication, hand in hand. So slow employee adoption of AI is clearly the norm, by my judgment, based on what you’ve been saying, what I’ve listened to, and what I’m seeing in some of these reports. It makes me wonder: surely it’s a known situation, if you like, that communicators can get hold of and do something about, I would have thought. So would we expect to see a change in that area? I hope so. Great report, Dan. Thanks very much indeed. I enjoyed listening to your assessment of Wikipedia over the past 25 years. I’m a huge user of Wikipedia, and I’m as conscious as you are, and many others, of some of the challenges Wikipedia is facing, with misinformation, disinformation, AI, the works getting involved. I’m looking with interest at how Wikipedia is addressing some of those things. I receive a lot of communications from them; I’ve been a donor for years to support Wikipedia. I’m pleased to see them, I guess, recognizing the shifting landscape and doing something about it, including AI in some form in the editorial or editing elements of content on Wikipedia. It’s a challenge without question. So your take on being an editor all those years is interesting, Dan. I’ve done a bit of that, nowhere near as much as you have.
And it is interesting… I come across things I read on Wikipedia—I do read it quite a bit when I’m looking for information—where I will see something and think, “That’s not right.” And I might propose an edit in the talk pages. Rarely do I dive in and edit unless it’s something so obviously wrong, and not if I don’t have a source I can cite. So yeah, it’s interesting. And I remember you mentioning before your live editing streams on Twitch. They’re pretty cool. Yeah. Shel Holtz: I remember watching those during the pandemic. That was fun. Neville Hobson: Yeah, great recap, Dan, thanks very much—worth listening to. So let’s continue the conversation, then, on the views of CEOs and how they differ from employees’ on AI introduction. I’m going to reference a Wall Street Journal story about a survey finding that “CEOs say AI is making work more efficient; employees tell a different story.” Much of the public narrative around generative AI in organizations has been framed as a productivity story—one where AI is already saving time, streamlining work, and delivering efficiency at scale. We’ve touched on a lot of that in your earlier report, Shel, and our conversation there. But a recent Wall Street Journal report suggests there’s a growing disconnect between how senior leaders perceive AI’s impact and how employees are actually experiencing it day to day. The Journal’s reporting draws on a survey by the AI consulting firm Section, based on responses from 5,000 white-collar workers in large organizations across the US, UK, and Canada. The headline finding is stark: two-thirds of non-management employees say AI is saving them less than two hours a week or no time at all. By contrast, more than 40% of executives believe
FIR #497: CEOs Wrest Control of AI
The latest BCG AI Radar survey signals a definitive turning point: AI has graduated from a tech-driven experiment to a CEO-owned strategic mandate. As corporate investments double, a striking “confidence gap” is emerging between optimistic leaders in the corner office and the more skeptical teams tasked with implementation. With the rapid rise of Agentic AI — autonomous systems that execute complex workflows rather than just generating text — the focus is shifting from simple productivity gains to a total overhaul of culture and operating models. In this episode, Neville and Shel examine this evolution that places communicators at the center of a high-stakes transition as AI moves from a pilot phase into end-to-end organizational transformation. Links from this episode: As AI Investments Surge, CEOs Take the Lead Complete BCG Report The next monthly, long-form episode of FIR will drop on Monday, January 26. Raw Transcript: Shel Holtz: Hi everybody and welcome to episode number 497 of For Immediate Release. I’m Shel Holtz. Neville Hobson: And I’m Neville Hobson. For the past couple of years, AI in organizations has mostly been talked about as a technology story—a set of tools to deploy, experiments to run, and efficiencies to unlock. It was often led by IT, digital, or data teams, with the CEO interested but not always directly involved.
The latest AI Radar survey from BCG suggests that phase is now over. For the third year running, BCG has surveyed senior executives across global markets—nearly 2,400 leaders in 16 markets, including more than 600 CEOs. The standout finding isn’t just how much money organizations are spending on AI, or even how optimistic leaders are about returns. It’s something more structural. Nearly three-quarters of CEOs now say they are the main decision-maker on AI in their organization. That’s double the share from last year. This is not a minor shift; it’s a transfer of ownership. AI is no longer being treated as another digital initiative that can be delegated at arm’s length. CEOs recognize that AI cuts across strategy, operating models, culture, risk, governance, and talent. In other words, AI isn’t just changing what organizations do, it’s changing how they run. Half of the CEOs surveyed even believe their job stability depends on getting AI right. We’re also seeing a striking “confidence gap.” CEOs are significantly more optimistic about AI’s ability to deliver returns than their executive colleagues. BCG describes this as “change distance.” People closest to the decisions feel more positive than those who have to live with the consequences. The survey identifies three types of AI leadership: Followers (cautious and stuck in pilots), Pragmatists (the 70% majority moving with the market), and Trailblazers. Trailblazers treat AI as an end-to-end transformation and are already seeing gains. What’s accelerating this is the rise of Agentic AI. Unlike earlier tools, agents run multi-step workflows with limited human involvement. This raises the stakes for governance and accountability. This is where communicators come in. If AI is now a CEO-led transformation, communication can’t just sit at the edges. It’s not just about writing rollout messages; it’s about helping leaders articulate why AI is being adopted and what it means for people’s roles and sense of agency. 
Is this the shift that turns ambition into transformation, or does CEO confidence risk becoming a blind spot? Shel Holtz: Excellent analysis, Neville. I think there’s data in this report that is incredibly heartening. One of the characteristics of the “Pragmatist” CEOs—who represent 70% of the responses—is that they are spending an average of seven hours a week personally working with or learning about AI. I’ve never seen that before. When we introduced the web or social media, CEOs weren’t using it personally. This immersion is very helpful for the communicators who need to tell this story. What’s troubling, though, is that 14-point confidence gap between CEOs and their managers. I don’t think this is just “resistance to change.” If the people implementing the systems are less confident than the person funding them, are we headed for an “AI winter” of unmet expectations? Communicators need to become translators. Our job isn’t just selling the vision; it’s bridging a reality gap. If managers are skeptical, a CEO’s “rah-rah” AI speech will backfire. We have to translate that vision into operational safety for the staff while advising the CEO on the actual temperature of the workforce. Neville Hobson: You’ve said that well. Communicators sit right at the center of whether AI transformation is trusted or resisted. This is a different picture than before. The senior communicator now has an unspoken challenge to assume a recognized leadership role to close that gap. The appeal here is that you have a landscape ripe for a communicator to take the lead. You don’t have to sell the idea to the leadership—they already have the budget and the will. You can concentrate on persuasion and diplomacy to make sure the support is there throughout the organization. CEOs are going to need support on how to fulfill this aspect of their job. It’s also interesting to note that confidence gaps are widening. 
The 2026 Edelman Trust Report also speaks to these issues regarding the relationship between people and organizations. The communicator is going to have to write a brand-new playbook. Shel Holtz: Absolutely. And for the “Trailblazers,” the report suggests AI will lead to flatter, cross-functional organizational models. This puts middle managers at risk. If Agentic AI can plan, act, and learn multi-step workflows, what happens to the layer of management whose job is coordination and oversight? Is the CEO leading us toward a future where the “human middle” becomes redundant? How do you communicate with people who fear the technology will put them out of a job? Neville Hobson: Unquestionably a challenge. Many CEOs recognize their own jobs are on the line, too. This isn’t petty cash; we are talking about massive investments. Communicators must help employees understand this shift in structure. It’s not a CIO-led digital transformation anymore; it’s a CEO-led business redesign. Shel Holtz: To complicate things, 90% of companies say they will increase AI spending even if it doesn’t pay off in the next year. This is a “burn the boats” strategy. At what point does commitment become a sunk-cost fallacy? Neville Hobson: To summarize, the main task for communicators is helping leaders articulate why AI is being adopted. We need to bring the human element in firmly as a foundational element. AI transformation will fail or succeed as much on meaning and legitimacy as on technology. Shel Holtz: It’s an organizational change process. If the CEO owns it, they are the chief spokesperson and must articulate the vision while maintaining two-way communication. We also have to look at the strategic plan. If the direction of the industry is shifting, organizations may need to change their very aspirations and strategic goals, which requires considerable communication. Neville Hobson: Fun times ahead, communicators. 
Shel Holtz: And that’ll be a 30 for this episode of For Immediate Release.