Content Operations

https://www.scriptorium.com/feed/podcast
The Content Operations podcast from Scriptorium delivers industry-leading insights for scalable, global, AI-optimized content.

Episode List

Good content = good AI: The fundamentals that never change

Mar 23rd, 2026 11:27 AM

Good content fundamentals have been the foundation of effective product content for decades, and those same principles are exactly what make content AI-ready today. In this episode, Bill Swallow and Alan Pringle explain how attending to your hierarchy of content needs is the key to AI success. Alan Pringle: Right now, AI is not going to fix bad content problems. It is going to regurgitate that bad information, giving your end users information that’s flat out wrong. If your content at the basic source level is wrong, your AI by extension is going to be wrong. And that is the unglossy, unvarnished, hard truth that is still, I don’t think, seeping in like it should across the corporate world. Bill Swallow: It really does come back to the fact that, despite the world changing on a day-to-day basis, the fundamentals have not changed. Related links: A hierarchy of content needs Technical Writing 101, 3rd edition Structured content: a backbone for AI success LinkedIn: Alan Pringle Bill Swallow Transcript: This is a machine-generated transcript with edits. Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. Sarah O’Keefe: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change. Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction Bill Swallow: Hi, I’m Bill Swallow. Alan Pringle: And I’m Alan Pringle. BS: And in this episode, surprise surprise, we’re going to talk about content. AP: Really? Who would have thought? 
BS: But more specifically, what good content means today. Today, everything is all about AI. There is lots of change in progress with regard to AI tooling and content delivery with AI. But have the needs for content really changed? I would say that, right off the bat, if you're doing content right, you really don't have to reinvent the wheel to make it AI acceptable. AP: No. In this crazy AI-hyped world we're in, there are some very basic foundational things that tend to get overlooked because they're not sexy, and they're not special and hot and whatever else. All that kind of marketing garbage that just sets me on complete edge and makes me want to say profane things in podcasts. The bottom line is, there are things that the content world, and especially our little subdomain of it, the product content world, has been doing for decades now. And I mean decades. BS: Or should have been doing. AP: Correct. There are basic tenets that have been in place for decades, and if you're following them, you are starting down the road of success with AI. I think, to kind of prove our point, we're going to step back, look at some of the things that Scriptorium has talked about and written in the past, and see how it stacks up. And Bill, you found one. Let's talk about that blog post that Sarah O'Keefe wrote. What was the date on that again? BS: It was 2014. And that is when we came up with the hierarchy of content needs. And it really wasn't so much an invention as it was just a regurgitation of what it means to create good content. So we have a pyramid of content needs. At the bottom, we have available. So is the content available? Does it exist? Can someone get to it? I think that we've mostly solved that problem, given the wealth of information we have out on the internet. But as we know, that information is not always useful. So we go up a rung, or a layer, on that pyramid and see whether or not the content is accurate.
And if it's accurate, if it provides the correct information, that's fantastic. Then we go up another level and see whether or not the content is actually appropriate. So it can be correct. It can exist. But is it appropriate? Does it meet a reader's needs? And is it formatted in a way that works for the reader to ingest? Then we go up a step further and see whether or not the content is connected. And this is where we kind of get to the more modern aspect of content. Does it link out to correct additional resources? Is it available to people through a variety of means? And does it engage with the audience? And then finally, at the top of the pyramid, we have intelligent content. Is the content intelligent? And we're not talking about AI here at all; we are really talking about whether the content is fashioned in a way that it can be used intelligently across different media. AP: That it can be manipulated for different purposes. And that is quoting Sarah directly. And I think that is key here, because that is what AI does. It takes information and basically chops, slices, dices it, and provides it in a new way via a chatbot, for example. So that is that whole manipulation that Sarah is talking about. And we will post a link to the post in the show notes so you can read this in greater detail and see how well this hierarchy of content needs has stood up. She even talks about, for example, integrating database content, how you can pull in other information like product specifications. If you think about it through an AI lens, I think that parallels pretty closely the idea of retrieval augmented generation, where you are pulling content from other sources and kind of weaving it in with what an AI engine is providing you. So RAG, I think, could be interpreted as another way of integrating other information into the way that AI is processing that content. BS: Right, I mean, because AI, it's not really an audience, but it is a delivery point.
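The retrieval augmented generation idea Alan describes can be sketched in a few lines. Everything here is hypothetical for illustration: the tiny corpus, the crude keyword-overlap scorer, and the function names. Real RAG pipelines use vector embeddings and an actual LLM call, but the shape is the same: retrieve relevant chunks from a content source, then weave them into the prompt.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# All data and names are hypothetical; production systems use
# vector embeddings rather than word overlap.

# A tiny "database" of product content chunks.
CHUNKS = [
    "The X100 printer supports duplex printing via the Layout menu.",
    "To replace the toner cartridge, open the front panel and pull the latch.",
    "The X100 firmware can be updated over USB or Wi-Fi.",
]

def score(query: str, chunk: str) -> int:
    """Crude relevance score: number of shared lowercase words."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def retrieve(query: str, top_k: int = 2) -> list:
    """Return the top_k chunks most relevant to the query."""
    ranked = sorted(CHUNKS, key=lambda ch: score(query, ch), reverse=True)
    return ranked[:top_k]

def build_prompt(query: str) -> str:
    """Weave retrieved content into the prompt, as a RAG pipeline
    would before calling the model."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this content:\n{context}\n\nQuestion: {query}"

print(build_prompt("How do I set up duplex printing on the X100?"))
```

The point of the sketch is the division of labor: the quality of the answer is bounded by the quality of the retrieved chunks, which is exactly why the source content fundamentals matter.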
There are some structural needs that need to happen there. But ultimately, you're still writing for people. You might be writing in a way that allows the AI to repurpose and refactor the information so that the audience gets exactly what they're looking for. But it still needs to be somewhat tailored to the needs of people, because AI in itself doesn't care what the content is, but it's going to try to produce something for an eventual person to be able to read. AP: I think that in turn points to something else in our vast compendium of Scriptorium content. And that is a book that Sarah and I wrote, the first edition in 2000, which just kind of makes me shake my head. I know this is not a video podcast yet, but I'm shaking my head in disbelief. The book, Technical Writing 101, has three editions, published between 2000 and 2009. We will put a link in the show notes. You can still download the third edition. And by the way, it's free. You can get a PDF or EPUB from our store, along with some more recent resources. But to me, I flipped through that book this morning, and I was genuinely surprised at how much of the advice on how to create good product content still holds true in this AI era. Everything about writing things in a modular way, being very systematic in structuring things, even if you're not using a structured authoring tool, using a template, making things very standardized. These are all things that, yes, make for better, consistent, standard tech-comm product content for the person reading it. But let's pretend AI is the person reading, and I'm doing air quotes here, reading it. It is going to do a better job of understanding, and again, I'm sort of personifying here, and I know that's sort of a no-no.
But if you feed AI, a large language model, content that is very structured, very templatized, very standardized, and in bite-sized chunks, it is going to do a better job with it. And also, and this is very important, there's the idea of metadata, which we do talk about in that book briefly. You need to be able to label content for different audiences, because I'm thinking about someone sitting there, trying to use a product, trying to use a piece of software, talking to a chatbot. And the chatbot is going to ask them: what product are you using? What's the model number? All of those kinds of things. And now we're getting to this whole idea of labeling and breaking things apart so that a chatbot can narrow things down, just like a user of a product would. Let's say somebody has a printer that's on the highest end of the scale. They're going to have a lot more features that apply to their model than someone who bought a more basic one. But the thing is, if your product content has not clearly labeled which features are in each of the models, the chatbot is going to spit out the wrong thing. So again, this idea of breaking things up into discrete chunks and labeling them in a way where someone who wants specific information about a specific model can get it. And it doesn't matter if it's from a web page, from a PDF, from a printed book, God forbid in 2026, or from an AI chatbot. Those rules still apply. Those fundamental principles are still there. BS: Mm-hmm. AP: I think one of the biggest problems here is when people do not have those fundamentals already in place, right? BS: If they don't have those fundamentals in place, they can't get to the top of that pyramid that Sarah was talking about. And really, those fundamentals are those first three layers: content is available, content is accurate, and content is appropriate.
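Alan's printer example can be sketched as chunked content carrying metadata labels, which a chatbot filters before answering. The model names, the feature text, and the field names below are all hypothetical; the point is simply that a chunk labeled with the models it applies to can never be surfaced for the wrong model.

```python
# Sketch of chunked content labeled with model metadata, so a
# chatbot can filter to the user's specific product. Models,
# features, and field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Chunk:
    text: str
    # Metadata label: which product models this chunk applies to.
    models: set = field(default_factory=set)

CONTENT = [
    Chunk("Load paper into the main tray.", {"Basic-100", "Pro-900"}),
    Chunk("Enable stapling from the Finishing menu.", {"Pro-900"}),  # high-end only
]

def for_model(model: str) -> list:
    """Return only the chunks labeled for the given model, so the
    chatbot never surfaces features the user's product lacks."""
    return [c.text for c in CONTENT if model in c.models]

print(for_model("Basic-100"))  # the stapling topic is filtered out
```

Without the `models` label, both chunks look equally relevant to a retrieval engine, and the Basic-100 owner gets instructions for a feature their printer does not have.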
If you can actually nail those three layers of the hierarchy of content needs, you are set to then jump to connected and intelligent fairly quickly, because your content is already well written, standardized, and appropriate for different audiences. AP: So we're right back to talking about the way you put content together, your content operations, and how you have to have these fundamental principles embedded in your processes to create content that goes all the way up to the very top of the hierarchy of content needs pyramid. So then that begs the question: what is going to happen to your AI if you don't have those fundamentals in place, if you aren't all the way up that hierarchy of content needs? I'm afraid to tell you, your AI is going to fail. And this is something that I've said often, but it bears repeating, because it is clear that, unfortunately, a lot of people high up the corporate food chain do not understand this. Merely slapping AI on top of content that is fundamentally outdated and incorrect is not, right now, going to fix those problems. It is not magically going to fix them, because what is AI going to do? It is going to regurgitate that bad information, acting like it knows what it's talking about, telling your end users very definitively that you need to do this to make this happen, when it's flat-out wrong. One day AI may be able to fix that, but right now, if your information, your content at the basic source level, is wrong, your AI, by extension, is going to be wrong. And that is the unglossy, unvarnished, hard truth that is still, I don't think, seeping in like it should across the corporate world. BS: It really does come back to the fact that, despite the world changing on a day-to-day basis, the fundamentals have not changed. Nothing is new. AP: No, no.
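The layered dependency Bill describes, where the upper levels of the pyramid are unreachable until the foundational ones hold, can be expressed as an ordered checklist. The layer names come from the 2014 Scriptorium hierarchy of content needs discussed above; the check results below are a hypothetical stand-in for real editorial review.

```python
# The hierarchy of content needs as an ordered checklist: each
# layer must hold before the next one matters. Layer names are
# from the Scriptorium post; the status values are hypothetical.

HIERARCHY = ["available", "accurate", "appropriate", "connected", "intelligent"]

def highest_layer_met(checks):
    """Walk the pyramid bottom-up and stop at the first failing
    layer; returns the highest layer reached, or None."""
    met = None
    for layer in HIERARCHY:
        if not checks.get(layer, False):
            break
        met = layer
    return met

# Content that is available and accurate but not yet appropriate
# stalls at "accurate": it cannot jump ahead to connected or
# intelligent, even if those boxes were somehow ticked.
status = {"available": True, "accurate": True, "appropriate": False,
          "connected": True, "intelligent": False}
print(highest_layer_met(status))  # -> accurate
```

The bottom-up walk is the whole point: a "connected" or "intelligent" claim is meaningless while a lower layer is failing, which mirrors the argument that slapping AI on top of broken fundamentals fixes nothing.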
And if you have an AI initiative and you are part of the content world and your content operations aren't up to snuff, this is a way to get funding to bring your content operations into the 21st century. And I don't want to say that and sound glib and dismissive, but by the same token, I know for a fact there are a lot of companies out there who are still serving up their content locked up in PDFs that may be online. That is not going to fly. It doesn't go high up the hierarchy of content needs, if you want to look at it from that perspective. So it is time to break free of this idea that you present content in one particular way. You have to look at content as something that is basically a commodity; it's data that AI is going to manipulate and do whatever with to meet the needs and the wants of the people who are using the chatbots and other agents that are accessing that large language model. BS: And I think that's a good place to leave it. Thanks, Alan. AP: Thanks, Bill. Short and sweet, but it needed to be said. Conclusion with ambient background music CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links. The post Good content = good AI: The fundamentals that never change appeared first on Scriptorium.

Check in on AI: The true measure of success for AI initiatives

Feb 16th, 2026 12:25 PM

In this episode, Sarah O’Keefe and Alan Pringle explore how AI transforms content delivery from static documents into dynamic, consumer-driven experiences. However, the need for human-led governance is critical, and Sarah and Alan explore issues of accuracy, accountability, governance, and more. They challenge organizations to define AI success by its ability to deliver accurate, high-impact outcomes for the end user. Sarah O’Keefe: The metrics that are being used to measure the success of AI are all wrong. We should be measuring the success of various AI efforts based on, “Are people getting what they need? Are they having a successful outcome with whatever it is that they’re trying to do?” The metric we actually seem to be using is, “What percentage of your workflow is using AI? How many people can we get rid of because we’re automating everything with AI?” It’s the wrong metric. The question is, how good are the outcomes? Related links: Sarah O’Keefe: AI and content: Avoiding disaster Sarah O’Keefe: AI and accountability Alan Pringle: Structured content: a backbone for AI success Questions for Sarah and Alan? Register for our upcoming webinar, Ask Me Anything: AI in content ops. LinkedIn: Sarah O’Keefe Alan Pringle Transcript: This is a machine-generated transcript with edits. Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. Sarah O’Keefe: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change. Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. 
End of introduction Alan Pringle: Hey everybody, I'm Alan Pringle, and today I'm here with Sarah O'Keefe, and we want to do something that, to be honest, I've kind of dreaded: a check-in on AI in the content space. I'm very ambivalent about this topic. Even two, three years in, there's still a lot of hype, but there have also been some good things that have emerged. We need to talk about it fairly realistically. So, Sarah, get ready. Let's see if I can not curse during this. I'll try my best. Legitimately, there are some things that we need to talk about, and also some challenges, because I don't think the content world is completely ready for a lot of what's going on right now. Sarah O'Keefe: You know that we have AI that can remove cursing from podcasts, so I feel like we're good here. AP: Well, also, it's a challenge to me to behave in a PG-13, more family-friendly kind of way. So I'll do my best. SO: I have no idea what you're talking about. AP: Yeah. So let's start with the good and where things are right now with the positives. What is AI doing well right now? And let's get beyond summarization. I think we can say objectively that, in general, AI does a very good job of summarizing existing content. But I think it's doing a lot more beyond that, and we should touch on those things instead. SO: The first thing that I would say is summarization, but specifically the use case of a chatbot or a large language model, an LLM, so now we're talking about Claude, Gemini, ChatGPT, and all the rest of them, which have the ability to provide an end user with a way of accessing information, an information access point that is different from what we had previously. In the olden days, you had a book, and you had to sort of flip it open and look at a table of contents or maybe an index and navigate to a page. Fine.
Then along comes online content, and you can do full-text search, or you can then go into an internet search, right? You type into the search bar, you get a bunch of results, you click, and you sort of, no, that's not quite it. You modify your search string, you search again, and you sort of navigate your way to where you're trying to go. With the interactivity of the, you know, ChatGPT class of tools, what happens is that I ask it a question and it gives me an answer. And then I say, that's not quite what I wanted. And I can sort of zero in on exactly what I'm looking for and tell it: actually, make this easier. Or, I don't understand the words you're using; use simpler language. Give me more. Give me less. Give me a summary. Use this as a source. Do not use that as a source. It's a new way to access information. People love it. There is something psychologically helpful about a conversational search. Now, there are obviously huge issues with this, particularly around people, you know, using chatbots as their therapists, which introduces all sorts of horrifying ethical issues. AP: Personifying them as a person on the other end. Right. SO: But in the big picture, used well, it allows you to get to the information you're looking for and get at it in the way that you want. AP: There's a control issue here. I don't think the content consumer has ever had this level of control. SO: Yeah, and as a content consumer, that speaks to me. That is helpful. We're seeing increasing use of, I would say, guardrails. So, not just slamming out the AI with a bunch of stuff, but rather putting some guardrails around it, and there are various kinds of technologies that you can employ there. And that has been very helpful. And then the third thing I would point to is, when we talk about generative AI and generating content, there's a lot you can do in that sort of low-fidelity bucket.
And what I mean here is, I need an image for a presentation, but the background is the wrong color, so I can just swap it out. Now, I can do that with Photoshop. Well, some people can do that with Photoshop. AP: Well, I was about to say, I don't think you or I should be saying we can do the Photoshop, because we kind of can't. SO: Right. Well, and that's exactly it. So it's lowered the bar, right? Because I can tell the AI to swap out the background, and it will. And it applies a mid-level Photoshop capability to this image. And now I have the image that I need with a dark background so that the white text shows up in my presentation, that kind of thing. AP: Right. Yeah. SO: We can do low-stakes synthetic audio for this podcast, which, for the record, we are recording with actual human beings. But let's say that Alan curses extensively and we need to swap it out. Well, we could pretty easily generate some synthetic audio that sounds like him and that PG-ifies the original wording into something that is, you know, cleaner. It would be way funnier to just bleep it, so I don't know why we would do this, but… AP: Correct. Well, and it may come to that. The bottom line is, what you're talking about here is things that have very low risk. This is more fun stuff. The thought of doing some of what we're talking about with content that describes how to use a medical device, for example? Not sure I want to go there. But for something low-stakes like a one-off presentation that you're giving, maybe with some humor involved, I totally think that's an acceptable use, because there's no risk there. SO: That's really the key point, because let's say you're writing content for a new medical device. Now, you probably have a version one of said medical device, and you're doing a version two. So, okay, fine. We take the version one content and we sort of, you know, say, add color, because that's what we added in version two, and update all this stuff automatically.
But it then becomes very important to actually read that, look at that information, look at all the images, and make sure that everything is correct. And by the time you do that super carefully, you may have given back all the time that you saved on the back end when you basically made a copy and said, generate the new version. You have to be really careful with that, especially depending on what your stakes are in terms of regulatory or compliance requirements. You can, of course, get away with using AI, as you said, for low-stakes stuff. Now, the big risk you run there, and we're seeing this in my favorite example of low-stakes content, which is video games: the video game industry has seen huge amounts of pushback against AI-generated game content, because it's not fun. It's not creative. It feels flat. It's not art, and it's not fun to play. And so it just becomes a slog. Again, same thing. Did you use it for maybe some backgrounds here and there? Okay. Or did you use it to drive the story that you're trying to establish or set up? You know, the enemies that you're hypothetically fighting all have a certain sameness. Or you're sort of stealthing your way around the map, and it turns out that the AI-generated enemies are really dumb, in that once they turn their back, you can do literally anything and they won't notice, because it was poorly designed. AP: Right, yeah. And that's true even in the film and entertainment industry. There's been a tremendous amount of pushback for the very same reason. I read a review recently talking about a series of clips about history, I believe on YouTube, by a fairly well-known director I will not name. SO: Mm-hmm. AP: And some of the AI is frankly not done well. One reviewer basically said that when you look at the back of these AI-generated people, like an AI-generated King George, the back of his head looks like a melted candle. This is not what we want here.
If you're so focused on that sort of thing, you're not paying attention to the message. But again, this is low-stakes content. We have started getting more into the content creator point of view. We've talked about the consumer and how AI gives them much more control and flexibility in how they receive information. But let's talk about what that means for the people who have to create the information, because it's a huge shift on multiple levels. This idea of creating, especially in the product content world, these lovely designed, page-based PDFs and whatever else, and even webpages, hate to say it, those days are gone, or should be at this point. SO: Yeah, again, you know, we step back to books. You write the content, it goes through a manuscript process of some sort, and then it gets poured into a book. It gets printed on paper, which is about the least flexible thing you can imagine, right? Because I, as the book publisher, get to decide what font is on the page and at what size. And if you don't like that font, well, maybe you can get your hands on a large print edition. Maybe you can get your hands on a braille edition. Maybe. But the form factor of the content was determined by the publisher of the content, or technically the printer, through that physical book production process. PDF is not that different, in the sense that the content is bound into the PDF and it's fixed. Now, you get a little bit more control, because you can zoom in. There are some things you can do in PDF, but ultimately it's more or less still a page format determined by the author/publisher/gatekeeper. So now we talk about the web and HTML. This is all pre-AI, right? HTML goes out there, and there's actually a decent bit you can do in your browser. You can override the default font. You can override the default font size. You can say, I'm using dark mode or light mode, those kinds of things. AP: Light mode, exactly.
SO: If you have an e-book reader, you can override the default font or font size. AP: I need that font size jacked up, please. Thank you.  SO: We weren’t going to use that example. Right. Yeah. So you get a little bit more control, right? You have a little bit more control over the presentation. Now, let’s talk about what AI does to this, and particularly the large language models. Now, I, as the author, create a whole bunch of content, and I put it somewhere. And the content consumer says… AP: I’ll use it. SO: Tell me about this concept or tell me about this thing or give me information about whatever. And they get a response to that prompt, which is a paragraph or two of, you know, here’s what you need to know. And then they say, make it easier, make it simpler, write this at a fourth grade level, write this at an eighth grade level. I’m a PhD in microbiology. Give me more detail. Right. You can change the writing level. You can say make the font bigger, make the font smaller, give it to me in a PDF, show it to me in a spreadsheet. AP: I’ve even seen someone create a podcast of this document and have two people talking about it, which was freaky, but you can do that. SO: Right. So as the author and the content creator and the backend people, right, the content people, we’re accustomed to taking our content and packaging it in certain ways. Like, here’s a topic for you, or here’s a PDF, or here’s a book, or here’s a deliverable, right, a package of content. And although with structured authoring, when that came in, we let go of this idea that we, as the author, got to control the page presentation. That got automated into the system. So the person controlling the page presentation was the person who designed the publishing pipelines. But the publishing pipelines were designed on the backend by the authoring people. Now all of a sudden, we have no control over that end product. 
Just because I thought it should be a PDF or an HTML page, you can turn around and say, like you said, give it to me in a podcast, make me a video, show it to me in French, and the LLMs will do it. AP: The publishing pipeline got moved over the fence, basically, to the content consumer side, and they get to do what they want, more or less. That's where things are headed. SO: So pre-AI, we talked about content as a service, right? We load up all the content in a database somewhere, and then you, as the end user of that content, or another machine, can reach over and say, give me some content out of there. But it was still a pretty discrete request: show me that topic, or show me that string. And what is fundamentally different about AI and large language models processing that content is the degree to which you can mix and match and rework, reformat, translate, and transform that content to be presented to you, the end user, in the manner of your choice. So as an author, I kind of hate this, right? Hey, you took my stuff and you mangled it and you presented it in Comic Sans, and how dare you? And that's where we are. Authors get to create information, but they don't get to control the manner and means of distribution or presentation or formatting or language of that information. AP: On the flip side of that, and here I am going to look on the sunnier side of things, which never happens. This may be a pod person version of me. If you, as a content creator, are no longer on the hook for thinking about the publishing pipelines and all of that sort of thing, theoretically, that should free you up to create better content on the back end, because you don't have to think about all those things. Allegedly. I don't know if it's happening, but… SO: It's very hard as an author to let go of that end product, the target that you're headed for.
But fundamentally, there's a bigger problem, which is that even if I write the world's greatest explanation of how to do something, that explanation is not being presented to the end user as the thing I wrote. It's being presented after being run through the transformer, the LLM, the processing that the AI does when they ask for it. So I could literally write how to do X, and when the end user says, hey, tell me how to do X, they are not going to get that chunk of information that I wrote. They're going to get something reprocessed. Of course, now we ask the fundamental question, which is: is the reprocessed version going to be better or worse than what I wrote? And the answer is, it kind of depends on whether I am an above-average writer with an above-average understanding of what that end user wants, or whether I'm a below-average writer with a below-average understanding of what that end user wants. AP: To me, as a content creator, whether my version is better is almost irrelevant, because if the person receiving the information via the chatbot or whatever thinks that what they are getting is what they want, that's all that really matters. That the person on the receiving end of that information gets what they want and fine-tunes it to what they want. If they're happy with it, then the content creator's opinion about that is, I hate to say it, immaterial at this point. SO: Yeah, I kind of hate this timeline, because, you know, where does my voice go? And the answer is, it's gone. But you're right, of course. Again, what is the purpose of the technical and product information that we work on? The purpose is to enable people to use a product successfully. So if shoving it through an AI results in an outcome where that person uses the product successfully, then we're good. AP: I don't disagree. SO: That's the purpose of the kind of thing that we produce.
I think, though, looking at this, and this is where I see some of the big challenges going forward. First of all, we have to acknowledge that an enormous percentage of the technical content that's out there is really bad. Like, terrible. Really, really bad, and might be improved by a little trip through a chatbot that's gonna render it into actually grammatically correct English. That's a thing. AP: Harsh but fair. SO: Yeah, I think you're not the only one that's going to have some bleeping issues in this podcast. But the problem that I see right now is that the metrics that are being used to measure the success of AI are all wrong. We should be measuring the success of various AI layers and chatbots and things based on: are people getting what they need? AP: Yeah. Yeah. SO: Are they having a successful outcome with whatever it is that they're trying to do? Does the search, or the process of that conversational whatever they're doing, get them to the endpoint of, okay, I understand what I need to do, I'm good, and I walk away? The metric we actually seem to be using is: what percentage of your workflow is using AI? How many people can we get rid of because we're automating everything with AI? It's the wrong metric. The question is, how good are the outcomes? AP: To me, with this idea of how much AI versus how much human effort, there's a lack of, shall we say, human intelligence being applied here, because merely applying AI is fundamentally not going to fix something that is incorrect, bad, whatever. It's not going to magically fix it. That's a huge disconnect for me when you're talking about measuring outcomes. Whatever you dump into your large language model, if it is fundamentally bad, as in outdated and incorrect, I am pretty sure merely applying AI to it is not going to fix those two pretty gaping holes. And I don't know what it is; people hear AI and they think there's some magic involved.
No, the underpinnings have to be good for that magic to be useful, basically. SO: And I think all of us have examples of asking the chatbot a question and getting answers that are just flat wrong. Or worse, they look plausible, like they’re in the form of a plausible answer, but then you read it carefully and you’re like, this doesn’t actually say anything. It’s just word salad. Since a chatbot is effectively the average of the content database underlying it, that pretty much means the underlying content doesn’t say anything useful on this topic. So I think the place that I kind of go with this is to the question of accountability. AP: Yes. SO: Who is legally responsible for the outcomes? Now, pretty clearly, if I or an organization produces a user guide that covers a specific product and there is wrong information in that user guide, the organization is responsible. I mean, it’s your document, you’re responsible. Okay, if I, as an end user, query a public-facing LLM and get the wrong answer for something, and then I proceed to use that in my life, whose fault is that? We saw this when GPS navigation first came out: people were following the map, and it would send them off a cliff or into a construction area and they would drive off the side. Okay, whose fault is that? And the answer was always, well, it’s your fault; look up from the map and don’t drive past the sign that says, do not enter, construction zone, cliff ahead. AP: Or one-way street. Right, yeah. SO: But AI doesn’t come with, I mean, it comes with warning labels, right? But we don’t see them. We don’t process them. What we see is a conversation where we say, tell me more about that. And it tells you more about that. And it feels as though you’re talking to a human. And therefore, when you push on something and say, are you sure?
And it says yes, because what’s the typical answer when somebody says, are you sure? It’s yes. Is it actually sure? No, it’s not sentient. So if I query a public-facing LLM, it reprocesses a bunch of content and tells me how to do a thing that is in direct contradiction to what the official user documentation says, whose fault is that? I think it’s mine because I used the public-facing LLM. Now, what if the organization that makes the product puts up a chatbot and I query the organization’s chatbot? How do I do X? And especially if that chatbot is your frontline tech support, like you cannot get to a human. You have to go through the chatbot. I asked the chatbot a question, and it says, do it this way. And it happens to be wrong. Is the organization liable? I don’t know the answer, I think yes, but I’m not sure. AP: The bottom line here is governance. There has to be some human-AI interaction here. There have to be these guardrails that you mentioned earlier, and that’s where humans have to be involved. SO: And it gets worse the better the AI gets. If it’s accurate half the time, then my hackles are up; I know it’s going to be wrong a lot of the time. If it’s accurate 80% of the time, psychologically I just assume it’s accurate all the time. So the better they get, the worse the errors are, because we don’t expect them. AP: That’s also dangerous. Yeah, right. Yeah. SO: Very occasionally, I see the opposite. I had directions to go somewhere, and the directions were literally, put this address into Google Maps, but don’t do A, B, and C, because it’s wrong. Like, the directions to get to this location are incorrect. Do not follow them. Because these days, our assumption is that the mapping apps just work. AP: And it’s not that it’s wrong most of the time, but I think part of this governance angle is we have to realize that AI is going to be wrong at times.
SO: Pretty much, yes. AP: And there are lots of reasons it can be wrong; we won’t get into all of them. So what are you going to do when it is wrong? How are you going to make sure it’s not wrong? Again, there’s this whole governance process that has to be in place. And again, I think this is where human intervention is going to be necessary, because I don’t think AI at this point has any business correcting itself in these matters. That seems sort of suboptimal to me. SO: Hmm. Yeah, I mean, hypothetically, you can tell it to check itself. And certainly there are some people doing that type of work. I think for me, fundamentally, the takeaways are that, like any other tool, there are some really useful productivity enhancements that we can and should be taking advantage of. To your point, there’s some really important governance work that needs to be done to ensure that your QA is appropriately scaled to the level of risk of your product. Medical device, very high. Silly gaming app, pretty low. Don’t really care. And we need to think about guardrails and what it means to inject the right kind of content and the various kinds of enablement tools that you can use to do that. And finally, this issue of AI as a content customer, I think, is really, really tricky because, from our point of view as content creators, it is a new delivery mechanism, right? Just like a PDF or a piece of HTML or anything else like that. And it’s a delivery mechanism that allows the end user to control how they access the content, which means we have to do way more work around the guardrails of what that means when they query the content and shape it to their own requirements. AP: Yeah, so things have progressed in the past two years, most definitely, especially in the content space. We’ve seen a lot of improvements. But there are still some big picture things we have to work out. And I think it’s gonna be interesting in the next year or two to see what happens.
You briefly mentioned there are some companies who are setting up systems that can do a decent job of checking up on themselves. That’s not where everything is right now, but I think the better these systems get, and the better the guardrails that get put in place, the more they can start to figure out: this is wrong, I need to fix it, or I need to update this with the latest information, let me go get it. So that is starting to happen more and more. I think it will become more a part of the LLM-to-chatbot process, but I don’t think we’re quite there yet. And I’m interested to see what happens next with that sort of scenario. SO: It’s definitely gonna be interesting. That much I’m sure about. AP: Yeah, I agree. So we managed to get through this without cursing. So that’s good. I think it turned out to be a more realistic conversation, and we kind of tuned out the hype, because that’s what makes me grit my teeth and sometimes yell at LinkedIn when I see certain promoted posts that I think are full of you-know-what. So anyways, I think we’ll wrap it up there. Sarah, do you have any final points you would like to sign off with? SO: I think at the end of the day, when you try to contextualize, like, what is this AI thing and what does it mean for us fundamentally, we can look at some of the other big picture shifts that we’ve made. I’ve been known to pretty dismissively compare it to a spell checker, you know? You can use it and it’ll fix some stuff, but you’d better check, because it doesn’t know the difference between affect and effect, although some of the grammar checkers now maybe do. So there’s that, but I think at the end of the day, if you are looking at content strategy and content operations at an enterprise level, you really do have to say, okay, where does AI fit into my strategy, and how can we employ it productively to do what we need to do inside this organization to produce, manage, and deliver the content that we’re working on?
AP: And I think we’re going to wrap up on that very good point. Thank you very much. SO: Thank you. Conclusion with ambient background music CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links. Questions for Sarah and Alan? Register for our Ask Me Anything: AI in content ops webinar! The post Check in on AI: The true measure of success for AI initiatives appeared first on Scriptorium.

From black box to business tool: Making AI transparent and accountable

Jan 26th, 2026 12:45 PM

As AI adoption accelerates, accountability and transparency issues are accumulating quickly. What should organizations be looking for, and what tools keep AI transparent? In this episode, Sarah O’Keefe sits down with Nathan Gilmour, the Chief Technical Officer of Writemore AI, to discuss a new approach to AI and accountability. Sarah O’Keefe: Okay. I’m not going to ask you why this is the only AI tool I’ve heard about that has this type of audit trail, because it seems like a fairly important thing to do. Nathan Gilmour: It is very important because there are information security policies. AI is this brand-new, shiny, incredibly powerful tool. But in the grand scheme of things, these large language models, the OpenAIs, the Claudes, the Geminis, they’re largely black boxes. We want to bring clarity to these black boxes and make them transparent, because organizations do want to implement AI tools to offer efficiencies or optimizations within their organizations. However, information security policies may not allow it. Related links: Sarah O’Keefe: AI and content: Avoiding disaster Sarah O’Keefe: AI and accountability Writemore AI LinkedIn: Nathan Gilmour Sarah O’Keefe Transcript: Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. Sarah O’Keefe: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change. Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction Sarah O’Keefe: Hey everyone. I’m Sarah O’Keefe. Welcome to another episode. 
I am here today with Nathan Gilmour, who’s the Chief Technical Officer of Writemore AI. Nathan, welcome. Nathan Gilmour: Thanks, Sarah. Happy to be here. SO: Welcome aboard. So tell us a little bit about what you’re doing over there. You’ve got a new company and a new product that’s, what, a year old? NG: Give or take, yep. SO: Yep. So what are you up to over there? Is it AI-related? NG: It is actually AI-related, but not AI-related in the traditional sense. Right now, we’ve built a product, or tool, that helps technical authoring teams convert from traditional Word or PDF formats, which make up the bulk of the technical documentation ecosystem, to structured authoring. That means they can get all of the benefits of reuse, easier publishing, and high compatibility with various content management systems, and can do it in minutes where traditional conversions could take hours. So it really helps authoring teams get their content out to the world at large in a much more efficient and regulated fashion. SO: So I pick up a corpus of 10 or 20 or 50,000 pages of stuff, and you’re going to take that, and you’re going to shove it into a magic black box, and out comes, you said, structured content, DITA? NG: Correct. SO: Out comes DITA. Okay. What does this actually … Give us the … That’s the 30,000-foot view. So what’s the parachute level view? NG: Perfect. Underneath the hood, it’s actually a very deterministic pipeline. A deterministic pipeline means that there is a lot more code supporting it. It’s not an AI inferring what it should do. There’s actual code that guides the conversion process first. So going from, let’s say, Word to DITA, there are tools within the DITA Open Toolkit that facilitate that much more mechanically, rather than trusting an AI to do it. We know that AI does struggle with structure, especially as context windows expand. It becomes more and more inaccurate.
So if we feed these models far more mechanically created content, they become much more accurate. You’re only trusting them with the more nuanced parts of the process. There’s a big difference between determinism and probabilism: determinism is the mechanical conversion of something, while probabilism is allowing the AI to infer a process. That’s where we differ: our process is much more deterministic, versus allowing the AI to do everything on its own. SO: So is it fair to say that you combined the … And for deterministic, I’m going to say scripting. But is it fair to say that you combined the DITA OT scripting processing with additional AI around that to improve the results? NG: Correct. It also expedites the results, so that instead of having a human do much of the semantic understanding of the document, we allow the AI to do it as a far more focused task. Machines can read faster. SO: Okay. And so for most of us, when we start talking about AI, most people think large language model and specifically ChatGPT, but that’s not what this is. This is not a front-end, go play with it as a consumer. This is a tool for authors. NG: Correct. And even further to that, it’s a partner tool for authors. It allows them to continue authoring in a format that they’re familiar with. Let’s take Microsoft Word, for example. Sometimes the shift from Word to structured authoring could be considered an enormous upheaval. Allowing authors to continue authoring in a format that they’re good at and familiar with, and then have a partner tool that expedites the conversion to structured authoring so that they can maintain a single source of truth, makes things a little bit better, more manageable, and more reliable in the long run. So instead of having to effectively cause a riot with the authoring teams, we can empower them to continue doing what they’re good at. SO: Okay.
So we drop the Word file in and magically DITA comes out. What if it’s not quite right? What if our AI doesn’t get it exactly right? I mean, how do I know that it’s not producing something that looks good, but is actually wrong? NG: Great question. And that’s where, prior to doing anything further, there is a review period for the human authors. In the event that the AI does make a mistake, it is completely transparent: the output, the payload as we describe it, comes with a full audit report. So every determination that the AI makes is traced, tracked, and explained. And further to that, the humans are even able to take that payload out and open it up in an XML editor. So at this point in time, the content is converted; it is ready to go into the CCMS. Prior to doing that, it can go to a subject matter expert who is familiar with structured authoring to do a final validation of the content to make sure that it is accurate. The biggest differentiator, though, is the tool never creates content. The humans need to create content because they are the subject matter experts within their field. They create the first draft. The tool takes it, converts it, but doesn’t change anything. It only works with the material as it stands. And then once that is complete, it goes back into another human-centered review so that there are audit trails and it is traceable. And there is a final touchpoint by a human prior to the final migration into their content management system. SO: So you’re saying that basically you can diff this. I mean, you can look at the before and the after and see where all the changes are coming in. NG: Correct. SO: Okay. I’m not going to ask you why this is the only AI tool I’ve heard about that has this type of audit trail, because it seems like a fairly important thing to do. NG: It is very important because there are information security policies. AI is this brand-new, shiny, incredibly powerful tool.
But in the grand scheme of things, these large language models, the OpenAIs, the Claudes, the Geminis, they’re largely black boxes. Where we want to come in is to bring clarity to these black boxes. Make them transparent, I guess you can say. Because organizations do want to implement AI tools to offer efficiencies or optimizations within their organizations. However, information security policies may not allow it. One of the added benefits that we have baked into the tool from a backend perspective is its ability to be completely internet-unaware. Meaning if an organization has the capital and the infrastructure to host a model, this can be plugged directly into their existing AI infrastructure and use its brain. Which, realistically, is what the language model is. It’s just a brain. So if companies have invested the time, invested the capital in order to build out this infrastructure, the Writemore tool can plug right into it and follow those preexisting information security policies. Without having to worry about something going out to the worldwide web. SO: So the implication is that I can put this inside my very large organization with very strict information security policies and not be suddenly feeding my entire intellectual property corpus to a public-facing AI. NG: That is entirely correct. SO: We are not doing that. Okay. So I want to step back a tiny bit and think about what it means, because it seems like the thing that we’re circling around is accountability, right? What does it mean to use AI and still have accountability? And so, based on your experience of what you’ve been working on and building, what are some of the things that you’ve uncovered in terms of what we should be looking for generally as we’re building out AI-based things? What should we be looking for in terms of accountability of AI? NG: The major accountability of AI is what could it look like if a business model changes? 
Let’s focus on the large players in the market right now. There will always be risk with using these large language models that are publicly facing. A terms of service change could mean that all of the information that organizations use to leverage these tools could become part of a training data set later on down the road. It’s hard to determine what will happen in the future. So the ability to use online and offline models encourages the development of very transparent tools. Even if the Writemore tool is using a cloud model, I still hold the model accountable to report its determinations. It’s not just making things up, so to speak. So there’s a lot that goes into it. There’s a lot that we don’t know about these tools, to be totally honest. We’re still trying to determine what it looks like in a broader picture, in a broader use case, because the industry is evolving so quickly that, quite simply, we don’t know what’s coming up. SO: Sounds to me as though you’re trying to put some guardrails around this so that if I’m operating this tool, then I can look at the before and after and say, “Don’t think so.” I mean, presumably it learns from that, and then my results get better down the road. Where do you think this is going? I mean, where do you see the biggest potential and where do you see the biggest risks or opportunities or … I’ll leave it to you as to whether this is a positive or a negative tilted question. NG: There’s a lot of potential to bring this into organizations that can’t use the public tools. Like we mentioned earlier, organizations are looking into this. Municipalities are looking into AI. But with the state of the more open models right now, it’s very hard to say. So I know I keep circling back to the ability to use smaller language models. They are not only much more efficient to operate, they’re also, quite simply, cheaper to operate.
We know that the large language models require enormous computing power. But if you give them focused tasks, either assisting in the classification of topics or fulfilling requests to pull files, you can get away with using smaller levels of compute. And in today’s day and age of computing, prices are coming down as density goes up, so it’s cheaper to run a model at higher capacity than it has ever been. And it’s only going to improve over time. So empowering organizations to incorporate these tools to streamline their own workflows is going to be very important to them. On top of that, being able to abide by their information security policies only makes the idea much more compelling. And being able to encourage organizations to take full control of their documentation, without necessarily needing it to go out of house, allows organizations to keep internal costs down while still maintaining the security policies that ensure their content doesn’t leave the organization. There’s always going to be room for partner organizations to come in and help with their content strategy. But the conversion itself can be done in-house using their tools, using their content, using their teams. That really helps keep costs down; they drive the priority lists, and they can do everything they need to do to maintain that control. SO: Now, we’ve touched largely … Or we’ve talked largely about migration and format conversion. But there’s a second layer in here, right? Which we haven’t mentioned yet. NG: There is. There’s also the ability, during the conversion phase, to have an AI model do light edits. Being able to feed it a style guide to abide by means the churn that we see with these technical teams isn’t nearly as impactful. You can have technical authors still write their content.
But if a new person joins the team, they can still author the material just as normally, and then the tool can take over to ensure that it’s meeting the corporate style guide, the corporate language, and so on, in order to expedite that process. So onboarding time for new team members shrinks as well. Like I said, it’s very much a partner tool to expedite content processing, authoring, conversion, migration, and getting into a new CCMS, and that’s the real empowerment behind it. SO: And the style guide conformance. So I think we’re assuming that the organization has a corporate style guide? NG: Assuming, yes. SO: Okay. Just checking. NG: But then again, that’s… SO: Asking for a friend. NG: Of course. SO: So if they don’t have one, where does the corporate style guide come from? NG: That could be something that an organization can either generate internally or, as mentioned, work with an external vendor who specializes in these kinds of things to build a style guide so that all of their documentation follows the same voice and tone. The better the documentation, the better the trust in the content overall. SO: So, can we use the AI to generate the corporate style guide? NG: Probably. Yes. Short answer, yes. Longer answer, not without very close attention to it. SO: And doesn’t that also assume that we have a corpus of correctly styled content so that we can extract the style guide rules? NG: There’s a lot more to it. Yeah. SO: So what I’m working my way around to is, if we have content chaos, if you have an organization that doesn’t have a style guide, doesn’t have consistency, doesn’t have all these things, can you separate out what is the work that the humans have to do? And what is the work that the machine can do to get to structured, consistent, correct voice and tone and all the rest of it? How do you get from the primordial soup of content goo to structured content in a CCMS? NG: Great question.
Typically, that starts with education. We work with the teams to identify these gaps first. We don’t just throw in a tool and say, “Good luck, hope for the best,” because we see time and time again, even in manual conversion processes, that simply doesn’t work. Taking the time to work with teams, providing them with the skills and the knowledge to be successful, serves a much longer-term positive outcome. If we educate these teams on what any tool realistically needs, the accuracy of the tool goes up in the longer run. So you’re seeing multiple benefits on multiple sides. To your point about primordial soup, working with teams to identify these gaps and issues, and working to identify the standards that should go into the content before anything else, sets up not only them for success in the long run, but also any tools that they want to implement down the road. It all starts with strong content going in because, as the adage goes, garbage in, garbage out. So if we can clean up the mess first, or work with the teams to establish these standards, then the quality of output only goes up. SO: Yeah. And I mean, I think to me, that’s the big takeaway, right? We have these tools, we can do interesting things with them, but at the end of the day, we have to also augment them with the hard-won knowledge of the people. You mentioned subject matter experts, the domain experts, the people inside the organization that understand the regulatory framework or the corporate style guide, or all of those guardrails that make up what it is to create content in this organization that reflects this organization’s priorities and culture and all the rest of it. NG: And taking the time to educate users is a far less invasive process than exporting bulk material, converting it manually, and handing it back.
Because realistically, if we take that avenue or that road, we’re not educating the users, we’re not empowering them to be successful in the long run. All we’ll end up doing is all the hard work, but then in one, two, five years, we run into the same issue where we’re back to primordial soup of content, and it’s another mess. So if we start with the education and the empowerment and then work towards the implementation of tools, the longer-term success will be realized. SO: Well, I think, I mean, that seems like a good place to leave it. So Nathan, thank you so much. This was interesting and I look forward to seeing where this goes and how it evolves over the next … Well, we’re operating in dog years now, so over the next month, I guess. NG: So true. And thanks, Sarah, for having me on. SO: Thanks, and we’ll see you soon. Conclusion with ambient background music CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links. For more insights on AI in content operations, download our book, Content Transformation. The post From black box to business tool: Making AI transparent and accountable appeared first on Scriptorium.

Futureproof your content ops for the coming knowledge collapse

Nov 17th, 2025 12:30 PM

What happens when AI accelerates faster than your content can keep up? In this podcast, host Sarah O’Keefe and guest Michael Iantosca break down the current state of AI in content operations and what it means for documentation teams and executives. Together, they offer a forward-thinking look at how professionals can respond, adapt, and lead in a rapidly shifting landscape. Sarah O’Keefe: How do you talk to executives about this? How do you find that balance between the promise of what these new tool sets can do for us, what automation looks like, and the risk that is introduced by the limitations of the technology? What’s the roadmap for somebody that’s trying to navigate this with people that are all-in on just getting the AI to do it? Michael Iantosca: We need to remind them that the current state of AI still carries with it a probabilistic nature. And no matter what we do, unless we add more deterministic structural methods to guardrail it, things are going to be wrong even when all the input is right. Related links: Scriptorium: AI and content: Avoiding disaster Scriptorium: The cost of knowledge graphs Michael Iantosca: The coming collapse of corporate knowledge: How AI is eating its own brain Michael Iantosca: The Wild West of AI Content Management and Metadata MIT report: 95% of generative AI pilots at companies are failing LinkedIn: Michael Iantosca Sarah O’Keefe Transcript: Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. SO: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change. 
Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction Sarah O’Keefe: Hey everyone, I’m Sarah O’Keefe. In this episode, I’m delighted to welcome Michael Iantosca to the show. Michael is the Senior Director of Content Platforms and Content Engineering at Avalara and one of the leading voices both in content ops and understanding the importance of AI and technical content. He’s had a longish career in this space. And so today we wanted to talk about AI and content. The context for this is that a few weeks ago, Michael published an article entitled The coming collapse of corporate knowledge: How AI is eating its own brain. So perhaps that gives us the theme for the show today. Michael, welcome. Michael Iantosca: Thank you. I’m very honored to be here. Thank you for the opportunity. SO: Well, I appreciate you being here. I would not describe you as anti-technology, and you’ve built out a lot of complex systems, and you’re doing a lot of interesting stuff with AI components. But you have this article out here that’s basically kind of apocalyptic. So what are your concerns with AI? What’s keeping you up at night here?  MI: That’s a loaded question, but we’ll do the best we can to address it. I’m a consummate information developer as we used to call ourselves. I just started my 45th year in the profession. I’ve been fortunate that not only have I been mentored by some of the best people in the industry over the decades, but I was very fortunate to begin with AI in the early 90s when it was called expert systems. And then through the evolution of Watson and when generative AI really hit the mainstream, those of us that had been involved for a long time were… there was no surprise, we were already pretty well-versed. What we didn’t expect was the acceleration of it at this speed. 
So what I like to say sometimes is that the thing that is changing fastest is the rate at which the rate of change is changing. And that couldn’t be more true than today. But content and knowledge is not a snapshot in time. It is a living, moving organism, ever evolving. And if you think about it, the large language model companies spent a fortune on chips and systems to train the big large language models on everything that they could possibly get their hands and fingers into. And they did that originally several years ago. And the assumption, especially for critical knowledge, is that that knowledge is static. Now they do rescan the sources on the web, but that’s no guarantee that those sources have been updated. Or the new content conflicts with or confuses the old content. How do they tell the difference between the 13 different versions of IBM Db2, and how you do different tasks across those 13 versions? And can you imagine, especially when it comes to software, where a lot of us work, the thousands and thousands of changes that are made to those programs, in the user interfaces and the functionality? MI: And unless that content is kept up to date, and not only the large language models reconsume it but also the local vector databases on which a lot of chatbots and agentic workflows are being based, you’re basically dealing with out-of-date and incorrect content. Especially in many doc shops, the resources are just not there to keep up with that volume and frequency of change. So we have a pending crisis, in my opinion. And the last thing we need to do is reduce the number of knowledge workers available not only to create new content but to update it and deal with the technical debt, so that this house of cards, as I think of it, doesn’t collapse. SO: Yeah, it’s interesting. And as you’re saying that, I’m thinking we’ve talked a lot about content debt and issues of automation.
But for the first time, it occurs to me to think about this more in terms of pollution. It’s an ongoing battle to scrub the air, to take out all the gunk that is being introduced and that has to be taken out on an ongoing basis. Plus, you have this issue that information decays, right? In the sense that when I published it a month ago, it was up to date. And then a year later, it’s wrong. It evolved, entropy happened, the product changed. And now there’s this delta, this gap, between the way it was documented versus the way it is. And it seems like that’s what you’re talking about: that gap of not keeping up with the rate of change. MI: Mm-hmm. Yeah. I think it’s even more immediate than that. I think you’re right. But now we need to remember that development cycles have greatly accelerated. When you bring AI for product development into the equation, we’re now looking at 30- and 60-day product cycles. When I started, a product cycle was five years. Now it’s a month or two. And say we start using AI to draft new content, for example, just brand new content, forget about updating the old content. And we’re using AI to do that in the prototyping phase; we’re moving that left, upfront. We know that between then and code freeze there are going to be numerous changes to the product, to the function, to the code, to the UI. It’s always been difficult to keep up with it in the first place, but now we’re compressed even more. So we now need to start looking at how AI helps us even do that piece of it, let alone a corpus that is years and years old and has never had enough technical writers to keep up with all the changes. So now we have a dual problem, including new content with this compressed development cycle. 
SO: So the AI hype says we essentially don’t need people anymore, and the AI will do everything from coding the thing to documenting the thing to, I guess, buying the thing via some sort of an agentic workflow. But, I mean, you’re deeper into this than nearly anybody else. What is the promise of the AI hype, and what’s the reality of what it can actually do? MI: That’s just the question of the day. Those of us that are working in shops that have engineering resources, I have direct engineers that work for me and an extended engineering team. So do the likes of Amazon and other sizable shops with resources. But we have a lot of shops that are smaller. They don’t have access to their own dedicated content systems engineers, or even to their IT team to help them. First, I want to recognize that we’ve got a continuum out there, and the commercial providers are not providing anything to help us at this point. So it’s either you build it yourself today, and that’s happening. People are developing individual tools using AI, while the more advanced shops are looking at developing entire agentic workflows. And what we’re doing is looking at ways to accelerate that compressed timeframe for the content creators. And I want to use “content creators” a little more loosely, because we’re moving the process left and involving our engineers, our programmers, earlier in the phase, like they used to be. By the way, they used to write big specifications in my day. Boy, I want to go into a Gregorian chant, “Oh, in my day!” you know, but they don’t do that anymore. And basically the role of the content professional today is that of an investigative journalist. And you know what we do, right? We scrape and we claw. We test, we use, we interview, we use all of the capabilities of learning, of association, assimilation, synthesis, and of course, communication. 
And it turns out that writing is only roughly 15% of what the typical writer does in an information developer or technical documentation professional role. Which is why we have a lot of different roles, by the way, and if we’re going to replace or accelerate people with AI, it has to handle all the capabilities of all those roles. So where we are today is that some of the more leading-edge shops are going ahead, and we’re looking at ways to ingest new knowledge and use that new knowledge with AI to draft new or updated content. But there are limitations to that. So, I want to be very clear. I am super bullish on AI. I use it every single day. I’m using it to help me write my novel. I’m using it to learn about astrophotography. I use it for so much. But when the tasks are critical, when they’re regulatory, when they’re legal-related, when there’s liability involved, that’s the kind of content that we cannot afford to be wrong about. We have to be right. We have to be 100% right in many cases. Whereas with other kinds of applications, we can afford to be wrong. I always say AI and large language models are great on general knowledge that’s been around for years and evolves very slowly. But some things move and change very quickly; in my business, it’s tax rates. There are thousands and thousands of jurisdictions. Every tax rate is different, and they change them. So you have to be 100% accurate or you’re going to pay a heck of a penalty financially if you’re wrong. So we are moving left. We are pulling knowledge from updated sources, things like videos that we can record and extract and capture, Figma designs, even code, to the limited degree that there are assets in there that can be captured, and other collateral, and we’re able to build out initial drafts. It’s pretty simple. Several companies are doing this right now, including my own team. And then the question comes, how good can it be initially? 
What can we do to improve that, make it as good as it can be? And then what is the downstream process for ensuring validity and quality of that content? What are the rubrics that we’re going to use to govern that? And therein is where most of the leading edge, or bleeding edge, or even hemorrhaging edge is right now. SO: Yeah, and I mean, this is not really a new problem, and it’s not a problem specific to AI either. We’ve had numerous projects where there’s a delta between, let’s say, the product design docs and the engineering content and the code, the as-designed documentation, and the actual reality of the product walking out the door, the as-built product. Those resources were all that source material you’re talking about, right, that we claw and scrape at. And I would like to also give a shout-out to the role of the anonymous source for the investigative journalists, because I feel like there’s some important stuff in there. But you go in there, you get all this as-designed stuff, right? Here’s the spec, here’s the code, here are the code comments, whatever. Or here’s the CAD for this hardware piece that we’re walking out the door. But the thing that actually comes down the factory assembly line or through the software compiler is different from what was documented in the designs, because reality sets in and changes get made. And in many, many, many cases, the role of the technical writer was to ensure that the content that they were producing represented reality and not the artifacts that they started from. So there’s a gap. And it was their job to close that gap so that the document goes out and it’s accurate, right? And when we talk about these AI or automated workflows, any sort of automation, any automation that does not take into account the gap between design and reality is going to run into problems. The level of problem depends on the accuracy of your source materials. 
Now, I wrote an article the other day and referred to the 100% accurate product specifications. I don’t know about you, but I have seen one of those never in my life. MI: Hahaha, that’s absolutely true. That’s really true. SO: The promise we have here is that AI is going to speed things up and it’s going to automate things and it’s going to make us more productive. And I think you and I both believe that that is true at a certain level. How do you talk to executives about this? How do you find the balance between the promise of what these new tool sets can do for us and what automation looks like, and the risk that is introduced by the limitations of, you know, the technology itself? What does that conversation look like? What are the points that you try to make? What’s the roadmap for somebody who’s trying to, as you said, maybe in a smaller organization, navigate this with people who are all-in on “just get the AI to do it”? MI: That’s a great question too, because we need to remind them that the current state of AI still carries with it a probabilistic nature. And no matter what we do, unless we add more deterministic, structural methods to guardrail it, things are going to be wrong even when all the input is right. AI can still take a collection of collateral and get the order of the steps wrong. It can still include too many things or do too much. We’ve been trained as professional writers to write minimalistically, and we can control some of that through prompting. Some of that can be done with guardrails. But when you think about writing tech docs, some people might think we’re just documenting APIs or documenting tasks, and, you know, we’ve always been heavily task-oriented. But you can extract all the correct steps, in the correct order, and what all too frequently, almost universally, doesn’t come along with them is the context behind it, the why part of it. 
I always say we can extract great things from code for APIs, like endpoints and, you know, gets and puts and things like that. That’s great for creating reference documentation for programmers. But it doesn’t tell you the why, and it doesn’t tell you the steps, the exact steps; the code doesn’t tell you that. Now maybe your Figma does. If your Figma and your design docs have been done really well and comprehensively, that can mitigate it tremendously. But what have we done in this business? We’ve actually let go more UX people than probably even writers, which is counterproductive. And then you’ve got things like the happy path and the alternate paths that could exist, for example, through the use of a product, or the edge cases, right? The what-ifs that occur. We might be able to do better with the happy path, and we should, but the happy path is not the only path. These are multifunction beasts that we’ve built. When we built iPhone apps, we often didn’t need documentation because they did one thing and they did that one thing really well. But take a piece of middleware, and it can be implemented a thousand different ways. You’re going to document it by example and maybe give some variants; you’re not going to pull that from a Figma design. You’re not going to pull that from code. There’s too much of it there. It takes the human fact-checking capability to look at it and say, this is important, this is less important, this is essential, this is non-essential, to actually deliver useful information to the end user. And we need to be able to show what we can produce and continue to iterate and try to make it better and better, because someday we may actually get pretty darn close. With support articles and completed support case payloads, we were able to develop an AI workflow that very often was 70% to 100% accurate and ready to publish. 
But when you talk about user guides and complex applications, it’s another story, because somebody builds a feature for a product, and that feature boils down into not a single article but an entire collection of articles that are typed into the kind of breakdown that we do for disclosure, such as concepts, tasks, references, Q&A. So AI has got to be able to do something much more complex, which is to look at content and classify it and apply structure to separate those concerns. Because we know that when we deliver content in the electronic world, we’re no longer delivering PDF. Well, most of us are hopefully not delivering PDF books made up of long chapters that intersperse all of these different content types, because of the type of consumption, certainly not for AI and AI bots. So maybe the bottom line here is we need to show what we can do. We need to show where the risks are. We need to document the risks, and then we need the owners, the business decision makers, to see those risks, understand those risks, and sign off on those risks. And if they sign off on the risks, then I, as a technology developer and an information developer, can sleep at night, because I was clear on what it can do today. And that is not a statement that says it’s not going to be able to do that tomorrow. It’s only a today statement, so that we can set expectations. And that’s the bottom line. How do we set expectations when there’s an easy button, like the one Staples put in our faces? That’s the mentality around AI: press a button and it’s automatic. SO: Yeah, and I did want to briefly touch on, you know, knowledge base articles, which are a really, really interesting problem, because in many cases you have knowledge base articles that are essentially bug fixes or edge cases. When I, you know, hold my finger just so and push the button over here, it blue screens. MI: Mm-hmm. 
SO: And that article can be very context-specific, in the sense that you’re only going to see it if you have these five things installed on your system. And/or it can be temporal or time-limited, in the sense that, well, we fixed the bug, so it’s no longer an issue. Okay. So you have this knowledge base article, and you feed it into your LLM as an information source going forward, but we fixed the bug. So how do we pull it back out again? MI: I love that question. SO: I don’t! MI: I love it. No, I’ve actually been working for a couple of years on this very particular problem. The first problem we have, Sarah, is that we’ve been so resource-constrained that when doc shops built an operations model, the last thing they invested in was the operations and the operations automation. When I’m at a conference and I have a big room of 300 professional technical doc folks, I love asking the simple question, how do you track your content? And inevitably, I get, yeah, well, we do it on Excel spreadsheets. When I ask who actually has a digital system of record, I get a few hands. And then I ask the question, does that digital system of record, for every piece of documentation you’ve ever published, span just the product doc, or does it actually span more than product doc, like your developer, your partner, your learning, your support, all these different things? Because the customer doesn’t look at us as those different functions. They look at us as one company, one product. And inevitably, I’m lucky if I get one hand in the audience that says, yeah, we actually are doing that. So the first thing they don’t have is a contemporary digital system of record from which we can know, and automate notifications for, when a piece of documentation should either be re-reviewed and revalidated or retired and taken out. 
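The system of record Michael describes, one that can flag content for re-review or retirement (for example, once the bug behind a KB article is fixed), can be sketched in a few lines. Everything below is a hypothetical illustration, not any vendor’s actual schema; the field names and the BUG-42 identifier are invented for the example.

```python
from datetime import date

# Each record tracks when an article was last validated and, for
# temporal articles (bug workarounds), what makes them obsolete.
# All identifiers here are invented for illustration.
articles = [
    {"id": "KB-101", "last_reviewed": date(2024, 1, 15),
     "review_after_days": 180, "obsolete_when": "bug BUG-42 fixed"},
    {"id": "KB-102", "last_reviewed": date(2025, 9, 1),
     "review_after_days": 365, "obsolete_when": None},
]

fixed_bugs = {"BUG-42"}  # hypothetically fed from the issue tracker

def needs_action(article, today):
    """Return 'retire', 'review', or None for a given article."""
    cond = article["obsolete_when"]
    if cond and any(bug in cond for bug in fixed_bugs):
        return "retire"  # pull it out of the LLM's sources too
    age = (today - article["last_reviewed"]).days
    if age > article["review_after_days"]:
        return "review"
    return None

today = date(2026, 3, 1)
for a in articles:
    print(a["id"], needs_action(a, today))  # KB-101 retire / KB-102 None
```

The point of the sketch is the trigger, not the schema: once documentation lives in a queryable record rather than a spreadsheet, "pull the stale article back out" becomes an automated notification instead of institutional memory.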
The other problem we have is that all of these AI implementations and companies, almost universally, not completely, but most of them, were based on building these vector databases. And what they did, often while completely ignoring the doc team, was just go out to the different sources they had available, Confluence, SharePoint. If you had a CCMS, they’d ask you for access to your CCMS or your content delivery platform, and suck it in. They may date-stamp it, which is okay but pretty rudimentary. And they may even have methods for rereading those sources every once in a while, but unless they’re rebuilding the entire vector database, they’re not. And then, what did they do when they ingested the content? They shredded it up into a million different pieces, right? Because the context windows for large language models have limitations on token counts and things like that. Maybe they’re bigger today, but they’re still limited. So how would they even replace a fragment of what used to be whole topics and whole collections of topics? And this is why we wrote the paper, did the implementation, and shared with the world what we call the document object model knowledge graph, because we needed a way, outside of the vector database, to say go look over here, and you can retrieve the original entire topic, or a collection of topics, or related topics in their entirety, to deliver to the user. And again, unless we update that content and don’t treat it like a frozen snapshot in time, we’ll still have those content debt problems. But it’s becoming a much bigger problem now. It wasn’t as big a problem when we put out chatbots. And we’ve been building chatbots for, what, two, three, four years now. And, you know, everybody celebrated, they popped the corks: we can deflect X percentage of support cases. Customers can self-service. 
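The whole-topic retrieval idea behind the document object model knowledge graph can be sketched roughly as follows. This is an illustrative reading of the approach as described in the conversation, not Avalara’s actual implementation; all data, IDs, and function names are made up for the example.

```python
# Minimal sketch: each shredded chunk keeps a pointer back to its
# source topic, so a vector-search hit on a fragment can be expanded
# to the whole original topic (plus related topics) instead of
# serving the fragment alone. All content here is invented.

topics = {
    "t1": {"title": "Configure tax rates", "body": "Full topic text...",
           "related": ["t2"], "last_reviewed": "2025-01-10"},
    "t2": {"title": "Tax jurisdictions", "body": "Full topic text...",
           "related": [], "last_reviewed": "2023-06-01"},
}

# What a naive vector pipeline keeps: fragments, metadata stripped,
# except for the one link back into the graph.
chunks = [
    {"id": "c1", "text": "...set the rate per jurisdiction...", "topic_id": "t1"},
    {"id": "c2", "text": "...a jurisdiction is a taxing authority...", "topic_id": "t2"},
]

def expand_hit(chunk_id):
    """Given a chunk the vector search returned, retrieve the whole
    topic plus its related topics from the graph."""
    chunk = next(c for c in chunks if c["id"] == chunk_id)
    topic = topics[chunk["topic_id"]]
    related = [topics[t] for t in topic["related"]]
    return {"topic": topic, "related": related}

hit = expand_hit("c1")
print(hit["topic"]["title"])                  # Configure tax rates
print([t["title"] for t in hit["related"]])   # ['Tax jurisdictions']
```

Note that the graph side also preserves metadata like `last_reviewed`, which is exactly the information a chunks-only vector store throws away and which the staleness problem above depends on.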
And I always talk about the precision paradox: once you reach a certain ceiling, it gets really hard to increment and get above that 70%, 80%, 85%, 90% window. And as you get closer and better, the tolerance for being wrong goes down like a rock. And you now have a real big problem. So how do we build these guardrails to be more deterministic, to mitigate the probabilistic risk and reality that we have? The problem is that people are still looking for fast and quick, not right. When I say right, I mean building out things like ontologies, and leveraging the taxonomies that we labored over, with all of that metadata that never even gets into the vector database because they strip it all away in addition to shredding it up. So if we don’t start building things like knowledge graphs and retaining all of that knowledge, now we’re compounding the problem. Now we have debt, and we have no way to fix the debt. And now we get into the new world of agentic workflows, which is the true bleeding edge right now, where you have sequences of both agentic and agentive steps. The difference between those two, by the way, is that agentic is autonomous: there’s no human doing that task, it’s just doing it. And agentive has a human in the loop, helping. When you’ve got a mix of agentive and agentic processes in a business workflow, now you’ve got to worry about what happens if something goes wrong early in the chain of sequence in that workflow. And this doesn’t apply to just documentation, by the way. We’ll be seeing companies taking very complex workflows in finance and in marketing and in business planning and reporting, and mapping out, this is the workflow our humans do. And there are hundreds of steps, if not more, and many roles involved in those workflows. 
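The agentic/agentive distinction can be sketched as a workflow where each step declares whether a human must approve its output before the chain continues; an unreviewed error early in an all-agentic chain propagates to every downstream step. This is a hypothetical illustration of the concept, with invented step names and stand-in functions.

```python
# Sketch: a workflow mixing agentic (autonomous) and agentive
# (human-in-the-loop) steps. Only agentive steps pause for approval,
# so an error in an agentic step flows on unchecked.

def draft(text):
    return text.upper()  # stand-in for an AI drafting step

def classify(text):
    # stand-in for typing content (task vs. concept)
    return ("task" if "STEP" in text else "concept", text)

steps = [
    {"name": "draft", "fn": draft, "mode": "agentic"},        # no human
    {"name": "classify", "fn": classify, "mode": "agentive"}, # human reviews
]

def run(payload, approve):
    """approve(step_name, output) -> bool; called only for agentive steps."""
    for step in steps:
        payload = step["fn"](payload)
        if step["mode"] == "agentive" and not approve(step["name"], payload):
            raise RuntimeError(f"human rejected output of {step['name']}")
    return payload

result = run("follow step one", lambda name, out: True)
print(result)  # ('task', 'FOLLOW STEP ONE')
```

The design choice the sketch highlights: the human gate is a property of each step, so mapping a real business workflow means deciding, step by step, where the probabilistic risk is tolerable and where it is not.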
And as we map those out and say, where can we inject AI, not just as individual tools, like separately using a large language model or separately using a single agent, but stringing them together to automate a complex business workflow with dependencies upstream and downstream, how are we going to survive and make this work? And I think that’s why you saw the MIT study that came out where they said, you know, roughly only 5% or so of AI projects are succeeding. And I think that’s because we did the easy stuff first. We did the chatbots, and they could be lossy in terms of accuracy. But when you get to these agentic workflows that we’re building, literally coding as we speak, now you’re facing a whole different experience and ballgame, where precision and currency really matter. SO: Yeah, and I think, I mean, we’ve really only scratched the surface of this. Both of the articles that you’ve mentioned, the one that I started with and the one that you mentioned in this context, we’ll make sure we get those into the show notes. I believe they are on your, is it Medium? On your website. So we’ll get those links in there. Any final parting words in the last, I don’t know, fifteen seconds or so? MI: No, that’s good. I want to tell you the good news and the bad news for tech doc professionals. What I’m seeing in the industry hurts me. I think there are a lot of excuses right now, not just in the tech doc space but in all jobs, where we’re seeing AI being used as an excuse to make business decisions, to scale back. It may take some time until the impact of some of the poor business decisions being made now shows itself, but there’s going to be a reality that hits. And the question is, how do we navigate the interim? I’m confident that we will. I’m confident in those of us that are building the AI. I feel like I’m evil and a savior at the same time. 
I’m evil because I’m building automation that can speed things up and make people much more productive, meaning you potentially need fewer people. At the same time, I feel like when we do it, rather than an engineer who doesn’t even know the documentation space, we’re in a position to redefine our space ourselves and not leave it to the whims of people who don’t understand the incredible intricacy and dependencies of creating what we know as high-quality content. So we’re in this tumult right now, and I think we’re going to come out of it. I can’t tell you what that window looks like. There will be challenges in getting through it, but I would rather see this community redefine its own future in this transformation that is unavoidable. It’s not going away. It’s going to accelerate and get more serious. But if we don’t define ourselves, others will. And I think that’s the message I want our community to take away. So when we go to conferences and we show what we’re doing, and we’re open and we’re sharing all the stuff that we’re doing, that’s not, hi, look at us. That’s, you come back to the next conference and the next webinar and show us what you took from us and made better, and help shape and mold that transformative industry that we know as knowledge and content. And I’m excited, because I want to celebrate every single advance that I see as we share. And I think it’s incumbent upon us to share and be vocal. When I write my articles, they’re aimed not only at our own community; they’re aimed at the executives and technologists themselves, to educate them. Because if we don’t do it, who will? And it does fall on all of us to do that. SO: I think I’m going to leave it there with a call for the executives to pay attention to what you are saying, and to what many of the rest of this community are saying. So, Michael, thank you very much for taking the time. 
I look forward to seeing you at the next conference and seeing what more you’ve come up with. And we will see you soon. MI: Thank you very much. SO: Thank you. Conclusion with ambient background music CC: Thank you for listening to Content Operations by Scriptorium. For more information, visit Scriptorium.com or check the show notes for relevant links. Want more content ops insights? Download our book, Content Transformation. The post Futureproof your content ops for the coming knowledge collapse appeared first on Scriptorium.

The five stages of content debt

Nov 3rd, 2025 12:33 PM

Your organization’s content debt costs more than you think. In this podcast, host Sarah O’Keefe and guest Dipo Ajose-Coker unpack the five stages of content debt from denial to action. Sarah and Dipo share how to navigate each stage to position your content—and your AI—for accuracy, scalability, and global growth. The blame stage: “It’s the tools. It’s the process. It’s the people.” Technical writers hear, “We’re going to put you into this department, and we’ll get this person to manage you with this new agile process,” or, “We’ll make you do things this way.” The finger-pointing begins. Tech teams blame the authors. Authors blame the CMS. Leadership questions the ROI of the entire content operations team. This is often where organizations say, “We’ve got to start making a change.” They’re either going to double down and continue building content debt, or they start looking for a scalable solution. — Dipo Ajose-Coker Related links: Scriptorium: Technical debt in content operations Scriptorium: AI and content: Avoiding disaster RWS: Secrets of Successful Enterprise AI Projects: What Market Leaders Know About Structured Content RWS: Maximizing Your CCMS ROI: Why Data Beats Opinion RWS: Accelerating Speed to Market: How Structured Content Drives Competitive Advantage (Medical Devices) RWS: The all-in-one guide to structured content: benefits, technology, and AI readiness LinkedIn: Dipo Ajose-Coker Sarah O’Keefe Transcript: Introduction with ambient background music Christine Cuellar: From Scriptorium, this is Content Operations, a show that delivers industry-leading insights for global organizations. Bill Swallow: In the end, you have a unified experience so that people aren’t relearning how to engage with your content in every context you produce it. SO: Change is perceived as being risky; you have to convince me that making the change is less risky than not making the change. 
Alan Pringle: And at some point, you are going to have tools, technology, and processes that no longer support your needs, so if you think about that ahead of time, you’re going to be much better off. End of introduction Sarah O’Keefe: Hey, everyone. I’m Sarah O’Keefe and I’m here today with Dipo Ajose-Coker. He works in solutions architecture and strategy at RWS and is based in France. His strategy work is focused on content technology. Hey, Dipo. Dipo Ajose-Coker: Hey there, Sarah. Thanks for having me on. SO: Yeah, how are you doing? DA-C: Hanging in there. It’s a sunny, cold day, but the wind’s blowing. SO: So in this episode, we wanted to talk about moving forward with your content and how you can make improvements to it and address some of the gaps that you have in terms of development and delivery and all the rest of it. And Dipo’s come up with a way of looking at this, a framework, that I think is actually extremely helpful. So Dipo, tell us about how you look at content debt. DA-C: Okay, thanks. First of all, before I go into my little thing that I put up, what is content debt? I think it’d be great to talk about that. It’s kind of like technical debt. It refers to that future work that you keep storing up because you’ve been taking shortcuts to try and deliver on time. You’ve let quality slip. You’ve had consultants come in and out every three months, and they’ve just been putting… I mean writing consultants. SO: These consultants. DA-C: And they’ve been basically doing stuff in a rush to try and get your product out on time. And over time, those sorts of little errors, those shortcuts, build up, and you end up with missing metadata or inconsistent styles. The content is okay for now, but as you go forward, you find you’re building up a big debt of all these little fixes. And these little fixes will eventually add up and then end up as a big debt to pay. 
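Dipo’s point that small shortcuts accumulate into a big debt is easy to make concrete with a toy calculation; the 10% per-cycle rate and all the numbers below are purely illustrative assumptions, not figures from the episode.

```python
# Illustrative only: content debt that compounds at 10% per release
# cycle, versus paying down a fixed amount of it each cycle.
def debt_after(principal, rate, cycles, payment=0.0):
    debt = principal
    for _ in range(cycles):
        debt = debt * (1 + rate) - payment  # interest accrues, then paydown
    return debt

print(round(debt_after(100, 0.10, 12)))      # untouched for 12 cycles: 314
print(round(debt_after(100, 0.10, 12, 15)))  # steady paydown: goes negative (paid off)
```

The toy numbers show the shape of the problem: left alone, 100 units of debt roughly triples over twelve cycles, while a modest fixed paydown each cycle clears it entirely.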
SO: And I saw an interesting post just a couple of days ago where somebody said that tech debt or content debt could be thought of as having principal and interest, and the interest accumulates over time. So the less work you do to pay down your content debt, the bigger and bigger and bigger it gets, right? It just keeps snowballing, and eventually you find yourself with an enormous problem. So as you were looking at this idea of content debt, you came up with a framework for looking at it that is at once shiny and new and also very familiar. So what was it? DA-C: Yeah, really familiar. I think everyone’s heard of the five stages of grief, and I thought, “Well, how about applying that to content debt?” And so I came up with the five stages of content debt. So let’s go into it. I’m not going to keep referring to the grief part of it. You can all look it up, but the first stage is denial. “Our content is fine. We just need a better search engine. We can actually put it into this shiny new content delivery platform and it’s got this type of search,” and so on and so forth. Basically what you’re doing is you’re ignoring the growing mess. You’re duplicating content. You’ve got outdated docs. You’re building silos, and then you’re ignoring that these silos are actually getting further and further apart. No one wants to admit that the CMS, or whatever bespoke system you’ve put into place, is just a patchwork of workarounds. This quietly builds your content debt, and the longer denial lasts, the more expensive the cleanup is. As we said in that first bit, you want to pay off the capital of your debt as quickly as possible. Anyone with a mortgage knows that. You come into a little bit of money, pay off as much capital as you can so that you stop accruing the interest on the debt. SO: And that is where, when we talk about AI-based workflows, I feel like that is firmly situated in denial. 
Basically, “Yeah, we’ve got some issues, but the AI will fix it. The AI will make it all better.” Now, we painfully know that that’s probably not true, so we move ourselves out of denial. And then what? DA-C: There we go into anger. SO: Of course. DA-C: “Why can’t we find anything? Why does every update take two weeks?” And that was a question we used to get regularly where I used to work, at a global medical device manufacturer. We had to change one short sentence because of a spec change, and it took weeks to do that. Authors are wasting time looking for reusable content if they don’t have an efficient CCMS. Your review cycles drag on because all you’re doing is giving the entire 600-page PDF to the reviewer without highlighting what’s in there. Your translation costs balloon, and your project managers or leadership get angry because, “Well, we only changed one word. Can’t you just use Google Translate? It should only cost like five cents.” Compliance teams then start raising flags. And if you’re in a regulated industry, you don’t want the compliance teams on your back, and you especially don’t want to start having defects out in the field. So eventually, productivity drops and your teams feel like they’re stuck. And the cracks are now starting to show across other departments, and it’s putting a bad name on your doc team. SO: Yeah. And a lot of what you’ve got here is anger that’s focused inward, to a certain extent. It’s the authors that are angry at everybody. I’ve also seen this play out as management saying, “Where are our docs? We have this team, we’re spending all this money, and updates take six months.” Or people submit update requests, tickets, something, the content doesn’t get into the docs, the docs don’t get updated. There’s a six-month lag. 
Now the SOP, the standard operating procedure, is out of sync with what people are actually doing on the factory floor, which it turns out, again, if you’re in medical devices, is extremely bad and will lead to your factory getting shut down, which is not what you want generally. DA-C: Yeah, it’s not a good position to be in. SO: And then there’s anger. DA-C: Yeah. SO: “Why aren’t they doing their job?” And yet you’ve got this group that’s doing the best that they can within their constraints, which are, as you said, in a lot of cases, very inefficient workflows, the wrong tool sets, not a lot of support, etc. Okay, so everybody’s mad. And then what? DA-C: Everyone’s mad, and eventually, actually, this is a closed little loop, because all you then do is say, “Okay, well, we’re going to take a shortcut,” and you’ve just added to your content debt. So this stage is actually one of the most dangerous parts, because without actually solving the problem, all you end up doing is adding to the debt. “Let’s take a shortcut here, let’s do this.” The next stage is the blame stage. “It’s the tools. It’s the process. It’s the people.” Then you get calls of, “Technical writers, we’re going to put you into this department, and we’ll get this person to manage you with this new agile process,” or, “We’ll get you to be doing it in this way.” The finger-pointing begins. Tech teams will blame the authors. Authors will blame the CMS. Leadership questions the ROI of the entire content operations team. This is often where organizations see that they’ve got to start making a change. They’re either going to double down and continue building that content debt, or they start looking for a scalable solution. SO: Right. And this is the point at which people look at it and say, “Why can’t we just use AI to fix all of this?” DA-C: Yep, and we all know what happens when you point AI at garbage in. 
We’ve got the saying, and this saying has been true from the beginning of computing: garbage in, garbage out, GIGO.

SO: Time.

DA-C: Yeah. I changed that to computing.

SO: Yeah. It’s really interesting, though, because of the blame that goes around. I’ve talked to a lot of executives who, and we’re right back to anger too, it’s sort of like, “We’ve never had to invest in this before. Why are you telling us that this organization, this group, these tech writers, content ops,” whatever you want to call it, “that they are going to need enterprise tools just like everybody else?” And they are just halfway astounded and halfway offended that these worker bees that were running around doing their thing…

DA-C: Glorified secretaries.

SO: Yeah, that whole thing, like, “How dare they?” And it can be helpful, sometimes it is and sometimes it isn’t, to say, “Well, you’ve invested in tools for your developers. You wouldn’t dream of writing software without source control, I assume,” although let’s not go down the rabbit hole of vibe coding.

DA-C: Let’s not go down that one.

SO: And the fact that there are already people with the job title of vibe coding remediation specialist.

DA-C: Nice.

SO: Yeah. So that’s going to be a growth industry.

DA-C: Nice work, if you can get it.

SO: But this blame thing is, we are saying, “This is an asset. You need to invest in it. You need to manage it. You need to depreciate it just like anything else. And if you don’t invest properly, you’re going to have some big problems.” And to your-

DA-C: A lot of that-

SO: Yeah, they don’t want to do it. They’re horrified.

DA-C: Yeah. A lot of that comes from looking at docs departments as cost centers. They’re costing us money. We’re paying all these people to produce this stuff that people don’t read. The users don’t want it. But if you look at it properly, deeply, the documentation department can be classed as a revenue generator. What are your sales teams pointing prospects at? They’re pointing at docs.
Where are they getting the information about how things work? They’re pointing at the docs. What are you using, especially if you’re someone looking through, trying to find a solution? I know I do this. I go and look at the user manuals. The first thing I want to see is that they’re properly written; if I see something that does not properly describe the gadget, or whatever I’m trying to buy, then I’m like, “Well, if you’ve taken shortcuts there, you’ve probably done the same with the actual thing that I’m going to buy.” So I’m going to walk away. Then there’s reducing costs for your help centers. If your customers can quickly find the information that describes the exact problem they’re trying to solve, then you’ve got fewer calls to your help center. And then there’s escalating to the next person. The levels, I don’t know how this goes, level three, two, one, let’s say level three is the lowest level: if that person cannot find information that is true, clear, one source of truth, then it escalates to the level-two person, who you’re paying a lot more. If that person can’t find it, it moves on again. So it’s basically costing you a lot of money not to have good documentation. It’s a revenue generator.

SO: So my experience has been that the blame phase is perhaps the longest of all the phases.

DA-C: Yeah.

SO: And some organizations just get stuck there forever, and they blame different people every year. I’ve also, and I’m sure you’ve seen this as well, we were talking about reorganizing: “Well, okay, the tech writers are all in one group. Let’s break them out and put them all on the product teams.”

DA-C: Yes.

SO: “So you go on product team A and you go on product team B and you go on product team C.” And I talk to people about this and they say, “This is terrible and I don’t want to do it.” I’m like, “It’s fine, just wait two years.”

DA-C: Yeah.

SO: Because it won’t work, and then they’ll put them all back together.
Ultimately, I’m not sure it matters whether they’re together or apart, because we fall into this sort of weird intermediate thing. What matters is that somebody somewhere understands the value, to your point, and is making the investment. I don’t care if you do that in a single group or in a cross-functional matrix, blah, blah, but here we are. All right. So eventually, hopefully, we exit blame.

DA-C: And then we move into acceptance.

SO: Do we?

DA-C: “Okay, we need a better way to manage this.” And this is when people start contacting you, Sarah. It’s like, “I’ve heard there’s a better way to manage this. Somebody’s told me about something called a component content management system, or structured content,” and all of this. So teams start to acknowledge, one, that they’ve got debt and that the debt is growing. Then they start auditing that content and really seeing that, “Oh, well, yes, things are really going bad. We’ve got 15 versions of the same document living in different spaces in different countries. The translations always cost us a bomb.” So leadership then starts budgeting for a transformation. This is where they start doing their research and find structured content and component reuse; they enter the conversation. If they look at their software departments, software departments reuse stuff. You’ve got libraries of objects. Variables are the simplest form of that reuse. And they’ve been using this for years. And so, “Well, why aren’t we doing this? Oh, there’s DITA, there’s metadata. We can govern our content better. We can collaborate using this tool.” So there is a better way to do this. And then we know what to do.

SO: I feel like a lot of times the people that reach out to us are in stage four, they’ve reached acceptance, but their management is still back in anger and bargaining and denial and all the rest of that.

DA-C: They’re still blaming and trying to find a reason.
SO: Yeah, blaming and all of it, just, “How dare you?” All right, so we acknowledge that we have a problem, which I think is actually the first step in a different twelve-step process, but okay.

DA-C: Yeah.

SO: And then what?

DA-C: And then there’s action. Let’s start fixing this before it gets totally out of control, before it gets worse. They start investing in structured content authoring platforms like Tridion Docs (I work for RWS, I’ve got to mention it). They start speaking with experts, doing that research, listening to their documentation team leaders, speaking with content strategists to define, first of all, what the content model is, and then where they can optimize efficiency with a reuse strategy. Reuse without a strategy is just asking for trouble; you’re basically going to end up duplicating content. And then you’ve got to govern how that content is used. What rules have you got in place, and what ways have you got to implement those rules? The old job of having an editor used to work in the good old days, where you’d print something off and somebody would sign it off, and so on and so forth. Now we’re having to deliver content really quickly, and we’re using a lot of technology to do that. And so you need to use technology to govern how that content is being created. Then your content becomes an asset. It’s no longer a liability. This is where that transformation happens, and then you start paying down your content debt. You’re able to scale the content you’re creating a lot faster without raising headcount, without having to hire more people. And if you want to really expand, let’s say, because you’ve got this really great operation now and you’re able to create content in hours and not weeks, then you’re able to expand your market. You’re able to say, “Okay, well, now we’re going to tackle the Brazilian market.
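As a sketch of the component reuse discussed here, DITA’s content reference (conref) mechanism lets many topics pull a component from a single source; the file name, IDs, and wording below are invented for illustration:

```xml
<!-- warnings.dita: the single source for a reused admonition
     (file name, IDs, and text are invented for illustration) -->
<topic id="warnings">
  <title>Shared warnings</title>
  <body>
    <note id="hot-surface" type="warning">Allow the unit to cool
      completely before servicing.</note>
  </body>
</topic>

<!-- Any other topic reuses the note by reference, so a wording
     change in warnings.dita flows to every deliverable: -->
<note conref="warnings.dita#warnings/hot-surface"/>
```

With a governed library like this, the one-sentence spec change described earlier touches a single component instead of every document that repeats it.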
Now, we can move into China, because they’ve got different regulations.” Again, I speak a lot on the regulatory side of things. That’s where I spent most of my time as a technical writer. Having different content for different regulatory regimes and so on is just such a headache when you don’t have something helping you apply structure to that content, applying rules to that content, making sure that your workflows are carried out in the way you set out six months ago, even as people change, so they’re not doing their own thing again. If your organization is stuck at stages one to three, as I just described, it’s basically time to move.

SO: Yeah, I think it’s interesting thinking about this in the larger context of when we talk about writers, the act of writing, right?

DA-C: Yes.

SO: Culturally, that word, or that process, is really loaded with this idea of a single human in an attic somewhere writing the great American or French or British novel, writing a great piece of literature or creating a piece of art on their own, by themselves, in solitude. And of course, we know that technical writing-

DA-C: Starting at A and going all the way to Z.

SO: And we know that technical writing is not that at all. But it does really feel as though, when we describe what it means to be a writer or a content creator in a structured content environment, it is just the 180-degree opposite of what it means to be a writer. It’s not the same thing. You are a creator of these little components. They all get put together. We need consistent voice and tone. You have to kind of subordinate your own voice and your own style to the corporate style and to the regulatory requirements and all the rest of it. And so it’s just this sort of… I think we maybe sometimes underestimate the level of cultural push and pull there is between what it is to be a writer and what it is to be a technical writer.

DA-C: Yes.
SO: Or a technical communicator or content creator, whatever you want to call that role. Okay, so we’ve talked about a lot of this, and we’ve not talked a lot about AI. But a big chunk of this is that when you move into an environment where you are using AI for end users to access your content, so they go through a chatbot to get to the content, or they’re consulting ChatGPT or something like that and asking, “Tell me about X,” all of the things that you’re describing in terms of content debt play into the AI not performing, the content not getting in there, not being delivered. So what does it look like? What are some of the specifics of good source content, of paying down the debt and moving into this environment where the content is getting better? What does that mean? What do I actually have to do? We’ve talked about tools.

DA-C: Yeah. So first, you’ve got to understand how AI accesses content and how large language models get trained. AI interprets patterns as meaning. If your content deviates from pattern predictability, then you’re going to get what we call hallucinations. If you ask ChatGPT without plugging it in as an enterprise AI, where you’ve really grounded it in your own content, you get all sorts of hallucinations. Say it has taken two PDFs that have similar information but two different conclusions. You’re looking for the conclusion in document A, but ChatGPT has given you the one in B. It has mixed and matched those because it does not know how one bit of information relates to the other. So good source content needs to be accurate: your facts are correct, and they reflect the current state of the product or subject. It needs to be kept up to date. You need to have a single copy of it; that’s what we call a single source of truth. You cannot have two sources of truth. It’s either black or it’s white. There are no gray zones with AI; it will hallucinate. You’ve got to have that consistency in style and tone.
How do you get that? Well, you’ve got the brand and the way we speak. In French, you would ask, “Do you vouvoie or do you tutoie?” Do you use the formal voice, the formal tone, or do you speak like you’re speaking with your friends? How do you enforce some of that? Well, you can use controlled terminology. These are special terms that you’ve defined, a special voice. But the gold part of it is having that structured formatting and presentation. There’s always a logical structure and sequence to the way you present information. Your headings, subheadings, steps, and lists are always displayed in the same way. You’ve defined an information architecture to give it that pattern. And the way AI then understands, or creates relationships between, those patterns is from the metadata that you’re adding onto it. So good source content is accurate, up to date, consistent in style and tone, uses controlled terminology, and has structured formatting. Forget the presentation in the sense of what it looks like, how pretty it is; you put that on at the end. But presentation in the sense that I always start with a short description, then I follow up with the required tools, then I describe any prerequisites: that is the way every one of my writers contributes toward this central repository of knowledge, this single repository of knowledge. And you can do that with a great CCMS by building templates into it, so that the template guides the author. The author no longer has to think, “Oh, how is this going to look? Should I be coloring my tables green, red, or blue? Should they be this wide?” They’re basically filling in a template form. And some of the standards we’ve developed, like DITA, allow you to do this: they give you a particular pattern for creating that information and the ability to put it into a template managed by your CCMS.

SO: Yeah, and that’s the roadmap, right?
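The predictable pattern described here, a short description first, then prerequisites, then steps, is exactly what a DITA task topic enforces; this minimal sketch uses invented content:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE task PUBLIC "-//OASIS//DTD DITA Task//EN" "task.dtd">
<!-- A minimal DITA task topic; the product and steps are invented.
     The DTD forces every author into the same sequence:
     title, short description, prerequisites, then steps. -->
<task id="replace-filter">
  <title>Replacing the filter</title>
  <shortdesc>Replace the filter every six months to keep the pump
    at full capacity.</shortdesc>
  <taskbody>
    <prereq>Power off the unit and unplug it.</prereq>
    <steps>
      <step><cmd>Open the access panel.</cmd></step>
      <step><cmd>Slide the old filter out.</cmd></step>
      <step><cmd>Insert the new filter until it clicks.</cmd></step>
    </steps>
  </taskbody>
</task>
```

Because every task topic follows this same element sequence, both human readers and pattern-matching AI see a consistent, predictable shape, while presentation (fonts, table colors, layout) is applied separately at publish time.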
We talk about how, as a human, if I’m looking at content and I notice that it’s formatted differently, like, “Oh, they bolded this word here but not there,” I start thinking, “Well, was that meaningful?”

DA-C: Yeah.

SO: And at some point, I decide, “No, it was just sloppy; somebody screwed up and didn’t bold the thing.” But AI will infer meaning from pattern deviations.

DA-C: Yeah.

SO: And so the more consistent the information is on all the levels that you’ve described, the more likely it is that the AI will process it correctly and give you the right outcome. Okay, so that seems like maybe the place we need to wrap this up and say, folks, you have content debt. Dipo is giving you a handy roadmap for how to understand your content debt, understand the process of coming to terms with it, and then figure out how and where to move forward. So any closing thoughts before we say good luck to everybody?

DA-C: Most enterprises today have already jumped on the AI bandwagon. They’re already trying to put it in. But at the same time, start taking a look at your content to ensure that it is structured and has semantic meaning to it. Because the day you start training your large language model on that content, if you’ve not built those relationships into it, it’s like teaching a kid bad habits; they’re going to just continue doing it. Basically, train your AI right the first time by having content that is structured and semantic, and you’ll find your AI outcomes are a lot more successful.

SO: So I’m hearing that AI is basically a toddler? Okay. Well, I think we’ll leave it there. Dipo, thanks, it’s great to see you as always.

DA-C: Thanks for having me.

SO: Everybody, thank you for joining us, and we’ll see you on the next one.

Conclusion with ambient background music

CC: Thank you for listening to Content Operations by Scriptorium.
For more information, visit Scriptorium.com or check the show notes for relevant links. Want more content ops insights? Download our book, Content Transformation. The post The five stages of content debt appeared first on Scriptorium.
