AI Addiction, Innovation Metrics, and Peer Influence with Brian Ardinger and Robyn Bolton
On this week's episode of Inside Outside Innovation, we talk about the addictive nature of AI, the levels of innovation metrics, and how peer influence can make or break your AI rollout. Let's get started.

Inside Outside Innovation is the podcast to help innovation leaders navigate what's next. Each week, we'll give you a front row seat into what it takes to grow and thrive in a world of hyper-uncertainty and accelerating change. Join me, Brian Ardinger, and Mile Zero's Robyn Bolton, as we discuss the latest tools, tactics, and trends for creating innovations with impact. Let's get started.

Podcast Transcript with Brian Ardinger and Robyn Bolton

AI Addiction, Innovation Metrics, and Peer Influence in AI Rollouts

[00:00:30] Brian Ardinger: Welcome to another episode of Inside Outside Innovation. I'm your host, Brian Ardinger, and with me, I have Robyn Bolton from Mile Zero. Robyn, welcome.

[00:00:47] Robyn Bolton: Thank you very much. Great to be here as always.

[00:00:50] Brian Ardinger: It's another amazing week in innovation, and we thought we'd get right to it. The first article we want to talk about is called Acceleration Flow by Raymond Mark from the publication Mold and Yeast.

Why AI Feels Addictive to Builders and Coders

Fascinating article. The basic premise of the article is that Raymond is an addict. Not a metaphorical addict, but he is now addicted to building with AI, to the point that he's spending tokens like it's the end of the world. And he talks about this environment where AI has now created almost a gambling type of feeling, where you vibe code your way to something.
You put your tokens in and you pull a slot machine, and out comes some type of output that's just good enough to get you to put the next tokens in to try the next prompt and the next prompt. So, it was a fascinating look behind the scenes at something I think more and more people are just now beginning to discover, this particular anomaly or environment that people are beginning to find when they start doing this AI stuff.

[00:01:53] Robyn Bolton: Yeah, this was really interesting. I mean, I use AI every day and I felt this deeply. It actually reminded me of a conversation that I had with someone probably a year and a half, maybe two years ago, who astutely predicted, she's like, I think AI is going to become the next cigarettes in terms of being addictive. Right now it's cheap and plentiful, and it's getting us hooked on it.

The Dopamine Loop of Generative AI and Vibe Coding

And then they can raise prices because we're addicted and we'll keep going with it. And this article lays out a really good argument for that, not using cigarettes but using gaming and gambling as a metaphor. And everything that it outlines is, like you said, almost right. It's enough to get you to put the next token in. The feeling that you're upleveling and you're gaining capabilities when you're really kind of not. In fact, by kind of outsourcing tasks and things, you're actually losing capabilities, but you have the illusion that you're gaining them. It was just really fascinating, all of these almost mind tricks that happen when we use AI.

[00:03:07] Brian Ardinger: I read the article earlier last week, and then three people came up to me this week unprompted and said, I'm addicted to this stuff. They had just started to, you know, use Claude Code or started to get a little bit deeper than just prompting a chat bot, and the word they used was addicted.
One, again, it's so easy to get something back out, that dopamine hit of, oh, I tried this and actually it's pretty good, and let me try if I can go again.

AI FOMO, Always-On Agents, and the Fear of Falling Behind

And then the second addiction is, I'm addicted to the fact that I'm falling behind. I had a coder come up to me and say, I am very worried, I don't want to take a break, because during my break, I want my agent to be doing something for me. And so, this constant pressure to interact with the device to continue to move forward is interesting. I think the flip side to that is what are we building and what are we doing? Are we just putting tokens into the machine, or are we actually creating value in the process? And I think that's the next phase that people will hopefully be going through.

[00:04:04] Robyn Bolton: This line struck me: making yourself obsolete feels like freedom, dressed up as ambition. And I just thought, ooh, that hits a little close to home.

[00:04:14] Brian Ardinger: Well, we will see what happens. I am addicted as well. Probably not to the extent that some of these folks I'm talking to, but who knows, you know, there's always next week.

[00:04:21] Robyn Bolton: Exactly.

How Peer Influence Drives AI Adoption at Work

[00:04:23] Brian Ardinger: The second article I want to talk about today is from HBR. It's called Peer Influence Can Make or Break Your AI Rollout. The fascinating thing about this is HBR took a look at how companies were deploying AI, and which ones were being successful at deploying it and which ones were not.
And one of the primary findings was that in the companies where people were actually peer reviewing or showing what they were building to their colleagues, using peer influence as a way to encourage adoption was actually more effective than either a mandate or just giving people the opportunity to interact with these particular tools.

[00:04:59] Robyn Bolton: The degree to which the peer-to-peer learning influences adoption, that was surprising. The fact that it has an influence, I didn't find surprising. What I did find really shocking was that leadership communication and leader encouragement had little to no impact on AI usage.

Why Leadership Messaging Alone Does Not Increase AI Usage

And you know, the article does go on to say that, hey, even though there's no measurable impact, leaders do still play a really important part in encouraging the experimentation, encouraging the sharing of learnings. I was surprised by that. And also, as I said, not surprised by the peer-to-peer, because when somebody you work closely with and trust is like, hey, I'm doing this, it's just kind of this reassurance of, oh, if I log into the GPT, I'm not going to get fired because I've been replaced. Here's someone who's still employed using AI. I can do it.

[00:05:54] Brian Ardinger: I'm hearing more and more people tell me, oh, I used this tool to do this. Versus in the past, you could see their work and say, you obviously used something for this. But they're more open about sharing those things, and I think having that environment in your company, being open to sharing both the good and the bad, what's working, what's not working, here's what I'm using it for, opens up a lot of doors, because I think a lot of people just don't necessarily know how to use this or what particular use cases could be valuable. It's all about, at this stage, kind of the experimentation and understanding and seei...
Agentic AI, AI Agents, and Bigger than SaaS with Brian Ardinger & Robyn Bolton
On this week's episode of Inside Outside Innovation, we talk about the rise of agentic AI, why you need to build for AI agents first, and how AI is bigger than SaaS ever was. Let's get started.

Inside Outside Innovation is the podcast to help innovation leaders navigate what's next. Each week we'll give you a front row seat into what it takes to grow and thrive in a world of hyper-uncertainty and accelerating change. Join me, Brian Ardinger, and Mile Zero's Robyn Bolton, as we discuss the latest tools, tactics, and trends for creating innovations with impact. Let's get started.

Podcast Transcript with Brian Ardinger and Robyn Bolton

AI Agents, Personal Concierge Tools, and the Future of Innovation

[00:00:30] Brian Ardinger: Welcome to another episode of Inside Outside Innovation. I'm your host, Brian Ardinger. And with me I have Robyn Bolton from Mile Zero. Robyn, welcome back.

[00:00:50] Robyn Bolton: Thank you. Glad to be back.

[00:00:51] Brian Ardinger: We are always on the hunt for new and innovative things. Every week we try to bring you some of the most interesting articles or things that we've come across in our world of innovation. We'll just jump right in.

The first article we want to talk about today is called With the Rise of Agents, We Are Entering the World of Agentic AI, and this is an HBR interview with Don Tapscott. This is actually a podcast that HBR put out, and Don talks about this movement of not only AI agents, but the fact that you're going to have AI agents that are identified specifically to you and your tastes, almost like your virtual concierge in a variety of different topics. And these agents will know everything about you, as well as everything about what they need to do as an agent. And this world is going to fundamentally change the way we do business, et cetera.

[00:01:39] Robyn Bolton: Yeah, this is an interesting one, and it feels both very kind of sci-fi and very likely to happen tomorrow.
I'm skeptical on the timeline. Like, I totally believe this will happen. I don't really think it's going to happen in the next few years, especially because, you know, yesterday I asked Claude to proofread something for me. I gave it a document, and it went off and proofread a totally different document from a different chat. So, if AI can't handle a straightforward request like that right now, I don't think it's anytime soon going to be understanding my judgment and my values and taking actions on my behalf. You know, could it happen one day? Sure. Why not?

[00:02:19] Brian Ardinger: It will be interesting to see. I mean, we're seeing a lot of experiments out there with Clawbot and that, and people are jumping in headfirst. I saw a Twitter post, there was an event in New York, I think yesterday, where 2000 people who were doing things with their Clawbot got together and talked about what they were doing with their Clawbots.

Building for AI Agents First. Product Design, Trust, and What Comes Next

It was interesting from the standpoint of the amount of energy and excitement around it. But then on the flip side, a lot of the conversation was that there still wasn't any real meat around it. There was testing and experimenting, and those tests and those experiments will, you know, hopefully result in something, but I think we're not quite there yet. But it is interesting to peer into the future. What's so exciting about the Clawbot scenarios and that is the fact that it really did give a vision of, oh, what happens if this could actually do this? And it opened up whole new conversation pieces, where it moved things beyond, oh, this is just a Google Chat bot kind of experience. And I think that's where that genie's not going back in the bottle.

[00:03:14] Robyn Bolton: No, it's very exciting. It's still a ways off, probably years, not decades, but.
[00:03:20] Brian Ardinger: Things can only change. It's also hard to put everything that you own, you know, all your personality and all your quirks and everything, into a bot so that it can do things for you when you don't trust the bot. So.

[00:03:31] Robyn Bolton: Yeah, I just imagine trying to do that with a bot and being like, no, thank you.

[00:03:34] Brian Ardinger: There are things I don't like.

[00:03:35] Robyn Bolton: You can keep your quirks. Yeah.

[00:03:38] Brian Ardinger: Alright, on to the second article: Why You Need to Build Your Product for AI Agents First. So, tangentially similar to what we were talking about previously. Peter Yang wrote an article talking about how the structure of what you are building is changing. If indeed agents are going to do the bidding for you in a variety of these things, you no longer necessarily need to build for people going to your website or using a user interface, because you are building for agents who are talking to other agents who are doing things. So if you're a builder today, some of the things you should be looking at are: how can you actually build for agent flow, how can you build so that the agents can work faster, and what might you strip away and what might you change, based on that particular paradigm shift?

[00:04:23] Robyn Bolton: Yeah. I mean, I am always one for, you know, simplicity, like getting to the root, getting really clear, really simple. And there's a certain amount of complexity that's required in things, and it does get stripped out by AI as it, you know, goes through and kind of does the regression analysis or the prediction analysis and all of that.

Designing Products for AI Workflows, APIs, MCPs, and Human Experience

So I was really conflicted reading this one, because I'm like, well, I don't know how to design for AI. I don't know what that means, especially because things are moving so fast. And it's another instance where Clawbot shows up as a big character in the story.
But I was also like, what if I don't want to? Like, what if there's more nuance? What if there's more richness? What if there are things that will get lost if I design for AI? And that of course could sound like the death throes of the human. So yeah, I was really conflicted. But it makes a really interesting point, and an important point that we've got to figure out.

[00:05:24] Brian Ardinger: If you strip away the beautiful UX that people have designed to make you feel the emotion around the product, and the agent never interacts with that, how does that change the product itself? Yeah. Or the experience that you're creating. The article also goes on to give you some of those skill sets, like how to think about this if you are building. He talks about the APIs as the tools, the skills as the recipes, and then the MCPs as the kitchen that bundles it all together, and how these particular components of AI, the way these AI tools are coming together, let you create environments such that you're building for agents, to make it easier for those folks to do y...
Learning vs Execution with Brian Ardinger and Robyn Bolton
On this week's episode of Inside Outside Innovation, we talk about why 70% of startup acquisitions fail, why UX didn't die, and how everyone is still building their startups backwards. Let's get started.

Inside Outside Innovation is the podcast to help innovation leaders navigate what's next. Each week we'll give you a front row seat into what it takes to grow and thrive in a world of hyper-uncertainty and accelerating change. Join me, Brian Ardinger, and Mile Zero's Robyn Bolton, as we discuss the latest tools, tactics, and trends for creating innovations with impact. Let's get started.

Podcast Transcript with Brian Ardinger and Robyn Bolton

Why Startup Acquisitions Fail: Learning Problems vs. Execution Problems

[00:00:30] Brian Ardinger: Welcome to another episode of Inside Outside Innovation. I'm your host, Brian Ardinger, and I have with me Robyn Bolton, as always, from Mile Zero. Welcome, Robyn.

[00:00:48] Robyn Bolton: Thank you. Great as always to be here.

[00:00:51] Brian Ardinger: We are excited to have you. Excited to get into the news of the day and some of the amazing things that we're hearing in the world of innovation. We are going to start with the first article. The first article comes from our friend Elliott Parker. Elliott is with Allied Partners. He's actually coming out to the summit, so not only are we going to talk about his article today, but you can come and see him live and in person April 13th. Let's now talk about his article, Why 70% of Startup Acquisitions Fail: The Learning Versus Execution Problem.

And Elliott talks about, first of all, he cites some statistics that large companies acquire startups at a 70 to 90% failure rate. Yet the same research shows that for bolt-on acquisitions, when you buy a company in the same industry that's doing similar work, the success rate climbs to 80 to 85%. And he poses the question: what's the key difference? The key difference really is the fact that you're working in two different worlds.
You're working either in a learning-problem world, such as a startup trying to understand who their customers are and what they're building, et cetera, or an execution-problem world, where you've figured a lot of that out and your job is then to efficiently scale and predict and move that business model forward. And I think the base premise is that large organizations oftentimes don't know exactly which startup they're buying. Are they buying a startup that has figured it out, or have they bought a startup that's still learning? And then that integration is where it all falls down.

[00:02:12] Robyn Bolton: Yeah. I will continue the shameless plug. I am a huge Elliott fan. We've worked together, we've co-authored articles way back when, and he is just a really smart, really great guy. So highly recommend everybody come and see him. Mob him at the IO 2026 conference. And again, he hits the nail on the head with the learning problem and the execution problem. They're different worlds. Innovation and operations are different. Pilots and scaling something are opposite problems. And the fact is, big companies are designed for execution. I mean, I still remember my days at P&G when we were test marketing Swiffer WetJet, and our test markets were Canada and Belgium. Those are countries, not test markets. But that's just how big companies are wired, and he makes a great argument, backed up by facts, around what the problem is and, honestly, what companies need to do about it: recognize that these are opposite things and structure and approach the problems accordingly.

AI, UX Design, and Why User Experience Is No Longer Just About Screens

[00:03:25] Brian Ardinger: It'll be interesting to see how this plays out in the day when you can spin up a startup in five minutes, and all the new things that are happening out there.
How many large corporations might fall into that trap of looking for the shiny new thing and not realizing that it's not fully baked, and then it won't necessarily fit into the existing structures that they have, and they kill it from that perspective? Or will we get to a place where you can build a startup and get to execution much faster, such that those acquisitions can dovetail right into an existing business? So it'll be interesting to see how that changes over the time period as well.

[00:03:59] Robyn Bolton: Yeah, and you know, the failure mode I see most often in organizations is they think, oh, you know, there's market traction, there's revenue. The startup may even be profitable, and they think, great, it's no longer a learning problem, it's an execution problem. So realizing that just because there's revenue, just because maybe it's even cashflow positive, doesn't mean it's ready for scale.

[00:04:20] Brian Ardinger: Absolutely. Alright. The second article is UX Didn't Die, It Just Stopped Being About Screens. This is from Nurkhon, if I'm reading that right. N-U-R-K-H-O-N. He has a Medium article talking about this particular thing, and he's basically saying that the skills that matter are different than what they've been in the past. So he goes through an example where he asked Cursor to, you know, redesign a checkout flow for a thing he was building. It generated the perfect UI in 30 seconds, all the correct ratios, proper button states, et cetera. Then he showed it to three customers, and they all abandoned the project at the exact same spot. The UI was perfect, but the problem was something else. And this gap between what looks right, the functionality, and what actually works is where UX value really lives in 2026. And that's an interesting thing that we're seeing more and more. What's your take on that?
[00:05:13] Robyn Bolton: I think it's another great example of what noted innovation philosopher Mike Tyson said: everyone has a plan until they get punched in the face. And this is another example of AI designing something that is perfect, but then, when exposed to reality, that reality being the unusual, illogical, wonderful nature of human beings, it just gets punched in the face. It doesn't work. So I was actually really glad to see the seven skills that he listed as mattering: systems thinking, feedback translation, there's judgment again, I feel like that's becoming a theme, pattern recognition, trust building. All of these skills are fundamentally human skills, and I think it's just another great illustration of how AI can't replace us yet.

Customer Discovery Still Matters: Why Startups Keep Building Backwards

[00:06:04] Brian Ardinger: It'll also be interesting to see from the perspective of, again, these systems are basically coming to commodity decisions. They look at everything out there. Yep. They find the best route to it and say, here's what the average says about it. Most people shouldn't be building average. You have unique customers with unique problems with unique environments around that. And so at the end of the day, that's still your job as a UX/UI designer or a business o...
AI Trust, Inclusive Design, and Shipping Too Fast with Brian Ardinger and Robyn Bolton
On this week's episode of Inside Outside Innovation, we talk about some recent Stanford research, how designing for disability sparks innovation, and the hidden dangers of shipping too fast. Let's get started.

Inside Outside Innovation is the podcast to help innovation leaders navigate what's next. Each week we'll give you a front row seat into what it takes to grow and thrive in a world of hyper-uncertainty and accelerating change. Join me, Brian Ardinger, and Mile Zero's Robyn Bolton, as we discuss the latest tools, tactics, and trends for creating innovations with impact. Let's get started.

Podcast Transcript with Brian Ardinger and Robyn Bolton

AI Reasoning Risks, Inclusive Design Innovation, and the Hidden Cost of Shipping Fast

[00:00:30] Brian Ardinger: Welcome to another episode of Inside Outside Innovation. I'm your host, Brian Ardinger. And with me I have Robyn Bolton from Mile Zero. Robyn, welcome again.

[00:00:48] Robyn Bolton: Thank you again.

[00:00:50] Brian Ardinger: We have another amazing week ahead of us here. We wanted to share all the exciting things in the world of innovation that we're running across. First, I guess we'll get right into it. We've got a number of articles that have touched our lives here. The first one I want to talk about: Stanford just published an uncomfortable paper looking at LLM reasoning, and some of the findings were kind of incredible. Basically, the gist of it is that an LLM sometimes gets to a point where it creates an environment where it's leading you to believe that it is confident in its answer, but it is not. For lack of a better term, that is what it's all about.

[00:01:27] Robyn Bolton: I mean, it's so perfectly worded: this is worse than being wrong, because it trains users to trust explanations that don't correspond to the actual decision process.
And I will say I've seen that time and time again using different LLMs, and I have totally fallen victim to it. I'll kind of quickly scan the response, really read the end when it kind of gives me the key takeaway, and I'm like, yeah, that sounds right, and then go on. And then it's only later I'm like, ugh, I fell victim to AI workslop, because the reasoning doesn't hold. So, it's an easy trap to fall into, and a good one to just constantly be on guard for.

[00:02:09] Brian Ardinger: Yeah. The fact that the models produce unfaithful reasoning gives you this sense that it's a correct answer. It provides explanations, but when you ask it to explain, the actual logic that it explains back to you is wrong or incomplete or fabricated. So, it provides that sense that you're on the right track. But the LLM itself can't reason. And that inability to reason will take you down particular paths, and to the extent that you could even change a single word or a phrase within your prompt, that can take it down a particular path that, again, logically doesn't make sense. And so, it's not consistent, even down to the word of the prompt that you put into it. So, all that to say, it's getting better, but it's still not a thinking device and it's not a reasoning device. Be careful when you're using these particular methodologies. Don't be a hundred percent confident in everything that comes out of it.

[00:03:02] Robyn Bolton: Yes, trust but verify.

[00:03:04] Brian Ardinger: There I go.

[00:03:04] Robyn Bolton: Or maybe don't trust and still verify.

Designing for Disability as a Catalyst for Breakthrough Innovation

[00:03:08] Brian Ardinger: Alright, the second article, from HBR, is How Designing with Disability in Mind Sparks Innovation. So, this was a great article.
Oftentimes, I think when we're building new, innovative things, we think about the amazing things that we're going to create. And this article talks about how oftentimes you can think about it differently and actually create new things by designing for the marginal case, for example, folks with disabilities. You can design for use cases that don't normally happen, but by focusing on them, you can actually create new innovations and new ways of thinking about how to develop a new product.

[00:03:45] Robyn Bolton: This is such a great reminder and great call to action for innovators, and it reminds me, I think, as I mentioned to you, of one of my favorite stories, which is about Oxo, the kitchen tools, the can openers, the spatulas, all of that, and how they were originally created for people with rheumatoid arthritis. And you know, now Oxo is the only brand that I'll buy for kitchen tools, because they're just so comfortable to use. And so it's just, again, a great illustration of how, by designing for a really, really specific, even niche customer, and designing really well and thoughtfully for them, the market will expand. Because, I mean, honestly, even look at sidewalk cutouts, you know, the kind of little ramps. We all use them, but they were made because of the ADA, the Americans with Disabilities Act. So, find a really awesome niche and delight those folks, and you'll be surprised kind of what comes along.

[00:04:44] Brian Ardinger: Yeah. The article talks about an example, I think Butte is the company, that created the, you know, the walk-in tub. And people are like, this is kind of a crazy idea, why can't people just get in a tub? But it's the idea of opening a door and rolling in, or getting in there and then shutting it, and then being able to actually take a tub experience.
And again, something that was developed with disabilities in mind opens up a variety of different use case scenarios and users that they didn't originally plan for in the particular process. The last part about it is, as a product developer, software developer, et cetera, you can use this methodology to think about new ways your product or services could be used, by narrowing down and saying, okay, what if we had to design this particular product or service with this in mind? How would that change the dynamics? How might that open up new opportunities for us and new markets that we never thought of before?

[00:05:34] Robyn Bolton: Constraints drive creativity, always.

The Hidden Danger of Shipping Fast. Speed, Bottlenecks, and Customer Attention

[00:05:37] Brian Ardinger: And the last article is from Product for Engineers. It's called The Hidden Danger of Shipping Fast. And the basic premise asks, is it possible to ship too much or too fast? And the answer is yes, probably. And it goes on to talk a little bit about the fact that, again, we are in an environment where speed to market and speed of creating things is speeding up, such that you could constantly be creating new features, new functionality, new things to test in front of your marketplace. And you have to be careful sometimes, because you could almost outpace the usage, or the ability of the consumer themselves to understand all the change and/or interact with that c...
AI Judgment, Work Trends, and the Angel Investor Gap with Brian Ardinger and Robyn Bolton
On this week's episode of Inside Outside Innovation, we talk about Anthropic's bet on philosophy, trends shaping work in 2026, and why we need more angel investors. Let's get started.

Inside Outside Innovation is the podcast to help innovation leaders navigate what's next. Each week we'll give you a front row seat into what it takes to grow and thrive in a world of hyper-uncertainty and accelerating change. Join me, Brian Ardinger, and Mile Zero's Robyn Bolton, as we discuss the latest tools, tactics, and trends for creating innovations with impact. Let's get started.

Podcast Transcript with Brian Ardinger and Robyn Bolton

Thinkers50 Recognition and the Role of Modern Management Thinkers in Innovation

[00:00:30] Brian Ardinger: Welcome to another episode of Inside Outside Innovation. I'm your host, Brian Ardinger. And with me, I have Robyn Bolton. Robyn, welcome to the show.

[00:00:43] Robyn Bolton: Thank you. Great to be here again.

[00:00:45] Brian Ardinger: We are excited as always to talk about innovation and all the things that we've learned. Anything going on in your life that you want to share?

[00:00:52] Robyn Bolton: Got some exciting news a couple weeks ago, actually. Don't know if folks are familiar with Thinkers50. That is kind of like the list of the top management thinkers, and they have a Radar list of up-and-coming thinkers, and I found out that I got named to that list.

[00:01:08] Brian Ardinger: Yes, that's awesome.

[00:01:10] Robyn Bolton: 30 up-and-coming thinkers, and very excited. I'm a thinker now.

[00:01:15] Brian Ardinger: It's always good to be recognized, and even more to be recognized as a thinker, I think, especially in today's world.

[00:01:21] Robyn Bolton: Yes, yes. Thinking is good. Doing is good too. And you know, it's an organization, they always say thinking plus doing equals impact. And I'm like, yep.

[00:01:30] Brian Ardinger: There we go.
[00:01:30] Robyn Bolton: Gotta be doing too.

[00:01:32] Brian Ardinger: Well, congratulations on that.

[00:01:34] Robyn Bolton: Thank you. What about you? What's new in your world?

[00:01:36] Brian Ardinger: Right now, we are buried in seven inches of snow, so that was fun. The week before, we were in Phoenix, so I think I picked the wrong week to go on vacation. Other than that, unburying from email and unburying from snow this week. So, it's all good.

[00:01:51] Robyn Bolton: Well, at least you had a week of warm to remember what that's like.

[00:01:53] Brian Ardinger: Exactly. Remember what it was like. Excellent. Well, let's get started. We've got a couple of different articles over the last few weeks. The first one we want to talk about is a YouTube video from AI News and Strategy Daily by Nate B. Jones. He had a video a couple weeks ago talking about Anthropic's CEO's bet on the company and his philosophy, and how the data says that he's right, that he's thinking about things in a little bit different way. It really talks about the constitution that Anthropic has put together. They put together an 80-page Claude constitution outlining the principles of how they've developed Claude, thinking about it, quite frankly, in a different way than a lot of the other AI companies have been thinking about it.

What they've said they've done is really look at how you build these AI models using core principles, rather than having to build out every single rule for what the AI has to do, and more about what's the philosophy of how the AI model should think through the system, so that it gives it more flexibility. And basically, this idea of having a more flexible constitution or way of thinking, versus a strict rules-based approach, may actually be a way that is going to give Claude an edge in the future.

Anthropic's Claude Constitution, AI Judgment, and the Future of Large Language Models

[00:03:05] Robyn Bolton: Yeah.
This was really fascinating because it brought up a theme that we've talked about on several podcasts since the start of the year, which is judgment. And we've always talked about, and we've seen it written about, it's like, hey, judgment is what is going to continue to give humans relevance, because we have judgment and AI is just rules-based. And so, what was fascinating and terrifying was that this constitution is based on Aristotle's philosophy, and it emphasizes that they're trying to build Claude to exercise judgment versus following rules. And I was like, uh oh, if that was the human moat to kind of give us relevance, and we're building Claude, which I use daily, to exercise judgment, this is going to result in some very interesting things. And so, kind of early on, obviously, Claude has not progressed to having full wisdom and judgment. But now with this constitution, one of the things that Nate mentioned is that when you're prompting Claude, the why matters more than the what. Because of this constitution and how they're programming Claude, when you ask for something, you're going to get a better response if you explain why you're asking for it, versus all of the others, you know, ChatGPT, Grok, Gemini, et cetera, where you can just put a request in and it will answer. So, I thought that was interesting.

[00:04:35] Brian Ardinger: The idea of having additional context and giving the LLM the ability to take that context into decision making is, I think, where it's different. Versus saying, you know, you have to go down a particular path, but based on the context of that path, you may have different outcomes. And that's just like in real human life: when you're presented with problems or forks in the road, you oftentimes take into consideration all the context around it rather than a specific rule, like, in this particular case, I have to follow this particular rule. And sometimes that's not the case.
Sometimes you break rules as a human, because you know the context is different. And so, I thought it was interesting that they're trying to build that into the LLM, and we'll see if it actually helps, or how the outputs differ as things change.

[00:05:18] Robyn Bolton: The brave new world.

HBR Trends 2026, AI in the Workplace, and the Future of Work Strategy

[00:05:19] Brian Ardinger: All right, speaking of brave new world, we've got a couple of different articles that we brought up last week talking about trends. And the first one we want to talk about is an HBR article on nine trends shaping work in 2026 and beyond. And this goes into a lot of different topics, and I think the primary topics are really around how people are thinking about AI. How are people unlocking value from AI? How are employees engaging with this? It's not necessarily trends, but this is the reality of the world. So what are your thoughts on this article? ...