From Translation to Transformation: Paula Reichenberg on AI, Legal Quality, and the Future of Good Enough
This week we welcome Paula Reichenberg, founder of Neuron, for a sharp and thoughtful conversation about legal translation, artificial intelligence, and what happens when professional expertise collides with tools that look polished but still miss the mark. Paula shares her path from M&A and capital markets law into business school, legal services, machine learning, and finally legal tech entrepreneurship. What started as frustration with inefficiencies inside law firms grew into a translation business, then evolved again as machine translation improved and forced a harder question about survival, adaptation, and quality.

Paula explains how her early company, Hieronymus, found success by handling sensitive, high-stakes legal translations in Switzerland, especially where precision and confidentiality mattered most. But as machine translation improved, the market for average work started to disappear. Clients began doing more on their own, leaving only the hardest, highest-value assignments for specialists. Rather than ignore the shift, Paula leaned into it. That decision led her back to university, into data science and machine learning, and toward building Neuron, a company focused less on replacing expertise and more on improving the process around imperfect AI output.

A central theme of the discussion is the uncomfortable truth that many users do not care as much about excellence as professionals do. Paula makes the point with refreshing honesty. AI often produces work that is mediocre, but for a large share of users, mediocre is enough. That creates both a market shift and a professional dilemma. In legal translation, as in legal drafting more broadly, the issue is rarely whether AI produces something flawless. The issue is whether the user notices what is wrong, has the time to fix it, and has the systems in place to improve the result efficiently. Paula argues that the real value is not in claiming perfection.
It is in helping experts find the mistakes faster, correct them with less pain, and avoid wasting hours doing work that feels like cleanup on aisle five.

The conversation also digs into trust, user behavior, and the strange authority people give to AI-generated answers. Paula recounts how, in one negotiation, a party trusted ChatGPT’s answer more than a human tax lawyer’s detailed explanation, even when the AI response was wrong. That anecdote opens up a broader discussion about confidence, presentation, and why polished outputs often feel more persuasive than expert judgment. Greg and Marlene connect that idea to legal systems, translation quality, and access to justice, especially where technology might offer better service than overworked and underfunded human systems. The result is not a simple pro-AI or anti-AI position. It is a grounded look at where human excellence still matters, where automation fills gaps, and where the future may split between mass-market convenience and premium, highly tailored expertise.

Looking ahead, Paula sees consolidation coming to legal tech, along with a growing push toward seamless interfaces that bring best-in-class features into one place. For Neuron, that means becoming an embedded layer inside other legal tools rather than forcing lawyers to juggle yet another standalone platform. Her crystal ball view is both stylish and sobering. The legal industry is not simply moving toward automation. It is sorting itself into tiers of service, quality, and expectation. And if Paula is right, the future belongs to those who understand where “good enough” ends and where true expertise still earns its premium.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube | Substack

[Special Thanks to Legal Technology Hub for sponsoring this episode.]

Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript
Anthropic’s Matt Samuels and Den Delimarsky - Claude & MCP: Building the USB-C for the Legal Tech Stack
This week, we sit down with two guests from Anthropic, Matt Samuels, Senior Product Counsel, and Den Delimarsky, a core maintainer of the Model Context Protocol, or MCP. Together, they unpack why MCP is drawing so much attention across the legal industry and why some are calling it the USB-C for AI. For law firms long burdened by disconnected systems, scattered data, and the infamous integration tax, MCP offers a shared framework for connecting models to the places where real work and real knowledge live, from iManage and Slack to email, data lakes, and internal tools.

Den explains that the promise of MCP is not tied to one model or one vendor. Instead, it creates a standardized way for AI tools to securely interact with many different systems without forcing organizations to build one-off integrations every time they want to connect a new source. The conversation gets especially relevant for legal listeners when Greg and Marlene press on issues like permissions, ethical walls, least-privilege access, and auditability. The answer from Anthropic is reassuring. MCP is built to work with familiar enterprise security concepts such as OAuth and role-based access, meaning firms do not have to throw out their security model in order to explore new AI workflows.

Matt brings the legal and operational lens, translating MCP into practical use cases for lawyers, legal ops teams, and security leaders. He describes how AI becomes far more useful once it has access to the systems lawyers already rely on every day, while still operating within carefully defined administrative controls. The discussion highlights a key shift in how firms should think about AI. This is no longer about asking a chatbot a clever question and getting a polished paragraph back.
With MCP, firms are moving toward systems where AI can retrieve, correlate, summarize, draft, and support actions across multiple platforms, all while staying inside the guardrails set by the organization.

The episode also explores how MCP fits into the rise of agentic workflows, apps, plugins, and skills. Rather than treating AI as a static assistant, Anthropic describes a future where these tools become active participants in legal work, pulling together information from multiple sources, helping assemble case timelines, drafting notes into a shared document, and supporting lawyers in a far more integrated workspace. The conversation around skills is especially useful for firms thinking about standard operating procedures, preferred drafting styles, escalation rules, and repeatable work product. Skills and MCP do different jobs, but together they start to look like the operating system for structured legal workflows.

By the end of the conversation, one message comes through clearly. The legal profession is still early in this shift, but the pace is picking up fast. Both Matt and Den encourage listeners to stop treating these tools like abstract future concepts and start experimenting with them now. At the same time, they offer an important note of caution. As much as these systems promise speed and efficiency, lawyers still need to protect the craft of lawyering, their judgment, and the human choices that matter most. For firms trying to make sense of where AI is headed next, this episode offers a grounded and practical look at the infrastructure layer that could shape what comes next.
Niki Black on AI Adoption, Billing Pressure, and the Governance Gap in Legal
This week we welcome back Niki Black to unpack the findings from the newly released 2026 Legal Industry Report from 8am. The conversation centers on a legal profession moving into a new phase of AI adoption, where individual lawyers are embracing general purpose AI tools at a striking pace, while many firms still lack even basic policies or training. Niki explains that this disconnect is especially visible among solo, small, and mid-sized firms, where limited resources often slow formal governance even as day-to-day use rises fast.

A major theme of the discussion is the widening gap between personal experimentation and institutional readiness. Niki notes that lawyers are not waiting for permission, and many are already relying on AI to support research, drafting, and routine work. At the same time, firms are struggling to provide guidance, training, and guardrails. The episode highlights the growing risk of shadow AI in legal practice, especially when lawyers and staff turn to unsanctioned tools to keep pace with client demands. For smaller firms, the answer is not elaborate bureaucracy, but practical direction, clear expectations, and a recognition that even a modest policy is better than none.

The conversation also turns to client expectations and the economic pressure AI is placing on the traditional law firm model. Greg and Marlene press Niki on whether firms are truly ready to move away from the billable hour as AI compresses the time needed to complete legal work. Niki argues that large firms face deep structural obstacles because compensation systems, staffing models, and internal economics remain tied to hourly billing. Still, she sees pressure building from in-house counsel, boutique competitors, and smaller firms that use technology to deliver comparable work at lower cost. The result is a market that may resist change, but not escape it.

Another standout part of the episode explores how AI is reshaping access to justice.
Niki points to the promise of generative AI as a force multiplier for legal aid lawyers and public defenders, especially when paired with trusted tools and better funding. She rejects the idea that technology alone will solve the justice gap, but makes a strong case that AI, combined with stronger institutional support, helps lawyers serve more people with better results. At the same time, the hosts and Niki acknowledge the risks of a two-tiered system, where wealthier clients benefit from high quality tools while vulnerable users face lower quality, error-prone outputs.

By the end of the episode, the conversation expands from AI tools to a broader structural shift across firms, clients, and law schools. Niki sees the next three to five years as a period of deep change, where pricing, training, competition, and professional expectations all evolve at once. She also shares her own methods for keeping up, including RSS feeds, trusted blogs, and LinkedIn, with a few playful complaints about Substack making life more complicated. The episode leaves listeners with a clear message: the biggest issue is no longer whether AI will affect legal practice. It already is. The real question is whether the profession can adapt fast enough to manage the consequences wisely.
Anastasia Boyko on Advisor Mode, Training Lawyers for the Post-Pyramid Firm
Anastasia Boyko joins us this week for a wide-angle conversation about AI adoption, leadership, and the uncomfortable truth behind “we are watching what peer firms do.” A Yale-trained tax lawyer with experience spanning Axiom, legal education, and innovation leadership, Boyko argues that precedent-driven instincts are turning into a liability when the underlying rules of the market are shifting in real time.

The episode opens with lessons from the Women + AI 2.0 Summit at Vanderbilt and the “AI competence penalty” narrative. Boyko’s central principle for law firm leaders is simple: stop copying the competition and start operating with intention. Strategic planning matters more than tool shopping, especially when uncertainty makes leaders freeze, over-index on fear, or chase noise instead of outcomes.

From there, the conversation sharpens into client reality. Boyko shares what she is hearing from in-house leaders, and it is not comforting for firms. Legal departments are working to reduce dependence on outside counsel, business partners inside companies often accept “good enough,” and the models keep improving. The risk is not losing to a peer firm; it is losing the client relationship because the work stops feeling necessary.

A major theme is talent and the apprenticeship gap. Boyko argues firms underinvest in people, even as they spend aggressively on software stacks. AI can help junior lawyers with coaching and confidence, but it does not replace mentorship, judgment-building, or context. The skills that matter now include client advisory, operational thinking, critical judgment, and the ability to solve problems across a complex system, not only perform discrete tasks in a vacuum.

The episode closes on legal education and the future value of the JD. Boyko urges students to be selfish about learning AI, especially when faculty guidance comes from avoidance or philosophy rather than experimentation.
Looking ahead, she predicts the JD’s value shifts upward, away from rote production and toward proactive advisory work, relationships, anticipatory counsel, and wisdom-driven judgment. In other words, fewer fire drills, more looking around corners.

PLI - How to Navigate Law School Podcast
Power Paradox webinar
Women + AI Summit, Real Talk: Leadership, Learning, and Not Letting “The Trap” Write Your Story
This week we go “talk show mode” for a special episode where Marlene recaps her trip to the Women + AI 2.0 Summit at Vanderbilt Law, hosted by Cat Moon, and shares why the event felt different from the standard conference grind: more energy, more structure, and yes, a DJ.

The summit’s core focus sits right on a tension point in the wider AI conversation. There’s a persistent narrative that women use AI less than men. Cat Moon’s framing (if it’s true, it’s a problem, and if it’s false, it’s also a problem) sets the tone for a day built around participation and peer connection. The format uses “spark” cards (mini, midi, and maxi prompts) to push attendees into small conversations, deeper reflection, and a final takeaway.

Marlene also highlights sobering research shared during the opening, including an “AI competence penalty” dynamic where identical work is judged differently depending on whether evaluators believe a man or a woman used AI. The discussion lands on why these biases matter inside legal workplaces, and what leaders and peers can do to reduce the social cost of being open about AI usage.

Interspersed throughout are short interviews with attendees and speakers. Nicole Morris (Emory) captures the day’s purpose: expanding AI knowledge, talking risks, and connecting across roles. Sabra Tomb (University of Dayton School of Law) reframes AI as a leadership amplifier, moving from day-to-day management overload toward strategy and vision. Adele Shen (Vanderbilt) offers a funny but sharp taxonomy of AI “experts,” including “technocratic oracles,” “extinction alarmists,” and “touch grass humanists,” which sparks a candid side conversation about self-promotion, authority vibes, and who becomes “the story” in AI discourse.

The episode closes with a look at how education and training can work better. Marlene and Greg lean into peer show-and-tell sessions, leadership modeling, and safe spaces, both governance-safe and learning-safe.
A two-person segment from Suffolk Law (Chanal Neves McClain and Dyane O’Leary) adds a teaching twist, integrating AI tools into skills instruction without isolating “AI week” from real lawyering judgment. The final note comes from Stephanie Everett (Lawyerist) on the power of stories, and the reminder that people do not need to internalize the narrative someone else hands them.