S5E18 Books, Bots, And The Battle For Credit: Julie Trelstad
Spill the tea - we want to hear from you!

A quiet revolution is underway in publishing, and it’s not just about formats. We dive into how AI is reshaping discovery, licensing, and authorship with Julie Trelstad of Amlet, a rights registry built to make books machine-recognisable—so creators can be identified, consent can be captured, and royalties can actually flow. From the surge of AI-generated lookalikes to landmark lawsuits over scraped libraries, we trace how the industry arrived at this moment and what a fair, practical path forward looks like.

Julie breaks down the crucial difference between input and output licensing, why ISBNs and legacy metadata fail modern systems, and how the ISCC standard enables a robust digital fingerprint for each work—down to paragraphs and images. We talk candidly about fear in the creative community, the flood of bland AI prose, and the very human qualities that machines tend to erase: voice, quirk, and risk. Along the way, we share hands-on advice for authors using AI: practise discernment, slow the process, maintain a single brief, and edit with intent. Treat the model like an eager intern—helpful, fast, but never the author.

Looking ahead, we imagine AI-native discovery where books, audiobooks, and summaries live inside conversational interfaces—and attribution becomes a visible badge of trust. With transparent licensing and machine-readable rights, micro-royalties for model usage become possible, piracy loses its edge, and independent creators gain leverage. If ebooks taught us to distribute better, AI is our chance to account better and to value the people behind the pages.

Enjoyed the conversation? Follow and subscribe, share with a friend who writes or reads obsessively, and leave a quick review to help others find the show. Your support helps us bring more smart, human-centred conversations about AI and creativity to your feed.

Support the show
S5E17 AI Meets Dermatology: Jonathan Benassaya
A near miss changed everything. When a dermatologist almost sent Jonathan Benassaya home during COVID without checking under his mask, a hidden melanoma forced a reckoning with how we detect skin cancer: slow access, manual exams, and invasive treatments that arrive too late. That moment now fuels SkinBit, a clinical‑grade, full‑body scanner built to triage quickly, track change over time, and help dermatologists focus on the patients who need them most.

We sit down with Jonathan to unpack the big idea: use dermatoscopic‑resolution imaging to create a digital twin of your skin, then apply AI to score suspicious lesions and prioritise care. Instead of waiting months for rushed visual checks, people could be scanned at clinics or even pharmacies, with flagged results routed to specialists. It is a practical, scalable way to expand dermatology capacity without replacing clinicians. Jonathan also shares what comes next: millimetre‑wave imaging to look beneath the surface, a human‑in‑the‑loop workflow for safety, and a data strategy that follows regulatory guidance by training on consistently acquired images and biopsy‑confirmed outcomes.

Beyond the tech, we dig into trust. Jonathan outlines a plain‑spoken covenant: transparent consent, meaningful opt‑out, and a firm promise not to sell data. When models improve, retrospective reviews can benefit the same patients who contributed, turning participation into shared progress. He also reflects on leadership in a risky space—hardware, health, regulation—and why a mission that saves lives powers teams through the hard parts. Expect candid insights on aligning patients, clinicians, and payers, managing milestones, and keeping ethics at the core.

Make a small move that matters: book a skin check and talk to your loved ones about doing the same. If you want updates on when you can try the system, join the waitlist at https://www.skinbit.co/.

If this conversation resonates, subscribe, share it with a friend, and leave a review to help more people find it.
S5E16 Banking The New Majority: Tamara Laine
Credit scores were built for a world of steady paycheques and long mortgages—so what happens when half the workforce earns through gigs, multiple clients, and flexible hours? We sit down with Tamara Laine, an Emmy-winning journalist turned fintech founder, to explore how AI and verified alternative data can open fair credit to the people traditional underwriting overlooks: drivers, carers, creators, renters, and newly arrived citizens who keep the economy moving.

Tamara walks us through MPWR’s approach to building lender-ready profiles from real-life signals—rent payments, utilities, bank inflows, and multi-source income—sourced directly from institutions rather than hype-heavy tech. We unpack why human-in-the-loop systems matter for trust, how feedback loops keep products grounded in user reality, and why diverse teams aren’t a “nice to have” but essential to preventing bias at scale. Along the way, we challenge outdated assumptions about the gig economy and map a path where financial inclusion is not charity, but overdue modernisation.

You’ll hear a clear case for storytelling and community as the engines of adoption, practical ways to evaluate AI tools for privacy and safety, and a forward look at the next decade of work where soft skills and emotional intelligence rise in value. If you’ve ever paid your rent on time yet struggled to access credit, or if you build products and want to keep people at the centre, this conversation offers both strategy and hope.

If this resonates, follow and share the show with someone who needs a fairer shot at finance. Subscribe, leave a quick review, and tell us: what everyday data should count toward credit that doesn’t today?

Show Link: https://mpwr.money/about/
S5E15 Trust, Code, And The Future Of Truth: Billy Luedtke
What happens when a handful of companies can quietly steer what we see, buy, and believe? We sit down with Billy Luedtke, founder of Intuition and a veteran of EY and ConsenSys, to map a path where trust is built on cryptographic proof, portable reputation, and your own data — not a platform’s black box. Billy argues that while crypto started by decentralising money, the bigger prize is decentralising information itself. If discovery flows through opaque feeds and proprietary AIs, power concentrates. The antidote is simple in concept and ambitious in practice: verifiable attestations about people, agents, and platforms that travel with you anywhere.

We dig into how cryptographic attribution shows who said what, while reputation adds the nuance that pure math cannot. One trusted voice beats ten thousand gamified reviews, so Intuition focuses on a neutral substrate for signed claims and lets multiple scoring models compete on top. That choice avoids a central arbiter of truth and keeps bias in check. From there we explore the rise of agent swarms — many specialised agents coordinating like a brain — and why open, portable reputations will decide how requests are routed and which tools act on your behalf.

Billy also shares how this vision lands on device with Samsung’s Gaia phones: a second brain for your preferences and trusted sources that you control, usable across any model without lock‑in. We talk healthcare records, bank reputations, and why your ChatGPT context should be yours to carry. The through‑line is clear: don’t let anyone control the truth. Treat it as a prism of perspectives anchored by verifiable facts and accountable actors. If that future excites you — or challenges your assumptions — tune in, share with a friend, and leave a review so more curious listeners can find us.
S5E14 Algorithms, Beauty, And The Artist: Gretchen Andrew
What if the internet that trained today’s AI also rewired our sense of beauty, originality, and self? We sit down with artist and former Googler Gretchen Andrew to explore how algorithms shape culture—from who gets seen on social platforms to why so many rooms, faces, and feeds now look the same. Gretchen’s path from information systems to the Whitney offers a rare inside-out view: she uses AI not to generate images, but to expose how machine-enforced standards flatten difference and reward sameness.

Gretchen breaks down the feedback loop that began a decade ago when adtech and SEO drove the kind of content the web produced. Those archives became the fuel for generative models, and now those models steer taste back into the feed. We talk practical signals for spotting AI images, the difference between building your own dataset versus prompting in a black box, and why the best AI artists still make work that is unmistakably theirs. Her Facetune Portraits turn invisible edits into physical marks, revealing the embedded judgements inside “beautifying” tools and how they travel from screens to surgeons’ offices.

The conversation gets personal and urgent. Filters can destabilise your self-image even when you know how they work. Plastic surgery trends among men and women rise as we optimise our 2D selves for Zoom and Instagram. For artists worried about replacement, Gretchen offers a path forward: study art history to know what’s actually new, build a practice that explains why the tool matters, and lean into the messy human qualities machines can’t convincingly fake. If you care about AI, culture, and creative integrity, this one will challenge how you see your feed—and your face.

Enjoy Gretchen's work: https://www.gretchenandrew.com/facetune-portraits/facetune-portraits-gretchen-1

Enjoyed the conversation? Follow the show, share this episode with a friend, and leave a review to help more curious listeners find us.