Could AI Have Moral Worth?
My guest today, Josh Gellers, Dean at the University of North Florida, argues that AI has moral worth. More specifically, he thinks that AI has been used to create new biological organisms that meet the criteria for moral worth. Does that mean that AI itself has moral worth? Should we think that if something is not natural it lacks moral worth? All this and more in today’s episode.

Advertising Inquiries: https://redcircle.com/brands
Don’t Believe the Hype About AI Job Displacement
My guests today - Professor Kate Vredenburgh and VR specialist Lauren Wong - argue that there are at least two strong reasons for calming down: first, AI isn’t good enough to replace us at our jobs. Second, even if it were, it’s up to us to develop AI in a way that supports rather than replaces us. We also talk about whether AI adoption is struggling for the same reasons the metaverse never succeeded: we’re failing to appreciate how to get people to justifiably buy in to the technology.
Does Social Media Diminish Our Autonomy?
Are we dependent on social media in a way that erodes our autonomy? After all, platforms are designed to keep us hooked and coming back for more. And we don’t really know the law of the digital lands, since the algorithms influence how we relate to each other online in unknown ways. Then again, don’t we bear a certain degree of personal responsibility for how we conduct ourselves, online or otherwise? What the right balance is, and how we can encourage or require greater autonomy, is our topic of discussion today. Originally aired in season two.
How AI Robs Us of Meaning
Much of what we find fulfilling in life isn’t the having but the doing. It’s the process of working through a problem, taking action, doing what needs to be done. But that meaning may be on the verge of being greatly diminished; so contends my guest, Sven Nyholm, Professor of Ethics of AI at LMU Munich. I push back in various ways: how real and/or imminent is this threat, really? And who is responsible for staving it off?
Should Anthropic Have Allowed Autonomous Weapons Systems?
Anthropic just got the axe from the U.S. government for refusing to allow the Department of Defense (War?) to use Claude for autonomous weapons systems and mass surveillance. For the first 15 minutes of this conversation with Michael Horowitz - professor at UPenn, Senior Fellow for Technology and Innovation at the Council on Foreign Relations, and formerly Deputy Assistant Secretary of Defense for Force Development and Emerging Capabilities and Director of the Emerging Capabilities Policy Office at the DoD - we talk explicitly about Anthropic vs. the U.S. government. Why Anthropic did it, why this is more about personality than policy, and more. In the remaining 45 minutes you’ll hear a replay of an episode Michael and I did back in October, in which Michael defends the functional and ethical importance of potentially using AI for autonomous weapons systems.