In this post, Larks argues that the Windfall Clause proposal, under which AI firms would promise to donate a large fraction of their profits should they become extremely profitable, would primarily benefit the management of those firms. This gives managers an incentive to move fast, aggravating race dynamics and in turn increasing existential risk.
https://forum.effectivealtruism.org/posts/ewroS7tsqhTsstJ44/a-windfall-clause-for-ceo-could-worsen-ai-race-dynamics
Holden Karnofsky — Success without dignity: a nearcasting story of avoiding catastrophe by luck
Otto Barten — Paper summary: The effectiveness of AI existential risk communication to the American and Dutch public
Shulman & Thornley — How much should governments pay to prevent catastrophes? Longtermism's limited role
Elika Somani — Advice on communicating in and around the biosecurity policy community
Riley Harris — Summary of 'Are we living at the hinge of history?' by William MacAskill
Riley Harris — Summary of 'Longtermist institutional reform' by Tyler M. John and William MacAskill
Hayden Wilkinson — Global priorities research: Why, how, and what have we learned?
Kelsey Piper — What should be kept off-limits in a virology lab?
Ezra Klein — This changes everything