Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the rationalist and EA communities into audio.
This is: Objections to Value-Alignment between Effective Altruists, published by CarlaZoeC on the Effective Altruism Forum.
With this post I want to encourage an examination of value-alignment between members of the EA community. I lay out reasons to believe that strong value-alignment between EAs can be harmful in the long run.
The EA mission is to bring more value into the world. This is a rather uncertain endeavour and many questions about the nature of value remain unanswered. Errors are thus unavoidable, which means the success of EA depends on having good feedback mechanisms in place to ensure mistakes can be noticed and learned from. Strong value-alignment can weaken feedback mechanisms.
EAs prefer to work with people who are value-aligned because they set out to maximise impact per resource expended. It is efficient to work with people who agree. But a value-aligned group is likely to be intellectually homogeneous and prone to breeding implicit assumptions or blind spots.
I have also noticed particular tendencies in the EA community (elaborated in the section on homogeneity, hierarchy and intelligence) which generate additional cultural pressures towards value-alignment, make the problem worse over time and lead to a gradual deterioration of the corrigibility mechanisms around EA.
Intellectual homogeneity is efficient in the short term but counter-productive in the long run. Value-alignment allows for short-term efficiency, but the true goal of EA – to be effective in producing value in the long term – might not be met.
Disclaimer
All of this is based on my experience of EA over the timeframe 2015-2020. Experiences differ, and I share this to test how generalisable my experiences are. I used to hold my views lightly, and I still give credence to other views on developments in EA. But I am getting more, not less, worried over time, particularly because other members have expressed similar views and worries to me but have not spoken out about them because they fear losing respect or funding. This is precisely the erosion of critical feedback mechanisms that I point out here. I have a solid but not unshakable belief that the theoretical mechanism I outline is correct, but I do not know to what extent it takes effect in EA. Then again, I am also not sure whether those who disagree with me will know to what extent this mechanism is at work in their own community. What I am sure of, however (on the basis of feedback from people who read this post pre-publication), is that my impressions of EA are shared by others within the community, and that they are the reason why some have left EA or never quite dared to enter. This alone is reason for me to share this – in the hope that a healthy approach to critique and a willingness to change in response to feedback from the external world are still intact.
I recommend that the impatient reader skip forward to the section on Feedback Loops and Consequences.
Outline
I will outline the reasons that lead EAs to prefer value-alignment and search for definitions of value-alignment. I then describe cultural traits of the community which play a role in amplifying this preference, and finally evaluate what effect value-alignment might have on EA's feedback loops and goals.
Axiomaticity
Movements make both explicit and obscure assumptions. They make explicit assumptions: they stand for something and exist with some purpose. An explicit assumption is, by my definition, one that has been examined and consciously agreed upon.
EA explicitly assumes that one should maximise the expected value of one's actions with respect to a goal. Goals differ between members but mostly do not diverge greatly. They may be a reduction of suffering, the maximisation of hedons in the universe, the fulfilment of personal preferences, or others. But irrespective of individual goals, EAs mostly agree that resources sh...