The Nonlinear Library: EA Forum
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Personal fit is different from the thing that you already like, published by Joris P on March 18, 2024 on The Effective Altruism Forum.

This is a Draft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked. This draft lacks the polish of a full post, but the content is almost there. The kind of constructive feedback you would normally put on a Forum post is very welcome.

I wrote most of this last year. I also think I'm making a pretty basic point and don't think I'm articulating it amazingly, but I'm trying to write more and can imagine people (especially those newer to EA) finding this useful - so here we go.

Last week[1] I was at an event with a lot of people relatively new to EA - many of them had recently finished the introductory fellowship. Talking through their plans for the future, I noticed that many of them used the concept of 'personal fit' to justify their plans to work on a problem they had already found important before learning about EA.

They would say they wanted to work on combating climate change or increasing gender equality, because:
- They had studied this and felt really motivated to work on it
- Therefore, their 'personal fit' was really good for working on this topic
- Therefore, surely, it was the highest-impact thing they could be doing

I think a lot of them were likely mistaken, in one or more of the following ways:
- They overestimated their personal fit for roles in these (broad!) fields
- They underestimated the differences in impact between career options and cause areas
- They thought that they were motivated to do the most good they could, but in fact they were motivated by a specific cause

To be clear: the ideal standard here is probably unattainable, and I surely don't live up to it.
However, if I could stress one thing, it would be that people scoping out their career options could benefit from first identifying high-impact career options, and only second thinking about which ones they might have a great personal fit for - not the other way around.

[1] This was last year.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org