Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the rationalist and EA communities into audio.
This is: "2018-2019 Long-Term Future Fund Grantees: How did they do?", published by NunoSempere on the Effective Altruism Forum.
Introduction
At the suggestion of Ozzie Gooen, I looked at publicly available information around past LTF grantees. We've been investigating the potential to have more evaluations of EA projects, and the LTFF grantees seemed to represent some of the best examples, as they passed a fairly high bar and were cleanly delimited.
For this project, I personally investigated each proposal without consulting many others. This work was clearly limited by not reaching out to others directly, but requesting external involvement would have increased costs significantly. We were also partly interested in finding out how much we could figure out under this limitation.
Background
During the first two rounds (round 1, round 2) of the LTF Fund, under the leadership of Nick Beckstead, grants went mostly to established organizations and didn't have informative write-ups.
The next few rounds, under the leadership of Habryka et al., have more informative write-ups and a higher volume of grants, which are generally more speculative. At the time, some of the grants were scathingly criticised in the comments. The LTF at this point feels like a different, more active beast than under Nick Beckstead. I evaluated its grants from the November 2018 and April 2019 rounds, meaning that the grantees have had at least two years to produce some legible output. Commenters pointed out that the 2018 LTFF is pretty different from the 2021 LTFF, so it's not clear how much to generalize from the projects reviewed in this post.
Despite the trend towards longer write-ups, the reasoning behind some of these grants remains opaque to me; in some cases the grantmakers have more information than I do and choose not to publish it.
Summary
By outcome
Flag | Number of grants | Funding ($)
More successful than expected | 6 (26%) | $178,500 (22%)
As successful as expected | 5 (22%) | $147,250 (18%)
Not as successful as hoped for | 3 (13%) | $80,000 (10%)
Not successful | 3 (13%) | $110,000 (13%)
Very little information | 6 (26%) | $287,900 (36%)
Total | 23 | $803,650
Not included in the totals or percentages are 5 grants, worth a total of $195,000, which I did not evaluate because of a perceived conflict of interest.
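As a sanity check on the summary table, the per-outcome percentages can be recomputed from the raw counts and dollar amounts given above (a minimal sketch; the dictionary below simply transcribes the table, and percentages are rounded to the nearest whole point, so they may differ by a point from the post's figures):

```python
# Raw figures from the summary table: (number of grants, funding in $).
outcomes = {
    "More successful than expected": (6, 178_500),
    "As successful as expected": (5, 147_250),
    "Not as successful as hoped for": (3, 80_000),
    "Not successful": (3, 110_000),
    "Very little information": (6, 287_900),
}

# Totals over all evaluated grants (excludes the 5 conflict-of-interest grants).
total_grants = sum(n for n, _ in outcomes.values())
total_funding = sum(f for _, f in outcomes.values())

# Recompute each row's share of grants and of funding.
for flag, (n, funding) in outcomes.items():
    print(f"{flag}: {n} ({n / total_grants:.0%}), "
          f"${funding:,} ({funding / total_funding:.0%})")

print(f"Total: {total_grants} grants, ${total_funding:,}")
```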
Method
I conducted a brief Google, LessWrong, and EA Forum search for each grantee and attempted to draw conclusions from the results. However, quite a large fraction of grantees don't have much of an internet presence, so when a quick search turns up nothing, it is difficult to tell whether nothing was produced or whether nothing was posted online. Overall, one could spend a lot of time on an evaluation. I decided not to do that, and instead went for an "80% of the value in 20% of the time"-type evaluation.
Grantee evaluation examples
A private version of this document goes through the grantees one by one, and outlines what public or semi-public information there is about each grant, what my assessment of the grant's success is, and why. I did not evaluate grants for which I had personal information that people had given me in a context where the possibility of future evaluation wasn't at play. I shared that version with some current LTFF fund members, and some reported finding it at least somewhat useful.
However, I don’t intend to make that version public, because I imagine that some people will perceive evaluations as unwelcome, unfair, stressful, an infringement of their desire to be left alone, etc. Researchers who didn’t produce an output despite getting a grant might feel bad about it, and a public negative review might make them feel worse, or have other people treat them poorly. This seems undesirable because I imagine that most grantees were taking risky bets with a high expected value, even i...