Why Hasn't Effective Altruism Grown Since 2015?

Follow-up post here. See discussions on r/scc, LessWrong and EA Forum.

Here’s a chart of GiveWell’s annual money moved. It rose dramatically from 2014 to 2015, then more or less plateaued:

Open Philanthropy doesn’t provide an equivalent chart, but they do have a grants database, so I was able to compile the data myself. It peaks in 2017, then falls a bit and plateaus:

(Note that GiveWell and Open Philanthropy didn’t formally split until 2017. ~GiveWell records $70.4m from Open Philanthropy in 2015, which isn’t included in Open Philanthropy’s own records. I’ve emailed them for clarification, but in the meantime, the overall story is the same: a rapid rise followed by several years of stagnation.~ **Edit: I got a reply from OpenPhil. Basically, they say grants are sometimes recorded a year off, so what GiveWell says is 2015 may be listed as 2016 in OpenPhil’s database. See [0] for their full reply.**)
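
If you want to reproduce the aggregation, here’s a minimal sketch, assuming a CSV export of the grants database with “Date” and “Amount” columns (the actual column names and formats may differ; the same approach works for the Good Ventures database discussed below):

```python
import pandas as pd

# Load the grants export and parse the award dates.
grants = pd.read_csv("openphil_grants.csv", parse_dates=["Date"])

# If amounts come through as strings like "$1,000,000", strip the formatting.
grants["Amount"] = (grants["Amount"].astype(str)
                    .str.replace(r"[$,]", "", regex=True)
                    .astype(float))

# Total money granted per award year.
by_year = grants.groupby(grants["Date"].dt.year)["Amount"].sum()
print(by_year)
```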

Finally, here’s the Google Trends result for “Effective Altruism”. It grows quickly starting in 2013, peaks in 2017, then falls back down to around 2015 levels. Broadly speaking, interest has been about flat since 2015.

If this data isn’t surprising to you, it should be.

Several EA organizations actively work on growing the community, have funded that growth for years, and view it as a priority:

  • 80,000 Hours: The Problem Profiles page lists “Building effective altruism” as a “highest-priority area”, right up there with AI and existential risk.
  • Open Philanthropy: Effective Altruism is one of their Focus Areas. They write “We’re interested in supporting organizations that seek to introduce people to the idea of doing as much good as possible, provide them with guidance in doing so, connect them with each other, and generally grow and empower the effective altruism community.”
  • EA Funds: One of the four funds is dedicated to Effective Altruism Infrastructure. Part of its mission reads: “Directly increase the number of people who are exposed to principles of effective altruism, or develop, refine or present such principles”

So if EA community growth is stagnating despite these efforts, it should strike you as very odd, even somewhat troubling. Open Philanthropy decided to start funding EA community growth in 2015/2016 [1]. It’s not as if this is a recent effort.

As long as money continues to pour into the space, we ought to understand precisely why growth has stalled. The question is threefold:

  • Why was growth initially strong?
  • Why did it stagnate around 2015-2017?
  • Why has the money spent on growth since then failed to make a difference?

Here are some possible explanations.

1. Alienation

Effective Altruism makes large moral demands, and frames things in a detached quantitative manner. Utilitarianism is already alienating, and EA is only more so.

This is an okay explanation, but it doesn’t explain why growth started strong in the first place, and only then tapered off.

2. Decline is the Baseline

Perhaps EA would have otherwise declined, and it is only thanks to the funding that it has even succeeded in remaining flat.

I’m not sure how to distinguish between these two scenarios, but it might be worth spending more time on. If the goal is merely community maintenance, different projects may be appropriate.

3. The Fall of LessWrong and the Rise of SlateStarCodex

Several folk sources indicate that LessWrong went through a decline in 2015. A brief history of LessWrong says “In 2015-2016 the site underwent a steady decline of activity leading some to declare the site dead.” The History of Less Wrong writes:

Around 2013, many core members of the community stopped posting on Less Wrong, because of both increased growth of the Bay Area physical community and increased demands and opportunities from other projects. MIRI’s support base grew to the point where Eliezer could focus on AI research instead of community-building, Center for Applied Rationality worked on development of new rationality techniques and rationality education mostly offline, and prominent writers left to their own blogs where they could develop their own voice without asking if it was within the bounds of Less Wrong.

Specifically, some blame the decline on SlateStarCodex:

With the rise of Slate Star Codex, the incentive for new users to post content on Lesswrong went down. Posting at Slate Star Codex is not open, so potentially great bloggers are not incentivized to come up with their ideas, but only to comment on the ones there.

In other words, SlateStarCodex and LessWrong catered to similar audiences, and SlateStarCodex won out. [2]

This view is somewhat supported by Google Trends, which shows a subtle decline in searches for “Less Wrong” after 2015, until a possible rebirth in 2020.

Except SlateStarCodex also hasn’t been growing since 2015:

The recent data is distorted by the NYT incident, but the story is basically the same: a rapid rise to prominence in 2015, followed by a long plateau. So maybe some users left for Slate Star Codex in 2015, but that doesn’t explain why neither community saw much growth from 2015 to 2020.

And here’s the same chart, omitting the last 12 months of NYT-induced frenzy:

4. Community Stagnation was Caused by Funding Stagnation

One possibility is that there is no strange hidden cause behind the widespread stagnation: funding slowed down, and everything else slowed down with it. I’m not sure what the precise mechanism would be, but this seems plausible.

Of course, the question now becomes: why did Open Philanthropy’s giving slow? This isn’t as mysterious, since it’s not an organic process. Almost all the money comes from Good Ventures, the vehicle for Dustin Moskovitz’s giving.

Did Dustin find another pet cause to pursue instead? It seems unlikely. In 2019, Good Ventures provided $274 million in total, nearly all of which ($245 million) went to Open Philanthropy recommendations.

Let’s go a level deeper and take a look at the Good Ventures grant database aggregated by year:

It looks a lot like the Open Philanthropy chart! They also peaked in 2017, and have been in decline ever since.

So this theory boils down to:

  • The EA community stopped growing because EA finances stopped growing
  • EA finances stopped growing because Good Ventures stopped growing
  • Good Ventures stopped growing because the wills and whims of billionaires are inscrutable?

To be clear, the causal mechanism and direction for the first piece of this argument remains speculative. It could also be:

  • The EA community stopped growing
  • Therefore, there was limited growth in high impact causes
  • Therefore, there was no point in pumping more money into the space

This is plausible, but seems unlikely. Even if you can’t give money to AI Safety, you can always give more money to bed nets.

5. EA Didn’t Stop Growing, Google Trends is Wrong

Google Trends is an okay proxy for actual interest, but it’s not perfect. Basically, it measures the popularity of search queries, but not the popularity of the websites themselves. So maybe instead of searching “effective altruism”, people just went directly to forum.effectivealtruism.org and Google never logged a query.

Are there other datasets we can look at?

Giving What We Can doesn’t release historical results, but I was able to use archive.org to see their past numbers, and compiled this dataset of money pledged [3] and member count:

So is the entire stagnation hypothesis disproved? I don’t think so. Google Trends tracks active interest, whereas Giving What We Can tracks cumulative interest. So a stagnant rate of active interest is compatible with increasing cumulative totals. Computing the annual growth rate for Giving What We Can, we see that it also peaks in 2015, and has been in decline ever since:
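
Concretely, the growth-rate computation just converts the cumulative totals into year-over-year changes. A minimal sketch, with hypothetical placeholder numbers rather than the actual Giving What We Can data:

```python
# Cumulative member counts by year (hypothetical placeholder values).
members = {2013: 340, 2014: 700, 2015: 1400, 2016: 2000}

# Year-over-year growth rate: new members this year / total last year.
years = sorted(members)
for prev, curr in zip(years, years[1:]):
    growth = members[curr] / members[prev] - 1
    print(curr, f"{growth:.0%}")
```

A cumulative series can keep rising indefinitely even while these year-over-year rates fall, which is exactly the pattern described above.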

To sum up:

  • Alienation is not a good explanation; it has always been a factor
  • EA may have declined more if not for the funding
  • SlateStarCodex may have taken some attention, but it also hasn’t grown much since 2015
  • Funding stagnation may cause community stagnation; the causal mechanism is unclear
  • Giving What We Can membership has grown, but it measures cumulative rather than active interest. Their rate of growth has declined since 2015.

A Speculative Alternative: Effective Altruism is Innate

You occasionally hear stories about people discovering LessWrong or “converting” to Effective Altruism, so it’s natural to think that with more investment we could grow faster. But maybe that’s all wrong.

Thing of Things once wrote:

I think a formative moment for any rationalist-- our “Uncle Ben shot by the mugger” moment, if you will-- is the moment you go “holy shit, everyone in the world is fucking insane.” [4]

That’s not exactly scalable. There will be no Open Philanthropy grant for providing experiences of epistemic horror to would-be effective altruists.

Similarly, from John Nerst’s Origin Story:

My favored means of procrastination has often been lurking on discussion forums. I can’t get enough of that stuff… Reading forums gradually became a kind of disaster tourism for me. The same stories played out again and again, arguers butting heads with only a vague idea about what the other was saying but tragically unable to understand this.

…While surfing Reddit, minding my own business, I came upon a link to Slate Star Codex. Before long, this led me to LessWrong. It turned out I was far from alone in wanting to understand everything in the world, form a coherent philosophy that successfully integrates results from the sciences, arts and humanities, and understand the psychological mechanisms that underlie the way we think, argue and disagree.

It’s not that John discovered LessWrong and “became” a rationalist. It’s more like he always had this underlying compulsion, and eventually found a community where it could be shared and used productively.

In this model, Effective Altruism initially grows quickly as proto-EAs discover the community, then hits a wall as it saturates the relevant population. By 2015, everyone who might be interested in Effective Altruism has already heard about it, and there’s not much more room for growth no matter how hard you push.
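
This story is just a saturation model. Here’s a toy simulation, with entirely made-up parameters, showing how recruitment from a fixed pool of “proto-EAs” produces exactly this shape, rapid early growth followed by a wall:

```python
# Toy saturation model: growth is fast while the pool of potential
# converts is untapped, then flattens as the pool is exhausted.
# All numbers are illustrative assumptions, not estimates.
pool = 10_000      # hypothetical population of proto-EAs
adopters = 10.0    # initial community size
rate = 0.9         # annual rate at which adopters recruit from the pool

for year in range(2009, 2021):
    adopters += rate * adopters * (1 - adopters / pool)
    print(year, int(adopters))
```

In this model, no amount of spending changes the ceiling; it only changes how quickly you hit it.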

One last piece of anecdotal evidence: Despite repeated attempts, I have never been able to “convert” anyone to effective altruism. Not even close. I’ve gotten friends to agree with me on every subpoint, but still fail to sell them on the concept as a whole. These are precisely the kinds of nerdy and compassionate people you might expect to be interested, but they just aren’t. [5]

In comparison, I remember my own experience taking to effective altruism the way a fish takes to water. When I first read Peter Singer, I thought “yes, obviously we should save the drowning child.” When I heard about existential risk, I thought “yes, obviously we should be concerned about the far future.” This didn’t take slogging through hours of blog posts or books; it just made sense. [6]

Some people don’t seem to have that reaction at all, and I don’t think it’s a failure of empathy or cognitive ability. Somehow it just doesn’t take.

While there does seem to be something missing, I can’t express what it is. When I say “innate”, I don’t mean it’s true from birth. It could be the result of a specific formative moment, or an eclectic series of life experiences. Or some combination of all of the above.

Fortunately, we can at least start to figure this out through recollection and introspection. If you consider yourself an effective altruist, a rationalist or anything adjacent, please email me about your own experience. Did Yudkowsky convert you? Was reading LessWrong a grand revelation? Was the real rationalism deep inside of you all along? I want to know.


I’m at applieddivinitystudies@gmail.com, or if you read the newsletter, you can reply to the email directly. I might quote some of these publicly, but am happy to omit yours or share it anonymously if you ask.

Data for Open Philanthropy and Good Ventures is available here. Data for Giving What We Can is here. If you know how Open Philanthropy’s grant database accounts for funding before it formally split off from GiveWell in 2017, please let me know.

Disclosure: I applied for funding from the EA Infrastructure Fund last week for an unrelated project.


Footnotes
[0] From Open Philanthropy over email:

Hi, thanks for reaching out.

Our database’s date field denotes a given grant’s “award date,” which we define as the date when payment was distributed (or, in the case of grants paid out over multiple years, when the first payment was distributed). Particularly in the case of grants to organizations based overseas, there can be a short delay between when a grant is recommended/approved and when it is paid/awarded. (For more detail on this process, including average payment timelines, see our Grantmaking Stages page.) In 2015/2016, these payment delays resulted in top charity grants to AMF, DtWI, SCI, and GiveDirectly totaling ~$44M being paid in January 2016 and falling under 2016 in your analysis even as GiveWell presumably counted those grants in its 2015 “money moved” analysis.

Payment delays and “award date” effects also cause some artificial lumpiness in other years. For example, some of the largest top charity grants from the 2016 giving season were paid in January 2017 (SCI, AMF, DtWI) but many of the largest 2017 giving season grants were paid in December 2017 (Malaria Consortium, No Lean Season, DtWI). This has the effect of artificially inflating apparent 2017 giving relative to 2018. Other multi-year grants are counted as awarded entirely in the month/year the first payment was made – for example, our CSET grant covering 2019-2023 first paid in January 2019. So I wouldn’t read too much into individual year-to-year variation without more investigation.

Hope this helps.

[1] For more on OpenPhil’s stance on EA growth, see this note from their 2015 progress report:

Effective altruism. There is a strong possibility that we will make grants aimed at helping grow the effective altruist community in 2016. Nick Beckstead, who has strong connections and context in this community, would lead this work. This would be a change from our previous position on effective altruism funding, and a future post will lay out what has changed. [emphasis mine]

[2] For what it’s worth, the vast majority of SlateStarCodex readers don’t actually identify as rationalists or effective altruists.

[3] My Giving What We Can dataset also has a column for money actually donated, though the data only goes back to 2015.

[4] I’m conflating effective altruism with rationalism in this section, but I don’t think it matters for the sake of this argument.

[5] For what it’s worth, I’m typically pretty good at convincing people to do things outside of effective altruism. In every other domain of life, I’ve been fairly successful at getting friends to join clubs, attend events, and so on, even when it’s not something they were initially interested in. I’m not claiming to be exceptionally good, but I’m definitely not exceptionally bad.

But maybe this shouldn’t be too surprising. Effective Altruism makes a much larger demand than pretty much every other cause. Spending an afternoon at a protest is very different from giving 10% of your income.

Analogously, I know a lot of people who intellectually agree with veganism, but won’t actually do it. And even that is (arguably) easier than what effective altruism demands.

[6] In one of my first posts, I wrote:

Before reading A Human’s Guide to Words and The Categories Were Made For Man, I went around thinking “oh god, no one is using language coherently, and I seem to be the only one seeing it, but I cannot even express my horror in a comprehensible way.” This felt like a hellish combination of being trapped in an illusion, questioning my own sanity, and simultaneously being unable to scream. For years, I wondered if I was just uniquely broken, and living in a reality that no one else seemed to see or understand.

It’s not like I was radicalized or converted. When I started reading Lesswrong, I didn’t feel like I was learning anything new or changing my mind about anything really fundamental. It was more like “thank god someone else gets it.”

When did I start thinking this way? I honestly have no idea. There were some formative moments, but as far back as I can remember, there was at least some sense that either I was crazy, or everyone else was.

The Byrne Hobart Portfolio

Alternative title: Byrne Hobart is 64x better than the average hedge fund.

On December 14, Byrne wrote: “Disclosure: I’m long a small amount of FireEye now that I understand it a bit better. 300,000 customers and 400 of the Fortune 500 is a large addressable market.”

Then this happened:

Over the next 7 days, Byrne’s investment shot up 66%. If this trade were representative of a typical week, it would indicate annual returns of 305,439,776,880%. Starting 2020 with $1, he would have been the richest man in the world by the end of the year, and 50 times richer than Jeff Bezos by today.
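
For the curious, that figure is just weekly compounding. A minimal sketch of the arithmetic (the exact output depends on the unrounded weekly return):

```python
# Compound a one-week 66% gain over a full year: (1 + r) ** (365 / 7).
weekly_return = 0.66
annual_multiple = (1 + weekly_return) ** (365 / 7)
print(f"{annual_multiple:,.0f}x")  # ~3.0e11, i.e. roughly 300 billion x
```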

Interested in whether this performance was typical, I searched back issues of his newsletter for “disclosure”, wrote down every ticker Byrne claimed a stake in, and inferred his annualized returns (a sketch of the computation follows the qualifications below).

None of this is financial advice, either on my part or on Byrne’s. All returns are an approximation, not intended to reflect Byrne Hobart’s personal earnings. Data is available here.

Results:

  • I estimate annualized returns of 478.5% for Byrne’s portfolio
  • In contrast, hedge funds average around 7.5% (Reuters, Investopedia)
  • The S&P has returned 9.81% annually since 1994, 14.4% in 2020, 16.7% over the last 12 months, and 77.6% annualized since the March 2020 bottom
  • Not including Bitcoin, Byrne’s annualized returns are down to 84.2%

Qualifications:

  • Disclosures in The Diff do not represent Byrne’s entire portfolio
  • He sometimes says stuff like “i’m long a bit”, but I just weight everything evenly
  • I assume he bought on the date the ticker was first mentioned, and has not sold
  • All “current prices” are from around 3pm ET on 2021/02/16
  • I interpret “Brazilian equities” as EWZ
  • I interpret “hertz puts” as a short on Hertz
  • I interpret every long as simple stock ownership
  • I interpret every short as simply an inverse long
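
Under those assumptions, the estimate itself is simple. Here’s a minimal sketch; the tickers, dates, and prices below are illustrative placeholders, not Byrne’s actual positions:

```python
from datetime import date

# (ticker, date first mentioned, price then, price at 3pm ET 2021-02-16, short?)
# Placeholder values for illustration only.
positions = [
    ("FEYE", date(2020, 12, 14), 15.00, 24.90, False),
    ("EWZ",  date(2020, 6, 1),   30.00, 34.00, False),
]

def annualized(p0, p1, d0, d1, short=False):
    """Annualized return, treating a short as a simple inverse long."""
    years = (d1 - d0).days / 365
    gross = p1 / p0
    if short:
        gross = 2 - gross  # gain what the corresponding long would lose
    return gross ** (1 / years) - 1

today = date(2021, 2, 16)
returns = [annualized(p0, p1, d0, today, s) for _, d0, p0, p1, s in positions]
portfolio = sum(returns) / len(returns)  # equal weight across disclosures
print(f"{portfolio:.1%}")
```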

Thanks to Byrne for reviewing a draft of this post. He writes:

I’m ok with it as long as you point out that I’m definitely not writing an investment-advice newsletter, and that the returns are an approximation.

Appendix: Modern Portfolio Theory, or Why This Isn’t Investment Advice

Byrne Hobart writes “I own shares of TSM”. He insists this is not investment advice, but why not? Byrne is smart, and knows much more about finance than (presumably) you or I. Why not mimic his portfolio?

Just as truth is dependent on context and purpose, investments are dependent on your portfolio and risk tolerance.

Do you believe in TSM because you’re bullish on the semiconductor market as a whole? If so, why take on exposure to East Asian geopolitics as well? Instead of investing in TSM, you would be better off with a diversified portfolio of top semiconductor fabs and designers, or better yet, a professionally designed ETF.

Alternatively, do you believe in TSM because you think they’ll perform well relative to competitors, but don’t have an opinion on the semiconductor industry as a whole? If so, you should hedge by shorting Intel.

Or perhaps you think TSM is likely to do well, absent some catastrophic risk such as Taiwan being destroyed by an earthquake. In that case, you can use various techniques to remain long while benefiting from extreme volatility.

Okay, but say you don’t care about any of this, you just want a hot stock tip, and as the resident smart finance person, Byrne is in as good a position as anyone to provide it.

If so, you’re still not getting it. The point of MPT is that there is no such thing as a “stock tip”. From Wikipedia: “[MPT’s] key insight is that an asset’s risk and return should not be assessed by itself, but by how it contributes to a portfolio’s overall risk and return”. Or, at greater length, from Markowitz’s 1959 monograph:

This illustrates a basic principle: the security which is risky or conservative, appropriate or inappropriate, for one portfolio may be the opposite for another. One must think of selecting a portfolio as a whole, not securities per se (114)
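
To make the point concrete, here’s the standard two-asset case. The variance of a portfolio with weights $w_1, w_2$ is:

$$\sigma_p^2 = w_1^2\sigma_1^2 + w_2^2\sigma_2^2 + 2\,w_1 w_2\,\rho_{12}\,\sigma_1\sigma_2$$

The cross term depends on the correlation $\rho_{12}$, so the risk TSM adds depends on what you already hold: small in a portfolio uncorrelated with semiconductors, large in one already full of them.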

This is all to say that “long TSM” does not mean that you should go out and buy shares.

Disclosure: I’m long TSM.

FAQ

Peter Thiel once warned me about this kind of indefinite optimism! Plus, I heard index funds are communist.
You know what’s absolutely not definite optimism? Taking stock tips from strangers on the internet. If you want to bet on something, bet on yourself.

The “communism” thing is about (supposed) negative externalities. You are personally better off just buying index funds instead of financially martyring yourself in the name of allocative efficiency.

Isn’t this all selection bias? There are lots of people on the internet, but relatively few hedge funds. Plus, maybe you just waited until his returns looked good to write this.
Byrne is the only finance person I follow. His newsletter is ranked #2 in Substack’s Technology section, so it’s not like he’s a random internet person. I first thought about conducting this analysis on Wednesday, then finished it this morning.

Having said that: if I hadn’t found out that Byrne was up a hilariously high amount, I probably would not have published.

So it’s not selection bias for me, but it might be for you!

Why does he even write the disclosures if it’s not advice?
Byrne writes:

I’d emphasize that my writing is not investment advice, that I don’t trade everything I write about (at all!) or write about everything I own/short, and that when I’m disclosing it’s meant to show that I have a financial interest that might color my views, not that I’m specifically endorsing something as an investment. For small companies, I generally have to make the decision about whether it’s more interesting to trade it or to own it; I would feel uncomfortable writing up a microcap stock I owned, because I would potentially move the stock price. Meanwhile, gold is one of the most liquid assets in the world, so it’s not like my newsletter would affect its valuation at all.

Bonus

  • Not including Byrne’s short positions, he would be up 723%. Of the 5 short positions listed, 4 are way down. The only exception is Hertz, and even there he writes “I did lose money on the Hertz options position, because the implied volatility I paid for was so high.”
  • While searching back issues, I came across this great line: “Disclosure: I am long Bitcoin, and occasionally sell a little for diversification purposes. I’m also long some gold, which I have not had to sell for diversification purposes.”
  • If you’ve made it this far, you’ll probably enjoy this Byrne Hobart fan fiction.

[Edit 02/28/2021]: An earlier version read:

The S&P has returned 9.81% annually since 1994, 14.4% in 2020, and 16.7% the last 12 months, and 113.7% annualized since the May 2020 bottom

This was a mistake. The bottom was in March 2020, giving annualized returns of 76.6%, not 113.7%.

The Irony of "Progress Studies"

The term “nominative determinism” captures the idea that name is destiny. Storm Field is the real given name of a meteorologist, Igor Judge the real name of a justice. As Peter Thiel once put it, “the names of companies are often very predictive of future failure or success.”

I favor the contrary hypothesis: nominative anti-determinism. Uber Express is the slowest of their offerings; Operation Iraqi Freedom was a violation of sovereignty. This occurs not by coincidence, but as the result of desperate overcompensation. As John Searle once put it, “anything that calls itself ‘science’ probably isn’t.”

Viewed through this lens, we can finally understand what Progress Studies is actually about. It is neither a real field of study, nor is it about progress. Instead, its proponents are mired in a fixation on the past.

Patrick Hsu says his biggest dream is to build the Bell Labs of biomedical research. Adam Marblestone, new Manhattan Projects. The Mercatus Center wants to know: “What’s Your Moonshot?”

Why do the ostensible leaders in innovation insist on defining themselves by the glories of past generations?

I don’t know how Robert Oppenheimer conceived of the actual Manhattan Project, but I’ll bet it was not “the transcontinental railroad of nuclear weapons.”


Frequently Proposed Answers

We haven’t accomplished anything since 1970.
Whether or not there is a stagnation of some sort, it would be absurd to suggest we haven’t had any successes. We got a rover on Mars, developed genome editing, nuclear power plants, vaccines and cell phones.

Jason Crawford suggests the accomplishments just haven’t been as visible, or as unambiguous. There was no parade for quantum supremacy. No one is building the Human Genome Project for X.

It was all war funding, and war-time immigrants.
Per Nintil, we may want to get more specific.

The Manhattan Project was undertaken to win WWII. The Apollo Program was meant to win the Cold War. Both were powered by immigrant scientists (Leo Szilard, Enrico Fermi, Edward Teller, von Braun [1]). There is no such confluence of funding, urgency and talent aggregation today, and we are in need of a scientific equivalent of war.

As for Bell Labs, it seems to be the result of a momentary void in alternative sources of research funding. We might be accomplishing just as much today, but spread out across dozens of universities, and without the same concentration of talent.

There’s a 50-year lag before we’re allowed to experience nostalgia.
Even in 2013, Google was branding X as a “moonshot factory” and its chief executive as “Captain of Moonshots”. So at best it is a 44-year lag.

I would be shocked if in another few years we start glorifying accomplishments from the 1970s that were previously ignored.


Footnotes
[1] Of course, von Braun was less of a refugee and more of a surrendered Nazi, but his presence in the US was a product of the war all the same.