I Couldn't Read Papers Until I Learned The Greek Alphabet

I’ve always found it incredibly difficult to read papers. For years, I thought they were just written poorly, or that academic writing was inherently cryptic, but mostly I just thought I was stupid.

Here’s an example. The author starts out:

Some technologies save lives—new vaccines, new surgical techniques, safer highways.

Which is simple enough, and easy to parse. Then a paragraph later, he hits you with:

The agent is endowed with some initial stock of knowledge that generates a consumption level c and has a utility function u(c) = ū + c^(1 – γ)/(1 – γ).

I want to clarify that I don’t have math anxiety. I would actually consider myself pretty decent at math, and am certainly comfortable with the basics required to understand an exponential.

But for whatever reason, I just could not parse that formula.

Today, something finally clicked, and I realized it’s not a failure of reading; it’s a failure of over-vocalization.

There are different varieties of inner speech, and personally, I literally cannot read without sounding the words out in my head. It is equivalent to speaking silently with my lips shut, or almost like whispering very, very softly.

The problem is, I don’t know Greek! When I read “u(c) = ū + c^(1 – γ)/(1 – γ)”, I can’t sound it out in my head because I don’t know how to pronounce γ.

That’s literally it. That’s the stupid and totally trivial problem behind so many of my failures. It’s unbelievably silly in retrospect, as so much of life is.

So to put it to the test, I downloaded Anki, found some Greek alphabet flash cards, and started learning. And now? It’s dramatically easier. It’s not even that I’m reading 2x faster; it’s that I’m able to read papers I would have previously given up on entirely.

This would all end up being really funny and stupid and embarrassing in an “aw shucks” kind of way, except that it totally screwed me for years! I have no idea what else I’m missing, or what other low-hanging fruit could be enriching my intellectual life. I also don’t know how many other people are suffering from this same problem and just think they’re irreparably stupid.

If this describes you, or if you’re not sure and want to give it a try, here’s Anki, here’s the flash card deck, and best of luck!


[Edit: 03/23/21]: It turns out reading silently was not even normal until a couple hundred years ago, though there’s some debate over exactly when the shift happened, and whether reading silently was considered difficult or just unusual at various times. Some monks were made to scribe silently in the 9th century. In the 4th century, Augustine of Hippo mentions Ambrose reading silently, but seems surprised by his ability to do so. This Quartz article indicates that reading silently may have been unusual until the 18th century.

Replying to Robert Wiblin on Young Rationalists

[Edit 2021/02/07] I got a couple ages wrong in the original spreadsheet. Mean/median age at founding was previously listed as 26, but is actually 27. Mean and median age in 2020 were listed as 36 and 35, but are actually both 37. This very slightly weakens my arguments. The histograms were also off and have been corrected. Thanks to Gytis Daujotas for catching these.

In response to Where are all the Successful Rationalists, Robert Wiblin writes:
The EA community seems to have a lot of very successful people by normal social standards, pursuing earning to give, research, politics and more… Typically they aren’t yet at the top of their fields but that’s unsurprising as most are 25-35.

He’s basically right. On the SSC 2019 survey, the median reader was 30. [1]

So Wiblin’s right, but his comment raises a much more important question: why do rationalists think being young is incompatible with being at the top of your field?

Take, for example, Patrick Hsu. He’s 29, an assistant professor at Berkeley, and has his name on several seminal CRISPR papers. He reads and endorses Alexey Guzey, so he’s clearly into weird blogs on the internet. It’s not unreasonable to think that people like this should be part of the rationality community.

Or to take a more visible case, consider all the very young startup founders.

Brian Timar looked at the companies he’s interested in and found that the average and median age at founding were both 30. If you scan YC’s top 100 companies, many of the founders were under 35 when they started:

92% of founders were under 30 when they started their companies; mean and median age were both 27.

But okay, we might not expect to know about founders who are just getting started. You’re not successful until your company actually succeeds. So how old are those same founders now? Still pretty young.

38% are still under 35, and 85% are under 40. Mean and median are both 37.

Data for both charts here.

Granting Wiblin’s point about youth, I’ll ask again: where are all the successful rationalists?


It gets a lot worse when you consider that ideologically, rationalists should be uniquely well positioned to start a company.

Linear Returns from Wealth
For a normal person, the expected financial value of a startup may be high, but the expected returns to personal happiness are very low. A billion dollars will probably only make you a little bit happier than a million, so it makes sense to be risk averse and keep your day job. But for a rationalist utilitarian, returns from wealth are perfectly linear! Every dollar you earn is another dollar you can give to prevent malaria. So when it comes to earning, rationalists ought to be risk-neutral, and skew more heavily than normal people towards starting companies.

Willingness to be Weird and Lame
Paul Graham says “One of the biggest things holding people back from doing great work is the fear of making something lame.” Meanwhile, rationalists totally disregard this fear, spending years working on things no one else cares about. Will MacAskill has been speaking for years on the importance of keeping EA weird, but even before that there was a decade of posts about nootropics, longevity, and superintelligence, long before any of it was cool.

Another panel from back in 2009: “I think we can fairly say that we’re all, Peter maybe less so, not afraid of being weird”

Low Probability High Return Bets
Rationalists are also already into doing things with a tiny probability of huge impact, as described in OpenPhil’s Hits-based Giving. This is the entire justification for caring about things like AI Safety. You multiply out the probabilities, and preventing even a 1-in-a-thousand chance of extinction turns out to be a very effective use of time.

Slant Towards Tech
Many rationalists are in the UK, but today the epicenter skews more towards the Bay Area. As I wrote last time, 40% work in software engineering, so they should be relatively well positioned to start companies.

So seriously, where are all the successful rationalists?


Many qualities are ascribed to startup founders: visionary, optimist, contrarian, workaholic.

What you don’t hear is founders praised for their intellectual honesty.

That shouldn’t come as a surprise. It’s no secret that you need a kind of unreasonable self-confidence to pitch VCs. Less discussed but analogous is the process of recruiting early employees. It is not very compelling to offer: “Come work for me; if we’re incredibly lucky, there’s a minuscule chance you’ll get 0.1% of a billion-dollar company, which comes out to $500 thousand after dilution, $250 thousand after tax, over 4 years, minus the strike price.”
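To spell out the arithmetic in that hypothetical offer (a quick sketch; the ~50% dilution and ~50% tax rate are just the round numbers implied by the pitch above, not real figures):

```python
# Back-of-the-envelope value of the hypothetical early-employee offer above.
equity = 0.001                    # 0.1% of the company
exit_value = 1_000_000_000        # the billion-dollar outcome

gross = equity * exit_value       # $1,000,000 on paper
after_dilution = gross * 0.5      # $500,000 (implied ~50% dilution)
after_tax = after_dilution * 0.5  # $250,000 (implied ~50% tax)
per_year = after_tax / 4          # spread over a 4-year vesting schedule

print(f"${per_year:,.0f} per year, before subtracting the strike price")
# -> $62,500 per year
```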

Alex Danco asks “Are Founders Allowed to Lie?” It’s a good piece and you should read it, but the fact that he even has to ask means it’s possible the answer is “yes”, which is just not something any normal industry says about its leaders.

The easiest explanation is that founders really are just consciously manipulative, but I worry we’re underestimating how hard that would be. It takes enormous energy to maintain a lie, and tremendous sociopathy to do so consciously. From what I can tell, a lot of these founders are actually disproportionately philanthropic. I can’t rule out that this is just a PR move or whatever, but this whole idea just feels somewhat extreme and conspiratorial.

So okay, maybe it’s unconscious? Maybe founders are uniquely out of touch with reality and genuinely believe that they’re very likely to beat the overwhelming odds against them?

Again, this doesn’t feel right. Sure, you have to be optimistic, but you can’t be straight up delusional and continue to function at a very high level. Kara Swisher and Elon have a great exchange about this, maybe my favorite moment in any interview ever:

[KS:] What about things that are just critical of you that you don’t like? Do you think you’re particularly sensitive?

[EM:] No. Of course not. Count how many negative articles there are and how many I respond to. One percent, maybe. But the common rebuttal of journalists is, “Oh. My article’s fine. He’s just thin-skinned.” No, your article is false and you don’t want to admit it.

[KS:] Do you take criticism to heart correctly?

[EM:] Yes.

[KS:] Give me an example of something if you could.

[EM:] How do you think rockets get to orbit?

[KS:] That’s a fair point.

[EM:] Not easily. Physics is very demanding. If you get it wrong, the rocket will blow up. Cars are very demanding. If you get it wrong, a car won’t work. Truth in engineering and science is extremely important.

[KS:] Right. And therefore?

[EM:] I have a strong interest in the truth.


If founders aren’t liars or delusional, what could explain their seemingly irrational optimism?

Rather than general dishonesty, my theory is that founders neglect one kind of reasoning very specifically. The same kind most rationalists are obsessed with: taking the outside view.

I’m using “outside view” as a general term for meta-level thinking, consulting base rates, or using Bayesian epistemology. Basically, it means not trusting your first-order estimates too much, looking around to see whether those estimates are justified, and reasoning “from behind the veil”. As Inadequate Equilibria describes it:

Modest epistemology doesn’t need to reflect a skepticism about causal models as such. It can manifest instead as a wariness about putting weight down on one’s own causal models, as opposed to others’…

If we were fully rational (and fully honest), then we would always eventually reach consensus on questions of fact. To become more rational, then, shouldn’t we set aside our claims to special knowledge or insight and modestly profess that, really, we’re all in the same boat? [2]

Here’s a more concrete example: A rationalist has a good startup idea, so they set out to calculate expected value. YC’s acceptance rate is something like 1%, and even within YC companies, only 1% of them will ever be worth $1 billion. So your odds of actually having an exit of that magnitude are 10,000 to 1, and then you’re diluted down to 10% ownership and taxed at around 50%. Of course, there are exits under and above a billion, but back-of-the-napkin, you’re looking at an expected $5,000 for 10+ years of work so grueling that even successful founders describe it as “eating glass and staring at the abyss”. [3]
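Here’s that back-of-the-napkin calculation spelled out (a sketch using only the rough figures above; the 10% post-dilution stake and ~50% tax rate are the same guesses, not data):

```python
# Expected value of founding a startup, using the rough figures above.
p_accept = 0.01                  # ~1% YC acceptance rate
p_unicorn = 0.01                 # ~1% of YC companies reach $1B
exit_value = 1_000_000_000

p_success = p_accept * p_unicorn         # 1 in 10,000
founder_take = exit_value * 0.10 * 0.50  # 10% ownership, ~50% tax

expected_value = p_success * founder_take
print(f"odds: 1 in {1 / p_success:,.0f}")         # odds: 1 in 10,000
print(f"expected value: ${expected_value:,.0f}")  # expected value: $5,000
```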

This is so deeply ingrained in my head as the rational way to think that it took me a long time to realize other people just fundamentally don’t approach problems this way. I would venture to guess that the normal train of thought is closer to: “Most startups fail, but that’s because their ideas are bad. Since my idea is very good, I’ll neglect the base rates. I am very special.”

Does that sound mean? It shouldn’t. There’s nothing wrong with thinking that you’re special. It’s not a moral belief, or a claim to entitlement. It’s just the understanding that you are not a median member of the general population, so base rates about “anyone who has ever applied to YC” don’t apply.

The rationalist sees the 1% acceptance rate and gets intimidated. Normal people see that applying to YC explicitly does not require a business plan, incorporation, existing revenue, or an introduction, and understand that any idiot with a couple hours can fill out a web form. Accordingly, they totally ignore the base rate.

That covers the acceptance part of starting a company, but much more important is actually coming up with an idea you believe in. Rationalists tend to accept the Efficient Markets Hypothesis. They look at an industry, think “what are the odds I know more than people who have done this for a decade?”, and assume any seeming inefficiency is just a Chesterton’s Fence.

That’s not what normal people do at all. Normal people look at an industry, see a gross inefficiency staring them in the face, and think “wow, that’s grossly inefficient!”

And then sometimes, they even set out to solve it.


[1] If the median rationalist is now 30, and Yudkowsky started writing in 2007, was his audience mostly teenagers?

[2] I’m not exaggerating. In fact, this is a massive oversimplification of the estimates of startup success rationalists actually put together.

[3] To be fair, Yudkowsky is specifically attempting to correct against modest epistemology, concluding with an exhortation to not take the outside view so much and instead “spend most of your time thinking about the object level”. To be clear, this is not his solution for all humans, nor his model of perfect rationality. It’s targeted specifically at the kinds of people who read this book and who he believes are a) disproportionately likely to overvalue the meta level and b) disproportionately likely to have good object level beliefs.

Revising my Views on the Impact of Teachers

In No One is Even Trying, I wrote that Grant Sanderson’s YouTube channel 3B1B has 161 million views. Based on how YouTube counts “views”, that’s between 1.3 and 21.4 million hours of learning.
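To show where those bounds come from (my reconstruction, not Sanderson’s numbers: YouTube counts a “view” at roughly 30 seconds of watch time, and the upper bound implied by 21.4 million hours works out to about 8 minutes per view, i.e. a full watch of a typical video):

```python
views = 161_000_000

# Lower bound: a "view" only guarantees ~30 seconds of watch time.
lower_hours = views * 30 / 3600      # ~1.3 million hours

# Upper bound: assume every view is a full ~8-minute watch. The 8 minutes
# is backed out from the 21.4M-hour figure above, not an official statistic.
upper_hours = views * 8 * 60 / 3600  # ~21.5 million hours

print(f"{lower_hours / 1e6:.1f}M to {upper_hours / 1e6:.1f}M learning hours")
```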

This is incredible to me. If you asked me last night who my hero is, I would have said Grant Sanderson. If you asked me what’s wrong with the world, I would have said that there aren’t more people like him.

But maybe there are.

A regular teacher, teaching 8am–3pm, 180 days a year, to a class of 20, is producing 25,000 learning hours a year! Over a 50-year career, that’s 1.3 million, the same as Sanderson’s lower bound, though in 10x as much time.
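The arithmetic, spelled out (7 classroom hours a day, with the same 30-second “view” conversion as above):

```python
hours_per_day = 7      # 8am to 3pm
days_per_year = 180
class_size = 20
career_years = 50

hours_per_year = hours_per_day * days_per_year * class_size  # 25,200
career_hours = hours_per_year * career_years                 # 1.26 million

# At 30 seconds per "view", that career is ~151 million view-equivalents.
view_equivalents = career_hours * 3600 / 30

print(f"{hours_per_year:,} learning hours/year")
print(f"{career_hours / 1e6:.2f}M over a career, ~{view_equivalents / 1e6:.0f}M 'views'")
```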

It’s even more surprising to compare a teacher with Khan Academy. Depending on how you count “views”, they’re at 15 million to 120 million learning hours. But they have 600 employees, and they’ve been around for 12 years. So that’s best case 17,000, worst case 2,000 learning hours per employee per year, compared to a regular teacher’s 25,000.

That’s a little unfair: Khan Academy hasn’t had 600 employees for the full 12 years. But if we assume linear growth, they’ve averaged 300 employees, which raises the worst case to 4,000 learning hours per employee per year (and the best case to 33,000).
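Spelling out the per-employee comparison (same sketch, same numbers):

```python
khan_hours_low = 15_000_000    # low estimate of total learning hours
khan_hours_high = 120_000_000  # high estimate
years = 12

for label, employees in [("600 employees throughout", 600),
                         ("linear growth, ~300 average", 300)]:
    employee_years = employees * years
    low = khan_hours_low / employee_years
    high = khan_hours_high / employee_years
    print(f"{label}: {low:,.0f} to {high:,.0f} hours/employee/year")

# 600 employees throughout: 2,083 to 16,667 hours/employee/year
# linear growth, ~300 average: 4,167 to 33,333 hours/employee/year
# versus a regular teacher's ~25,200
```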

To be clear, the point is not that Khan Academy is bad (they have way more than lectures, and they’re available to anyone with an internet connection for free). The point is that seemingly non-scalable things like teaching can still multiply out to surprisingly large impact.

Since view counts on YouTube are legible and quantified, it’s really easy to see 100,000,000 and feel really impressed. But counting a view as 30 seconds, a teacher with 20 students and a 50-year career could hit 1.3 million learning hours, the equivalent of 151 million “views”!

Until now, I didn’t have that much respect for teachers. I understood that they’re probably good people, and a lot of them teach as a labor of love, but I didn’t appreciate their impact.

In fact, teaching has been my go-to example for illustrating how effective altruism differs from layperson opinions. Teaching seems like a great job, but if you think about it through an EA lens, it’s just not very impactful. 80,000 Hours has very negative reviews of teaching, and another article listing reasons not to go into education.

Nothing I’ve learned recently invalidates those articles. And yet, it’s hard to reconcile my belief in the seeming inefficacy of teaching jobs with my reverence for Sanderson. As always, one man’s modus ponens is another man’s modus tollens. I can have whichever belief I want, but not both of them at once. Either teachers are heroes, or Sanderson is merely moderately praiseworthy.

To drive this point further, consider the following scenario:

You’re driving through a small town, and while stopping for lunch, hear rumors of a teacher who delivers unusually engaging lectures. As someone interested in pedagogy and social impact at scale, you decide to go check it out in person, and sure enough, the teacher’s amazing!

What would your first thought be? If you’re like me, your first conscious reaction would be “we’ve got to get this guy on YouTube!” You see it as a horrendous waste of potential that this brilliant educator is stuck in a small town when he could be on the internet, delivering content to the masses at scale.

So you talk to him, set up a small film crew, convince him to take time off teaching to record YouTube lectures, and sure enough it’s a hit! Within your first few years, he’s up to 10 million views.

And yet, it turns out that intuition is totally wrong. He was already clocking in “10 million views” every few years. Of course, there are benefits to being online: the content is recorded, the students can be anywhere in the world, and you can go back if you missed a section. But none of that feels sufficient to justify my initial reaction.

Maybe I’m just a shameless technophile retroactively justifying my beliefs, but I think there’s still a good reason to revere Sanderson.

I love 3B1B videos because they’re creative and engaging in a way that math never was in school. It genuinely feels like he’s broken through whatever obfuscation prevents kids from understanding math, and discovered better ways of explaining key concepts. And because math is so fundamental to reasoning, this is more than a good educational channel. It feels almost like a leap forward for civilization.

But still, that’s not a metric effective altruism cares about. Maybe the real modus tollens is to abandon the value system.