Isolated Demands for Rigour in New Optimism

I leave on vacation for a week, and you all go wild. Apparently, The Great Stagnation ended while I was away, and we are now celebrating a brave new era of progress.

Graciously compiled by Caleb Watney, we have:

This recent wave of anti-stagnation writing consists largely of anecdotes, is supported by scant evidence, and shows no sign of serious thought.

The problem is not even that the ideas are wrong.

The problem is the blatantly imbalanced and isolated demands for rigour.

Three years ago, to be convinced that there was actually a Great Stagnation, we required:

And of course, The Great Stagnation itself, a book length treatment of the question.

These were not lukewarm proposals. Bloom et al conclude “Our robust finding is that research productivity is falling sharply everywhere we look.” Collison and Nielsen write: “the evidence is that science has slowed enormously per dollar or hour spent”. And from Cowen and Southwood:

To sum up the basic conclusions of this paper, there is good and also wide-ranging evidence that the rate of scientific progress has indeed slowed down. In the disparate and partially independent areas of productivity growth, total factor productivity, GDP growth, patent measures, researcher productivity, crop yields, life expectancy, and Moore’s Law, we have found support for this claim.

In contrast, each of the new anti-stagnation posts consists largely of one-off innovations that the authors happen to think are cool. There is no evidence that they are indicative of more generalized trends. Noah Smith leads with a meme comparing Juicero to SpaceX Starship; Caleb throws out a heartwarming video of a self-driving Tesla.

I don’t think it’s controversial to say that the pro-stagnation evidence was of dramatically higher caliber.

But today, all it takes is some guy tweeting out a few bullet points, and everyone loses their minds. Seriously, how credulous are you?

Has the Evidence Shifted?

If you actually care about overturning the stagnation hypothesis, start with the papers that proved it in the first place.

Is science accelerating once again? Are we in a new era of progress? If so, it should show up in the data.

In 2017, Bloom et al concluded that ideas were getting harder to find on the basis of R&D data from semiconductor companies. By inferring headcount from R&D spending, they produced this chart:

What’s changed in the 3 years since?

I don’t have access to Bloom’s data provider, but I was able to grab Intel’s recent R&D spending from their 10-K filings, and append it onto Bloom’s historical data:
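The splice itself is mundane. Here’s a minimal sketch, assuming you have the two series as CSVs (the filenames and the wage series are stand-ins of mine; the deflation step mirrors Bloom et al’s “effective researchers” construction):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Stand-in filenames: Bloom et al's historical Intel series, plus recent
# R&D figures hand-copied from Intel's 10-K filings. Columns: year, rd_spend.
bloom = pd.read_csv("bloom_intel_rd.csv")
recent = pd.read_csv("intel_10k_rd.csv")
rd = pd.concat([bloom, recent]).drop_duplicates("year").sort_values("year")

# Bloom et al deflate nominal R&D spending by a high-skilled nominal wage
# to proxy the "effective number of researchers" behind each chip generation.
wage = pd.read_csv("highskill_wage_index.csv")  # columns: year, wage
rd = rd.merge(wage, on="year")
rd["effective_researchers"] = rd["rd_spend"] / rd["wage"]

# On a log scale, a straight line means research effort is still growing
# exponentially just to sustain the same rate of transistor progress.
plt.semilogy(rd["year"], rd["effective_researchers"])
plt.ylabel("Effective researchers (R&D spend / wage)")
plt.show()
```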

It does actually look like costs might be slowing! Hurray for progress, and hurray for the end of stagnation!

But wait a minute, Intel is no longer the most Moore’s Law-relevant company. Their 7nm process was delayed to 2022, and they no longer lead the pack.

Instead, TSMC is now one of only two fabs (the other being Samsung) able to keep up with Moore’s Law. (For what it’s worth, they also manufactured the Apple M1 chip.) This is their R&D data from Bloom, with the last few years added.

Data here, collected from TSMC SEC Filings.

Costs continue to rise exponentially, with no clear break in trend since 2017. If you believed the Bloom et al paper when it was published three years ago, you have no legitimate reason to turn your back on it now.

What about the other fields Cowen/Southwood cite as proof of stagnation?

Here’s their data on life expectancy up to 2016:

As you can see, there’s a plateau starting around the early 2000s, with expectancy actually dropping off in the most recent data. When you all saw this data in 2019, you seemed pretty darn convinced that science was slowing down. Upon reading the monograph in its entirety, Scott Alexander wrote:

I had previously argued technological progress wasn’t slowing down; based on the work of Tyler Cowen and Ben Southwood I now think it is; my previous position was mistaken.

Okay, so the data up until 2016 was convincing. What’s happened since then? Here’s life expectancy again, extended to include data up to 2020:

So sure, there is a slight uptick, but it is basically still a plateau, with growth far below historical levels.
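Don’t take my word for it; the extension is trivial to reproduce. A sketch, with stand-in filenames for the Cowen/Southwood series and the CDC’s more recent estimates:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Stand-in filenames; columns in both: year, life_expectancy.
old = pd.read_csv("cowen_southwood_life_expectancy.csv")  # through 2016
new = pd.read_csv("cdc_life_expectancy_recent.csv")       # 2017 onward
le = pd.concat([old, new]).drop_duplicates("year").sort_values("year")

plt.plot(le["year"], le["life_expectancy"])
plt.ylabel("US life expectancy at birth (years)")
plt.show()
```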

If you believed in stagnation when the paper first came out, you had better continue believing in it now.

Is the New Evidence Exceptionally Compelling?

So fine, none of those factors have changed. But surely these new discoveries still constitute categorical shifts in our trajectory?

Let’s take a closer look. Aggregated across the aforementioned posts plus more from Tyler Cowen and Eli Dourado, here is some of the evidence presented in favor of progress (full list in the appendix):

  1. Affordable Solar Power
  2. Crypto Going Mainstream + Ethereum 2.0
  3. China Announces Quantum Supremacy

1. Affordable Solar Power
Here’s the cost per watt for solar power:

Can you show me where The Great Stagnation was? Maybe there was one from 1990 to 2006, but you can’t reasonably look at this graph and infer that 2006-2017 was a bad decade.

And yet somehow, the Stagnation Hypothesis was convincing in 2012 as the price dropped from $1.68/W to $0.88/W. And it remained convincing in 2017 as prices dropped further from $0.55/W to $0.46/W.

What exactly about the last year feels compelling in a way that previous progress was not?

Noah Smith writes:

…the price drops for solar and wind over the last decade have just been nothing short of revolutionary.

Caleb Watney similarly writes:

the almost ho-hum daily progress in solar, wind, and battery technology where prices have fallen 90, 70, and 87 percent over the last ten years

So both are citing work over the last 10 years, rather than more recent progress. What’s going on here?

How is this “Optimism for the 2020s”, as Noah writes, or a “Crack in the Great Stagnation”, as Watney suggests, if it’s been going on for the last 10 years?
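One way to see this: convert those decade-long declines into the steady annual rate that would produce them. A quick back-of-the-envelope, using only the percentages quoted above:

```python
# If prices fall by the same fraction every year, then after n years
# final/initial = (1 - annual) ** n. Invert to get the implied annual rate.
for name, decade_drop in [("solar", 0.90), ("wind", 0.70), ("batteries", 0.87)]:
    annual = 1 - (1 - decade_drop) ** (1 / 10)
    print(f"{name}: {decade_drop:.0%} over 10 years = {annual:.1%} per year")

# solar: 90% over 10 years = 20.6% per year
# wind: 70% over 10 years = 11.3% per year
# batteries: 87% over 10 years = 18.5% per year
```

A roughly 20% price decline, every single year, is what a steady long-running trend looks like, not a sudden 2020 breakthrough.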

More likely, solar power has been making great strides on a pretty consistent basis for decades, and the only recent break is in how high-status it is to say that out loud.

2. Mainstream Crypto / Ethereum 2.0

Again, a big deal, and certainly neat to have crypto adopted by PayPal.

Except that it hasn’t actually happened yet. And besides, crypto was already adopted by Stripe, before support was subsequently shut down.

And again, I have to ask, is this a bigger deal than when Bitcoin first launched in 2009? Is it a bigger deal than Ethereum 1.0 first going live in 2015?

Here’s some actual data on adoption and usage:

Ethereum Transactions Per Day

Bitcoin Transactions Per Day
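If you want to check these yourself, daily transaction counts can be exported from the usual block-explorer chart pages; the filenames below are stand-ins for those exports:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Stand-in filenames for exported chart data; columns: date, tx_count.
for fname, label in [("btc_tx_per_day.csv", "Bitcoin"),
                     ("eth_tx_per_day.csv", "Ethereum")]:
    df = pd.read_csv(fname, parse_dates=["date"])
    plt.plot(df["date"], df["tx_count"], label=label)

# On a log scale, "crypto went mainstream in 2020" should show up as a
# kink in the curve; a straight line is just the same old growth trend.
plt.yscale("log")
plt.ylabel("Transactions per day")
plt.legend()
plt.show()
```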

Again, please explain to me how 2020 was the year crypto went mainstream. Please explain why you were unconvinced by crypto as a sign of progress in 2017, but have since changed your mind.

3. Quantum Supremacy

  • 2011: D-Wave sells a quantum computer to Lockheed Martin, you all say there is a great stagnation
  • 2019: Google achieves quantum supremacy, you all say there is a great stagnation
  • 2020: China achieves quantum supremacy (but this time with photonics), you all lose your minds

Here are the Scott Aaronson posts for Google and for China. He explains that the latter:

…represents the first demonstration that quantum supremacy is possible via photonics. Finally, as the authors point out, the new experiment has one big technical advantage compared to Google’s: namely, many more possible output states (~10^30 of them, rather than a mere ~9 quadrillion). This makes it infeasible to calculate the whole probability distribution over outputs and store it on a gigantic hard disk (after which one could easily generate as many samples as one wanted), which is what IBM proposed doing in its response to Google’s announcement.

So yes, it is a new achievement. Is it a demonstration of progress in a way that the Google achievement was not? I don’t understand enough about quantum computing to say, but critically, neither do any of you.

My point isn’t to celebrate ignorance. It’s to say that the 2019 announcement was clearly a categorical leap forward. Somehow, that was not enough to convince you all of scientific progress.

This year, we have another advance. Because there is not a clear index here, and because the 2019 achievement was so tremendous, it’s difficult to say whether or not this is a break in trend.

Again, unless you have a strong belief otherwise, you should either take both achievements as a sign of progress, or neither. You cannot merely pick and choose what to celebrate based on what rhetoric is trendy.

Conclusion

I say this not because I am “against progress”, but because I am very much in your corner of the internet. I also get all of my news from Marginal Revolution, and consider myself part of the broader Progress Studies community.

And so speaking as an insider, I am saying that we have to do better.

The recent trend of rhetoric against stagnation is not founded in evidence or driven by data. It is pure mood affiliation. You bought into the stagnation hypothesis when it was hip and contrarian, and now buy into the optimism hypothesis to be even more hip and counter-contrarian. At no point did you stop to look at the data or actually think for yourself.

To be clear, none of this is to say that The Great Stagnation is not over! Maybe yes, maybe no, maybe the whole thing was an illusion and reality is driven by the gods of straight lines.

The same month Cowen and Southwood published their monograph on stagnation, Alexey Guzey wrote:

science is not slowing down… I think that the perception of stagnation in science – and in biology specifically – is basically fake news, driven by technological hedonic treadmill and nostalgia. We rapidly adapt to technological advances – however big they are – and we always idealize the past – however terrible it was.

If anyone now has the moral authority to speak on optimism (or to identify as a contrarian with a straight face), it is him.

And if you sincerely believe that we are in a new era of progress, then argue for it rigorously! Show it in the data. Revisit the papers that were so convincing to you a year ago, and go refute them directly.

Maybe then I will be happy to celebrate alongside you.


TLDR
Go read this SMBC comic. Then read this SlateStarCodex post. Then go look at yourself in the mirror.


Appendix: Notes on Herd Mentality and Mood Affiliation
As much as I hesitate to make essentialist remarks, I do believe that there are fundamentally two types of people. Those who see a bunch of people agree, and think “wow, everyone believes this, it must be correct!”

And then those who think “hmm, suspicious”.

It’s not that I’m positing any kind of coordinated conspiracy, it’s that I don’t even have to. Because humans are basically apes and prone to shallow mimicry, it only takes one very prominent ape to have an opinion, and everyone else will rush to share it.

More specifically, additional opinion pieces don’t qualify as additional evidence for your cause unless they actually make different points than each other. All four of those articles cite the Moderna vaccine as evidence. That’s fair, it is a strong piece of evidence. But as a reader, you get to count it exactly once.

To make matters worse, many of the pieces cite each other. Noah reads Caleb, who reads Tyler. If you trace the intellectual lineage, it quickly becomes clear that approximately one person has any original ideas, and everyone else is just piling on.

The Wisdom of Crowds only functions when each individual is capable of thinking and acting independently. Absent this vital condition, it is just madness.

Appendix: All Innovations Cited in Favor of Progress
Aggregated across all sources listed above, here are all the innovations:

  • mRNA Vaccine
  • Apple M1 Chip
  • SpaceX Launch / SpaceX Starship (delayed until Monday)
  • GPT-3 / AI
  • Electric Cars
  • Mainstream Crypto / Ethereum 2.0
  • Operation Warp Speed
  • Affordable Solar Power / Green Energy
  • The Eggplant
  • Remote Work
  • V-Shaped Recovery
  • Tons of cool companies IPO’ing and tons more getting started
  • DeepMind Protein Folding
  • Lab-Grown Meat Approval
  • Sight Restored in Mice

I’ve done a few already; showing that the rest are not an obvious departure from existing trends is left as an exercise for the reader.

To be clear, the question is not “is this innovation very cool”, but rather “does this innovation depart from the previous decade’s trend of progress”.

For example:

  • The mouse study is cool, but is this a bigger deal than the 2012 discovery of CRISPR-Cas9 gene editing?
  • GPT-3 is very cool, but we’ve been on a sharp trajectory of progress in ML since 2012. GANs have seen enormous progress every year since 2014, as have many other tasks.
  • The Starship hop is very impressive. But is this a bigger leap forward than in 2008 when SpaceX became the first private company to ever launch, orbit and recover a spacecraft? Is it bigger than in 2015 when they achieved the first vertical landing of an orbital-class booster, or in 2017 when they re-flew a recovered booster for the first time?
  • Electric cars have seen substantial progress with more competition from mainstream automakers, but surely the biggest breakthroughs were the General Motors EV1 in 1996, the Tesla Roadster in 2008 and perhaps the Tesla Model 3 in 2017? Here’s some actual data, if you care at all about that kind of thing. It demonstrates steady progress, with no clear inflection point or recent change of trajectory.

About Applied Divinity Studies

Applied Divinity Studies is going on hiatus, effective immediately.

Some people ask why he’s screaming on “I Am a God.” It’s not like a James Brown scream — it’s a real scream of terror. It makes my hair stand on end. He knows they could turn on him in two seconds.

Lou Reed

Applied Divinity Studies began 3 months ago, and so far, it’s going well:

Growth is generally consistent, with two obvious anomalies.

Can you guess what they are? I’ll give you a hint. It is not “spent a lot of time and effort writing a very good post”.

It’s getting featured by Tyler Cowen on Marginal Revolution.

Okay, but maybe subscribers aren’t my objective function. Maybe what I really care about is making money. Here’s another chart of cumulative revenue.

The vertical line is getting an Emergent Ventures grant. Also from Tyler Cowen.

From this analysis, I conclude that there is no point in joining Twitter, writing clickbait [1], or anything else. I am free.

Having Money as Existential Crisis

I’m kidding, but only a little bit.

Obviously getting featured on MR or winning an EV grant produces its own incentives, but they’re largely ones I already agree with.

I suppose the expectation is that I’ll just save the money or spend it on rent. The implicit assumption being that I have a burn rate, this defines my runway, and money is used to extend the time I have to continue to do what I’m already doing.

But that’s crazy. This isn’t how any ambitious person spends money, nor is it a path to long term growth. Surely there’s some way I can spend this money to actually do better work, earn more money, and grow exponentially?

As Peter Thiel once said:

In a definite world, money is a means to an end. Because there are specific things you want to do with money. In an indefinite world, you have no idea what to do with money, and so money simply becomes an end in itself. Which seems always a little bit perverse, you just accumulate money, and you have no idea what to do with it, that seems like sort of a crazy thing to do.

I’m also reminded of Auren Hoffman’s post about the notable lack of ambition in venture capital. Firms seem largely incapable of deploying their cash reserves to win a competitive advantage. A large firm might hire a few more analysts, but mostly they just take profits, pay them out to partners, and call it a day.

This is held up in contrast to startups who are encouraged to aggressively reinvest their capital as quickly as possible to accelerate growth. [2]

So surely there’s something I can do with the money other than give it to my landlord?

This ends up being a fundamental question about the purpose of this blog, and about my life more generally.

If I care about getting more page views, I could spend the money on ads. If I’m specifically targeting high-status people, I could advertise in The Diff.

If I care about writing output, I could hire a research assistant or an editor.

If I wanted to be a snappier writer, I could pay $150 for David Perell’s How to Crush it on Twitter or $6000 for David Perell’s Write of Passage or $2000 for OnDeck’s Writer Fellowship.

The problem is that I don’t actually want any of this.

Startups often claim to be mission driven, meaning that they exist to fulfil some external purpose. In stark contrast, my inability to spend this money productively suggests that I have no external goal. [3] As it turns out, I am writing this blog entirely for my own pleasure.

Be Nice Until You Can Coordinate Meanness

Either you toil in obscurity until you die, or you become popular enough to get doxxed by the New York Times.

If you’re incredibly lucky, maybe you become popular enough to sell out, work on Substack and become a boring person catering to an analytics-mediated audience. [4]

This isn’t really a complaint about money though, the point is just that if you try to do anything important, people will try to kill you.

If you try to introduce any minimal amount of nuance into a conversation, they’ll slander you as a pedophile apologist. If you invent something great, they’ll sue you until you die of exhaustion. If you try to open access to scientific knowledge, they’ll sue you until you kill yourself.

If I have persecution anxiety, maybe it’s because they’ve persecuted all my heroes. [6]

So I don’t really see the point in trying to be the second coming of SlateStarCodex. I’m not a coward, but I’m also not a martyr.

As Scott once wrote, it is not worth acting unilaterally. Instead, you should act in the shadows, build support for your cause, play respectability politics, and maybe a few generations later you can declare victory.

But wait, isn’t now the time? We have coordinated! Scott’s pseudo-death was a rallying cry, and now the community is united unequivocally in favor of weird internet bloggers. Not only are we aligned, we have powerful voices behind us.

And yet, the attempted boycott has had no discernible effect whatsoever. Scott still lurks in the shadows. Substack tried to fund him (kudos), but whatever the amount, he’s decided it’s not worth it. Apparently, his view from the mount of success has been horrific, and he’s climbing back down.

Even if the undertaking was risky, it could be worth it for sufficient upside. Academic researchers take on risk, but if successful, their work is widely lauded. Startup founders have a high failure rate, but get rich if they succeed.

For weird internet bloggers, there is no throne.

The Unrivaled Joy of Scholarship and Unrivaled Pain of Existence

There’s a wonderful moment from Agnes Callard’s interview with Tyler:

COWEN: …I’m skeptical, but let’s just say I were to live forever. How bored would I end up, and how do you think about this question?

CALLARD: [laughs] I think it depends on how good of a person you are.

COWEN: And the good people are more or less bored?

CALLARD: Oh, they’re less bored… By bad, I don’t just mean sort of, let’s say, cruel to people or unjust. I also mean not attuned to things of eternal significance.

…if we’re talking about eternity, or even thousands of years, you’d better find something to occupy you that is really riveting in the way that I think only eternal things are.

I think that what you’re really asking is something like, “Could I be a god?” And I think, “Well, if you became godlike, you could, and then it would be OK.” [emphasis mine]

I’m not concerned with biological immortality, but even this life is long enough to make me worry. I am already living in that eternity. I can summon any food in the world to arrive at my doorstep, watch any movie, listen to any music. Upper-middle-class Americans are already as gods; we just aren’t godlike.

And of course, this is accentuated by social isolation. [7] The vast majority of my time is spent either asleep, or at my computer. I am already in the eternal deathless state Callard describes.

Having lived there for almost a year now, I’ve found that all I really want is to participate in the unrivaled joy of scholarship without the well-documented burdens of institutional academia. To the extent that I have a mission at all, it is merely to prove that this way of living is even viable.

Unfortunately, there’s a danger to this kind of triviality as well.

In What is an Explanation?, I give a two-sentence summary of meta-rationality: language is subjective, but not arbitrarily so. If you ask a question, first clarify what kind of answer would even be satisfying to you.

The purpose-dependence of truth also happens to be the central failure of internet blogging.

If a “satisfying answer” is just the hottest take or most controversial justifiable opinion, you won’t do good work. A real martyr needs a cause, and dying without one means an afterlife excluded from intellectual Valhalla.

In my defense, my own valuelessness is largely a product of our collective lack of compelling narratives. Everything that was once great is now problematic, everything once eternal now lacking in foundations. As far as I can tell, there are really only three compelling visions for the future:

  • Various Authoritarian Dystopias: Right-wing fascism, surveillance states, left-wing censorship. Things are arbitrarily bad.
  • Left-Wing European Environmentalism: We all bike and recycle. Things are basically the same but with lower consumption.
  • Various Retro-Futuristic Utopias: We have space exploration (but why?), flying cars (but to go where?), higher GDP (to consume what exactly?) and the iPhone 12 (but again, why?). Things are better, but not in a way that matters.

For some reason, we seem to have confused “compelling vision” with “vague overconfident manifestos”, but I don’t think that’s how this works at all. I don’t want someone to tell me what 2030 could look like.

I just want the US to be able to build trains, approve the vaccine, not have the world’s highest incarceration rate, allow high-skilled immigrants to get indefinite work visas, and do more to prevent a Uyghur genocide.

So the joy of scholarship is a good start, but it can’t be the whole thing. As Elon should have said, “Our existence cannot just be about reasons to live. There need to be solutions to one miserable problem after another.”

Hiatus

Looking back, I’m happy with what I’ve written, but it’s also clear to me that the important work lies ahead. I need more time to read, do proper research, and contribute actual knowledge instead of quick takes.

That might sound self-deprecating, but it shouldn’t imply that any of this has been a waste. It is only thanks to these last 3 months and their success that I have the confidence, willpower and practice to undertake more daunting projects.

Thanks for all your support, and I hope you’ll look forward to reading Applied Divinity Studies in 2021.


[1] You might argue that posts only get featured on MR because Tyler hears about them, and so it is still worth growing virally. This is empirically true (Agnes Callard tweeted Beware the Casual Polymath before it was featured on MR), but there’s no reason it has to be. Tyler’s email address is public, and it is still free for you or me to inject content directly into his brain.

For what it’s worth, there is some organic growth, but I have no idea how much. Since the “see mail => get annoyed” loop is faster than the “see mail => read mail => forward mail” loop, the short-term effect of every email I send is a net loss in subscribers.

[2] Though if you believe Chamath, “Startups spend almost 40 cents of every VC dollar on Google, Facebook, and Amazon”, supposedly in the form of ad spend or AWS costs. And then you have to include the cost of user acquisition through “competitive pricing”, i.e. selling at a loss.

[3] There are basically two reasons someone might start a blog, neither of them good. Either you’re so arrogant that you feel your every thought must be shared with the world, or you just don’t value your time at all, and there’s no next best alternative.

As far as I can tell, this is why people use Substack. Even if you don’t monetize, writing on Substack at least allows you to pretend that you’re doing it for the sake of networking, improving your personal brand, or one day making money. And so these selfish motivations end up being far more trustworthy than the motivations of a blogger, and Substack ends up being a higher-status option.

[4] The most successful writer I’m actually interested in is Gwern, who makes $1300/month on Patreon. That’s fine, but he’s much more popular and prolific, so this feels like an upper bound. At the rate of $15,600 annually, I would be better off working a corporate tech job for 2 years and living off interest. (EDIT: someone else whose writing I respect tells me they make significantly more on Patreon, and Gwern is probably just not trying very hard.)

[6] Or maybe I’m post-hoc attracted to martyrs? That’s true for Wilbur Wright, but the other three I followed pre-prosecution.

[7] This is also a good time to note that this blog would never have been possible before COVID. Without social isolation, it would have been too hard for me to write without FOMO and too hard to embark on something as inherently silly as blogging without immediate and constant validation.

More practically, starting a blog anonymously would have meant lying to all of my friends pretty much constantly. It’s easy to say I’m “just hanging out” during the pandemic. Much harder to constantly make up stories about what I’ve been doing all weekend.

Every Post I Couldn't Write This Year

Here’s everything I wish I could write, but never got to, or never figured out how to express properly.

Why don’t I just save the drafts and write these later? I could, but I would rather start fresh with a clean slate.

That also means that these are raw notes, and below my regular bar for quality. Read at your own risk:

  • The Use of Patent Data in Meta-Science
  • Notes on Penguin Highway
  • The Unreasonable Effectiveness of Starting Over
  • Learned Autism
  • The Cost of Criticism

Patent Data

A bunch of papers in the meta-science/economics of research/theory of innovation space cite patent data as a metric for good research.

As far as I can tell, the US patent office is just total garbage. Patent trolls run amok, anyone can get a patent for anything if they have a lawyer and some money, and patents are constantly awarded for totally ridiculous things. Patents serve more as an excuse to sue other people than as an actual working mechanism for publicizing research and getting paid royalties when it’s monetized.

I realize that the patent data is just a proxy, but do we have any reason to think it’s a good one? Or even remotely acceptable? Is everyone already aware of this but choosing to ignore it?

Related.

Review of Penguin Highway

Penguin Highway is the best movie about independent research. It is also possibly the best portrayal of autism.

The protagonist often says “I did the math yesterday, and there are still 3,888 days before I turn twenty.” This is a silly autistic savant stereotype but then he continues “By then, I’ll be 3,888 days better than I am now. I can barely imagine how great that will make me.”

This was once true for all of us, but it doesn’t feel like I’m 3,888 days better. It doesn’t even feel like I’m 365 days better than I was at the beginning of the year. What happened? How did we all become such shitty adults?

In one scene, his sister comes into his room crying that their mom is going to die. He gets up, shocked and eager to help, but she shakes her head. Their mom isn’t in any immediate danger. What his sister has realized is that mortality exists at all. That their mom will die, far off in the future, but inevitably. This was pretty much exactly my own experience at the same age. On a related note, the film also features a somewhat Parfitian view of identity.

My only complaint is that the protagonist gets sick after a single day of fasting, which is your typical 3-meal-a-day propaganda. But at least he runs the self-experiment.

Because we live in an era of incredible abundance, this movie is just $3 to rent on YouTube.

The Unreasonable Effectiveness of Starting Over

US GDP growth is around 2% a year, with total GDP at $20 trillion in 2018. We’ve made immense progress.

What follows is a mathematical tautology, but it’s still worth saying out loud.

Let’s say you were able to restart a new country, build institutions from scratch, learn from our mistakes, and do better the next time around. Let’s say you start with just 1% of current US GDP, but are able to increase the growth rate to 3%.

Under these conditions, your new country will overtake today’s US GDP in about 156 years. (If the US keeps compounding at 2% in the meantime, the catch-up takes closer to 470 years, but the point stands.)

What if you’re able to double growth to 4%? 117 years. 8%? Just 60.
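For anyone checking the arithmetic, here is a minimal sketch. The headline figures hold US GDP fixed at today’s level; the second column lets the incumbent keep growing:

```python
import math

def catch_up_years(start_frac, new_rate, incumbent_rate=0.0):
    # Solve start_frac * (1 + new_rate)**t = (1 + incumbent_rate)**t for t.
    return math.log(1 / start_frac) / (
        math.log(1 + new_rate) - math.log(1 + incumbent_rate))

for g in (0.03, 0.04, 0.08):
    frozen = catch_up_years(0.01, g)         # vs. today's US GDP, held fixed
    moving = catch_up_years(0.01, g, 0.02)   # vs. a US still growing at 2%
    print(f"{g:.0%} growth: ~{frozen:.0f} years (frozen US), "
          f"~{moving:.0f} years (US growing at 2%)")

# 3% growth: ~156 years (frozen US), ~472 years (US growing at 2%)
# 4% growth: ~117 years (frozen US), ~237 years (US growing at 2%)
# 8% growth: ~60 years (frozen US), ~81 years (US growing at 2%)
```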

Is it really that hard to believe that we could grow at 8% if we were willing to abandon 99% of the economy? I’m not suggesting there’s any specific way to do this, but there are some obvious inefficiencies, and plenty of local maxima.

Take this blog for example. I started anonymously, which meant abandoning whatever status and credibility I previously had. It was painful to give that up, and a difficult decision at the time. But now subscribers are approximately doubling every month, which works out to 409,600% annualized growth.

Obviously this is unsustainable, but I have no doubt I’ll quickly overtake what I could have had with a higher starting point (real-life status) but lower growth rate (unwillingness to write on contentious topics).

What local maxima are you trapped in? Do you hate your career, but feel like you’re in too deep? You’ll advance faster if you actually care about what you’re doing. Do you feel unfulfilled by your social group, but worry that if you move on you won’t make new friends? You’ll form much deeper connections if you actually care about the people you hang out with.

In general, humans have horrible intuition for exponential growth. You are likely underestimating the value of making the leap and starting over.

Learned Autistic Deficiencies

Mary Wollstonecraft’s seminal A Vindication of the Rights of Woman broadly argues that women appear inferior only because they are raised this way. I’m sure this was subversive at the time, but (hopefully) seems obvious now. If you don’t let women go to the best schools, you’ll find that very few of them end up well educated.

I wonder how much of autism is like this. In her interview with Tyler, Michelle Dawson says:

Well, there’s a huge literature in autism about how autistics judge facial expressions of emotion in other people. And what you have in the autism literature is, you haven’t only just turned autistic people into stereotypes and cartoons, you’ve done that to the typical population.

This is really at odds with the nonautism literature on facial expressions, which is much more complicated. In the autism literature, it’s assumed that you can just read people’s inner emotions and mental states. Mental states are not necessarily well defined, that it’s a simple matter, that it is sort of written all over somebody’s face, or even you can read it just from looking at a photo of their eyes.

And things are far more complicated than that in the literature, in the nonautism literature. For example, MIT — their affective computing group, Rosalind Picard did these fantastic studies showing that people smile in frustration, and those are real honest-to-goodness Ekman-type smiles. You have the whole facial action coding thing going on. Those are real, genuine smiles that people smile in frustration when they are genuinely frustrated. They don’t do it when they’re acting out frustration. And there are many other examples like that.

People smile for many different reasons, and that is acknowledged to some degree in the literature in the typical population, not in the autism literature, where things are completely simple. They’re just very caricatured and cartoonish. Now, what you find is that the typical population can decipher their way through this. They know what these facial expressions are supposed to represent, even if they don’t look like that in real life.

Autistics are — maybe because their experiences are quite complex with how people respond to them starting early in life, and I’m just wildly speculating here — but autistics are going to notice that things are more complex and uncertain than that. Again, it’s the considering more possibilities, and that will very much hamper their task performance if what you are looking for is this automatic certainty that these acted expressions are all there is, which is not accurate.

And that leads to many problems because we’re actually training autistic people to ignore the complex, real, important information in favor of the caricatured, stereotyped, simplified, probably wrong information, and we should really think about that. But that gives you an idea of looking at social deficits, thinking about how autistics process information, and also actually looking at the literature itself. [emphasis mine]

Dawson caveats this theory by noting that it’s speculative, but it certainly feels true to my experience. As I recall, early childhood education revolved entirely around a set of social rules that turned out to be totally counterproductive. For example:

  • From ages 2-6, the importance of sharing and fairness was seemingly constantly impressed upon me
  • Then at some point, adults started saying things like “life is unfair”
  • Subsequently, I spent the rest of my life very confused about norms around fairness

Similarly:

  • I used to be very bad at making eye contact
  • I was specifically taught the importance of making eye contact
  • I was later often accused of staring
  • Subsequently, I spent the rest of my life very confused about norms around eye contact

Which, seriously, is just an unbelievably difficult thing if you don’t have an existing intuition. Let’s say you’re at dinner with 4 people, telling a story that isn’t directed at any one of them in particular. Who do you look at? Do you go around and make sure you’re making eye contact with each of them at least once? Do you pick one person and stare at them the whole time? Do you glance around as if making sure that everyone is still paying attention?

I think I’m somewhat good at this, but it’s also a very conscious process. I sometimes think that if I was never taught to do it, I would have picked it up intuitively, and eventually learned how to do it “naturally” without thinking. But now that I have been taught, and now that I am thinking, it is pretty much impossible to ignore that and just subconsciously do the right thing.

And to be clear, I’m not totally against obeying arbitrary social rules. If you told me “starting tomorrow, we’ve all decided to wear hats, and if you aren’t wearing a hat, it’s like being naked in public”, I would be totally fine! I mean, it seems dumb, but it imposes pretty much zero cost on me, and I’m happy to comply.

But if you said “some hats are cool, but others are like being naked in public, and we won’t tell you which is which”, I’m just never going to wear a hat ever again for fear of picking the wrong one.

The Costs of Criticism

Writing criticism just makes me really, really unhappy. If you’re right, the world is a worse place than you thought, and it’s very unlikely that you’ll be able to change anything. Matthew Walker still has a job; most papers identified as fraudulent by Elisabeth Bik don’t get retracted.

Meanwhile, you spend your entire time terrified that you’re wrong, and going to make a fool of yourself. Or even worse, that you’re wrong but people will think you’re right, and you’ll have harmed innocent people.

Then there’s the possibility that you’re right, but your truth isn’t worth telling. Maybe Lambda School is dishonest, but if the cost of pointing that out is that fewer students get a good education and we all have to keep going to four-year colleges and accruing student debt… it’s unclear what good I’ve actually done.

In the meantime, you’re eviscerating your own credibility because you so clearly have an axe to grind. Either you have a conflict of interest, and are thus motivated to exaggerate, or you don’t and it’s even worse. If there’s nothing really in it for you in terms of upside, you’re not even a deceptive mercenary, just a crazy person with an irrational vendetta.

Note that this is all really different from friendly disagreements. Debate is important. I send drafts of my posts to friends, and they consistently eviscerate me, prompting arduous rewrites before I can finally publish. This kind of exchange makes all parties better in a way that my rant against Lambda School does not.

To be clear, none of this is to say that criticism is bad. The job of criticism is to better the world, not the criticized. Even in the absence of direct positive impact, its role is to elevate epistemic standards.

Some people read Guzey’s takedown of Why We Sleep and felt sad that some popular science was fraudulent. Instead, I felt hopeful that we still had a functioning culture of criticism, and felt more confident believing other work that had not received the same treatment.

In this sense, criticism is a prerequisite for truth. Without the ability to be cynical, our belief is incoherent.

So someone has to take on this mantle, but it won’t be me.