Base Rates on Secession

Here’s the scenario painted by The Atlantic, NYT and others:

  • On election day, Trump appears to win
  • In the following days, mail-in ballots are counted, leading to a reversal
  • Trump does not accept the result
  • All hell breaks loose

But rather than asking “is this the most contentious election ever?”, it would be better to ask “how many elections have been similarly contentious without leading to secession?”

(I use secession interchangeably with civil war, revolution and independence.)

In a year of increasingly dramatic reporting, I want to make the basic claim that historical data can generate useful intuitions about the future.

For example: Trump’s approval rating in California is 29%. By comparison, 5 past presidents have had similarly low national approval ratings, including Truman (22%), Nixon (24%), G.W. Bush (25%), Carter (28%) and G.H.W. Bush (29%). Nixon was forced to resign, but in the other cases there was no remarkable outburst. So to a first approximation, it seems unlikely that California will take any kind of momentous action.

Of course, we can always make claims about why this time is different. Perhaps there will be a big swing left if the election goes poorly. But whatever those claims are, they should be expressed as updates to the base rate.

To be clear, none of this should be taken too literally, but it at least provides a point of reference for more dramatic and speculative arguments.

US Presidential Elections
There have been 58 presidential elections in US history. Only the 1860 election led to secession.
Rate: 1/58, 1.7%

Civil War in Any Country
Stanford University Professor Stephen D. Krasner writes “There are some thirty ongoing civil wars”. Wikipedia lists 32 ongoing civil wars as of April 2020. There are 195 countries total, giving:
Rate: 32/195, 16%

New Sovereign States since 2000
From Wikipedia, there have been 24 new sovereign states since 2000. Looking at a 4-year period, that’s 6 out of 195 countries.
Rate: 6/195, 3%

Independence Referendums since 1900
From Wikipedia, there have been 114 independence referendums since 1900. Notably, many of these came from France’s former colonies breaking away around the May 1958 Crisis, and from the breakup of the Soviet Union in 1991. Over a 4-year period we have 114/195/(120/4):
Rate: 1.9%

4 Year Time Span Since the Establishment of the 13 Colonies
Including both the American Revolution and the Civil War, and counting from Georgia’s establishment in 1732:
Rate: 2/(288/4), 2.8%

| Scenario | Rate |
| --- | --- |
| US Presidential Elections | 1.7% |
| Civil War in Any Country | 16% |
| New Sovereign States since 2000 | 3% |
| Independence Referendums since 1900 | 1.9% |
| 4-Year Time Span Since the Establishment of the 13 Colonies | 2.8% |
| Simple Average | 5.08% |
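If you want to check the arithmetic, here’s a minimal Python sketch reproducing the table. (Note that 5.08% is the average of the rounded figures; averaging the unrounded rates gives roughly 5.2%.)

```python
# Reproducing the base-rate arithmetic above.
rates = {
    "US presidential elections": 1 / 58,
    "Civil war in any country": 32 / 195,
    "New sovereign states since 2000": 6 / 195,
    "Independence referendums since 1900": 114 / 195 / (120 / 4),
    "4-year spans since the 13 colonies": 2 / (288 / 4),
}

for name, rate in rates.items():
    print(f"{name}: {rate:.1%}")

# The table's 5.08% averages the rounded figures; the unrounded
# average is ~5.2%.
average = sum(rates.values()) / len(rates)
print(f"Simple average: {average:.2%}")
```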

Many of these rates beg for further qualification. I’ve provided a non-comprehensive list of corrections in the appendices addressing sensitivity analysis, the United States as a non-generic country, and so on.

I’m not going to rehash all the object-level arguments clearly detailed elsewhere. Yes, Trump has said he won’t step down. Yes, there are logistical concerns with mail-in ballots. Yes, there is currently an even number of Supreme Court justices.

But it’s hard to reason your way from “things seem very bad” to “shit will actually hit the fan”.

Intuitions on the question appear totally split. On one hand, it’s easy to say “there are already mass protests, and this would tip the scales.” On the other hand, your uncle says he’s moving to Canada every time a Republican is elected, Texas says it’s seceding every time a Democrat is elected, and neither ever happens. Why should this year be different?

In these scenarios (high drama, limited information, bimodal outcomes), it’s useful to turn to base rates.

Appendix A: Temporal Framing

I counted Independence Referendums since 1900, but there’s no good reason to pick this date over any other.

Here are the rates we would get if we started counting at other dates. (Data and analysis at this Google Sheet.)

There’s a good bit of variance depending on when we start counting, ranging from 1.1% if we count every listed referendum, to 4.1% if we start counting right before the 1991 collapse of the Soviet Union.

There isn’t a clear advantage to any one start date. If we start counting from 1810, we’re capturing all the available data, but you might argue that the conditions allowing for civil war have changed, and it makes more sense to only count post-WWII, or post Soviet Collapse.

Working with parameterized models, you can pick a Schelling point to reduce the possibility of intentional manipulation, but when possible, it’s better to do the sensitivity analysis.

I’m lucky in that there’s really only one parameter here, so it’s easy to communicate these results as a simple bar graph and get a good intuitive feel for how much our start date matters. In more complex models, this might not be possible.
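As a sketch of what that one-parameter analysis looks like in code: the 114-referendum count since 1900 is from above, while counts for any other start year would have to come from the linked Google Sheet.

```python
# One-parameter sensitivity: how the referendum base rate changes with
# the start year. Only the 114-referendums-since-1900 figure is from
# the post; counts for other start years come from the linked data.
N_COUNTRIES = 195
END_YEAR = 2020

def four_year_rate(n_referendums, start_year):
    """Referendums per country per 4-year window since start_year."""
    n_windows = (END_YEAR - start_year) / 4
    return n_referendums / N_COUNTRIES / n_windows

print(f"{four_year_rate(114, 1900):.1%}")  # 1.9%, matching the post
```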

Appendix B: American Uniqueness

Since the future is always in some sense unprecedented, it’s not always clear which statistics to use. This is aggravated by American uniqueness. Whatever you think of American exceptionalism as a doctrine, there’s no denying that America is at least odd. Is it in the class of all countries? Democratic countries? Wealthy democratic countries? The reference class we choose will have huge implications for identifying relevant historical data.

But base rates are still good background information. Claims to uniqueness should take the form of a bayesian update, not a total disregard for priors.

Many of these base rates scream for further qualification. The 32 countries listed as engaged in civil war are generally much less wealthy and democratic than the US. But since the US is literally the world’s wealthiest country, it’s not clear where we should draw the line.

Appendix C: Fragile States Index

The 5.08% aggregate measure is heavily skewed by the 16% rate of civil wars in any country. An obvious objection to this point is that the US is not like those other countries. We’re richer, more democratic, or in some broader sense, more stable.

The Fragile States Index attempts to capture this idea, looking at corruption, political stability, economic inequality and more.

We can’t use the 2020 data, since it would already reflect the existence of ongoing civil wars. Instead, we’ll take the oldest historical data, from 2006, and join it with the list of countries whose civil wars started after 2006:

| Country (2006 FSI) | Rating (lower is better) |
| --- | --- |
| Syria | 89 |
| Mali | 75 |
| Central African Republic | 98 |
| Egypt | 90 |
| Libya | 69 |
| Ukraine | 73 |
| Yemen | 97 |
| Cameroon | 88 |
| Mozambique | 75 |
| United States (2020 FSI) | 38 |
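For what it’s worth, the join itself is a one-liner; here’s a minimal pandas sketch, with hypothetical file names standing in for the FSI and Wikipedia data:

```python
import pandas as pd

# Hypothetical file names, for illustration only. The point is to score
# countries with 2006 data, so the index can't "know about" the civil
# wars it's being tested against.
fsi_2006 = pd.read_csv("fsi_2006.csv")          # columns: country, fsi
wars = pd.read_csv("civil_wars_post_2006.csv")  # column: country

joined = fsi_2006.merge(wars, on="country", how="inner")
print(joined.sort_values("fsi"))
```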

We should still take into account the dangers listed in Appendix A, but since FSI is actually a sensible leading indicator, it seems reasonable to suggest that countries with low FSI scores really are very unlikely to start a civil war, and that the base rate for the population of countries comparable to the US is effectively 0%.

Appendix D: Predictive Theories

It’s tempting to claim a different reference class by drawing up a broad theory, such as: “No western democratic country has had a civil war since WWI”. Except that isn’t actually true. Manuel Azaña was elected Prime Minister of Spain in 1931, leading up to the 1932 Coup, 1934 Revolution and 1936 Civil War.

So okay, maybe the real temptation is to claim a different reference class by drawing up a broad theory that backtests well. The broader danger is in finding a theory that backtests well, but isn’t actually predictive, and falling into a kind of Garden of Forking Paths. As usual, there’s a relevant XKCD.

If you go on Wikipedia and read about societal collapse, you’ll find a near-perfect description of America today:

factors such as environmental change, depletion of resources, unsustainable complexity, decay of social cohesion, rising inequality, secular decline of cognitive abilities, loss of creativity, and bad luck.

And if you read Joseph Tainter’s foundational work The Collapse of Complex Societies, you’ll find more of the same, with chapter headings ranging from resource depletion, to catastrophes and social dysfunction.

But note that Tainter is not trying to predict future collapse, merely explain past ones:
Societies do encounter resource shortages, class interests do conflict, catastrophes do happen, and not uncommonly the response does not resolve such problems. A general explanation of collapse should be able to take what is best in these themes and incorporate it. It should provide a framework under which these explanatory themes can be subsumed.

As a result, he doesn’t look at any civilizations that did not collapse in order to determine whether or not these factors are causal. Reading Tainter does not tell us anything about the relative proportion of civilizations that experienced resource shortages, class conflict, etc., and did not fall apart.

Appendix E: Asymmetric Returns

I intended all this as a remedy to otherwise alarmist rhetoric, but don’t take a low base rate of secession to mean that we shouldn’t worry about it. A game of Russian Roulette with a 50-chamber cylinder would still be terrifying.

At first approximation, you might think a ~2% chance of collapse gives you an expected value of 98% of your current life, however you may choose to measure it.

But the real harm isn’t in losing your existing health, property, net worth or other assets, it’s the damage to your long term wellbeing. In the Russian Roulette example, you should obviously be willing to pay much more than 2% of your current assets to avoid playing. Instead, it should be something closer to 2% of your expected assets over the rest of your life, subject to an appropriate temporal discount.

This is further aggravated by diminishing returns from assets to wellbeing. A 2% chance of death is much worse than a 2% tax levied on all future income.
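As a toy illustration of the gap between the naive and lifetime calculations, with made-up numbers (assets normalized to 1, income of 0.1 per year for 40 more years, a 3% annual discount rate):

```python
# Toy numbers, purely for illustration.
p_catastrophe = 0.02
current_assets = 1.0
annual_income = 0.1
years = 40
discount = 0.03

# Discounted value of everything you expect to have over your life.
lifetime_assets = current_assets + sum(
    annual_income / (1 + discount) ** t for t in range(1, years + 1)
)

print(f"{p_catastrophe * current_assets:.3f}")   # naive: 0.020
print(f"{p_catastrophe * lifetime_assets:.3f}")  # lifetime: ~0.066, 3x more
```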

I’m not saying you should rush out today and buy guns, or even canned beans. Just that as silly as it feels to prepare for unlikely events, it is sometimes better to pay for insurance than to pick up pennies in front of a steamroller.

Appendix F: Simple Averages

If the base rates are already hand-wavy, taking the simple average is a frenetic shake. We can easily create semi-redundant rates that increase the weight given to some statistics. US Presidential Elections and 4-Year Spans of US History turn up slightly different results, but essentially encode the same information.

You could do sensitivity analysis as before, or report two different averages, with and without the outlier. I’m not sure what the proper adjustment is other than to again, not take any of this at face value.

In the service of that aim, it’s worth noting that models are always garbage in, garbage out. Using overly sophisticated analytical tools can create the illusion of precision where it doesn’t exist. So yes, it might be worth doing the sensitivity analysis in this case, but it risks lending too much credibility while adding too little information.

Appendix G: Secession, Independence, Civil War, Revolution

I’ve conflated these forms of conflict throughout the post. Partially because it’s messy (the Civil War started out as mere secession), but also because I think this is what readers actually care about.

If I told you the odds of Civil War were x%, and then we got a revolution or military coup instead, you would probably feel cheated.

Appendix H: Metaculus Predictions

Metaculus is similar to a prediction market and weights user predictions by their historical accuracy. It has a Brier score of 0.095 and a log score of 0.120, which indicates it has been broadly reliable. Of course, this is partially a function of question popularity, with the most popular questions receiving thousands of predictions.
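For reference, the Brier score is just the mean squared error between probabilistic forecasts and binary outcomes: 0 is perfect, and unconditionally guessing 50% scores 0.25. A minimal sketch:

```python
# Brier score: mean squared error of forecasts against 0/1 outcomes.
def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident, well-calibrated forecaster scores near 0:
print(brier([0.9, 0.1, 0.8], [1, 0, 1]))  # 0.02
```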

Will the USA enter a second civil war before July 2021?:
1%, 251 predictions

Will at least one US state secede from the Union before 31 December, 2030?:
5%, 51 predictions

[EDIT 10/08/2020]

There’s a systematic bias against predicting the apocalypse.

If you predict the world will end and then it does, you receive no benefit. Equivalently, if you predict the world won’t end and then it does, you receive no harm.

This is less true for only mildly-catastrophic risks, but still applies. You’ll survive, but the institutions set up to reward you for correctness may not.

So the epistemic correction is to tilt a bit more heavily towards apocalyptic thinking, and the behavioral correction is to avoid shaming people who incorrectly predict apocalypse.

In other news:

None of this affects the base rates, but it sure is interesting.

[EDIT 10/13/2020]

Dan Coats, formerly the Director of National Intelligence, issued a call last month to establish a bipartisan election commission.

Again, not good evidence in any direction, but it’s a good reminder that some predictions are self-defeating. The more apocalypse-prophets provide warnings, the more likely they are to be wrong. Another good reason to not be overly harsh in our criticism.

It’s also a reminder that apocalypse is often averted only through heroic effort. Y2K seems like a joke in retrospect, but only because the U.S. spent $150B (inflation adjusted) to prevent it.

The lack of nuclear war is similarly the result of massive ongoing efforts. As is the less-catastrophic-than-possible pace of climate change, the relative lack of antibiotic resistance and so forth.

Beware the Casual Polymath

We live in times of great disaggregation, and yet, seem to learn increasingly from generalists.

In the past, an expert in one subfield of psychology might have been forced to teach a broad survey class. Today, you could have each lecture delivered by the world’s leading expert.

Outside of academia, you might follow one writer’s account to learn about SaaS pricing, another to understand the intricacies of the electoral college, and yet another to understand personal finance. In economic terms, content disaggregation enabled by digital platforms ought to create efficiencies through intellectual hyper-specialization.

Instead, we have the endless hellscape of the casual polymath. A newsletter about venture capital will find time to opine on herd immunity. The tech blog you visit to learn about data science is also your source of financial strategies for early retirement. The Twitter account you followed to understand politics now seems more focused on their mindfulness practice. We have maxed out variety of interests within people, at the cost of diversity across them.

It’s not difficult to imagine how this happened. The flip side of disaggregation is that each would-be expert is able to read broadly as well. The world of atomized content through hyper-specialization isn’t a stable equilibrium. We are all casual polymaths now.

As romantic as the idea seems, I worry it’s grossly suboptimal. Sure, there are cases where combining ideas from disparate fields can lead to new insight, but today’s generalists are not curating a portfolio of skills so much as they are stumbling about. Behavioral economics is the love child of economics and psychology, and early AI researchers maintained a serious interest in cognitive science. What exactly are your cursory interests in space exploration, meta-science and Bayesian statistics preparing you for?

I understand that we can sometimes only “connect the dots looking backwards”. Perhaps there is valuable work in a statistical meta-analysis of aerospace research. But that’s only true if you’re going into some degree of depth in each field.

None of this is meant to dissuade you from becoming a polymath, just to be a little more distrustful of others who claim to be.

1. Polymaths use status in one field to gain capital in another

In 1946, with financial support from his father, JFK won a seat in the US House of Representatives with no political experience. As his father would later remark: “With the money I spent, I could have elected my chauffeur.”

If you saw JFK in 1947, you might have thought “wow, he’s rich, his father was the Chairman of the SEC, and he’s a member of the US House of Representatives, what an impressive guy!” A decade later, you could have added “Pulitzer Prize-winning author” to that list.

But this reasoning is totally backwards. JFK was only able to become a politician because of his wealth. In fact, his father only became SEC Chairman after extensive political donations to FDR. And obviously, his book was ghost-written by his speechwriter.

So you’re justified in being impressed by exactly one accomplishment, and everything else ought to be discounted.

We already understand this intuitively, but only in a limited set of cases. If a pop star becomes an actor, we are not impressed by their wide range of talents. Instead, we understand that popularity is a semi-fungible good.

2. Polymaths are abusing Gell-Mann Amnesia

As I wrote earlier:
You go on Twitter, you read someone’s tweet on a subject you know something about, and see that the author has no understanding of the facts. So you keep scrolling and read their tweets about cancel culture, space exploration and criminal justice reform, totally forgetting how wrong they were before.

In this sense, every tweet is an option with asymmetric returns. If you’re right, you cash out; if you’re wrong, everyone forgets and you lose nothing. The incentive is to ramp up variance, make bold claims in a variety of areas, and hope you’re right some of the time.

Accordingly, authors will make bold claims in a variety of areas, and you may be inclined to believe them even after seeing how wrong they are. Unless you are also a polymath, and a polymath in the same domains, you will probably not be capable of evaluating their competence.

Of course, you might rely on external opinions, which brings us to the last point.

3. Polymaths are evaluated by non-polymaths

Leonardo da Vinci is the most famous polymath of all time and the model omni-competent Renaissance Man.

Leonardo da Vinci also didn’t know math. Isaacson’s biography details numerous episodes in which:

  • Leonardo comes up with a million-dollar business idea, then later realizes his basic arithmetic was off by more than an order of magnitude.
  • Leonardo claims to be a military engineer to gain acceptance at the Milanese court. In fact, he has never built any kind of weapon or siege device.
  • Leonardo claims to have solved the ancient puzzle of doubling the cube. Except his “solution” only works if you can’t tell the difference between the square root of 3 and the cube root of 2.

This last example is especially notable because Isaacson himself doesn’t seem to catch it, instead uncritically praising Leonardo’s discovery. [Details in Appendix]

And yet Wikipedia writes:
…many historians and scholars regard Leonardo as the prime exemplar of the “Renaissance Man” or “Universal Genius”, an individual of “unquenchable curiosity” and “feverishly inventive imagination.” He is widely considered one of the most diversely talented individuals ever to have lived. According to art historian Helen Gardner, the scope and depth of his interests were without precedent in recorded history

Of course the inflation of his mathematical and engineering ability makes sense when you consider that the judges in question are predominantly art historians. Rather than as a Renaissance Man, Leonardo would be better regarded as an exceptional painter with various hobbies.


While an expert in one domain may just be a savant with a “kooky knack”, mastering multiple unrelated skills feels like evidence of general intelligence, or in Leonardo’s case, “Universal Genius”. If someone is good at computer science, epidemiology and finance, surely we can trust their opinion on politics as well?

Except what’s really happening is that we’ve chosen to privilege certain combinations of skills as impressive, while taking others for granted.

A physicist who studies math, writes code for analysis and understands complex systems is not hailed as a polymath. They’re just seen as having the basic set of skills required for their profession. Similarly, a basketball player who can run, shoot and block is not any kind of “polymath”.

You might object that this is because physics and basketball are specific clusters of skills. Running quickly is more closely related to throwing a ball than software engineering is to epidemiology.

This might be true in specific cases, but in general, it’s a coincidence of which skills cluster into occupations. A small business owner who manages their own books, handles sales and manufactures their product is not considered a polymath, no matter how distinct those fields might be. Computational social scientists are not considered polymaths; neither is an OnlyFans creator who single-handedly runs everything from marketing to modeling, nor a translator who has to master Ancient Greek, dive deeply into historical context, and also be a great poet in their own right.


To be clear, there are still good reasons to learn a variety of skills. As Marc Andreessen put it:
All successful CEO’s [sic] are like this. They are almost never the best product visionaries, or the best salespeople, or the best marketing people, or the best finance people, or even the best managers, but they are top 25% in some set of those skills, and then all of a sudden they’re qualified to actually run something important.

He goes on to provide examples, listing Communication, Management, Sales, Finance and International Experience.

I’m broadly in agreement. Learning these skills will probably benefit your career. Just understand that no one has ever been hailed as a polymath because they’re good at both communication and management. They’re just considered basically competent at their job.

I don’t want to dissuade anyone from learning broadly and reading widely. Of course athletes should cross train, and intellectuals should read outside their domain, and software engineers might benefit from public speaking classes.

My point is that we should not trust or glorify people on the basis of their apparent “Universal Genius”. Having a variety of interests is no more a sign of generalized intelligence than being able to walk and chew gum. And if someone does appear to have accomplishments in a variety of domains with fungible currency, their total status should not be a sum or multiple, but merely the status of their single most impressive feat.

So go read your SaaS/Meta-Science/Aerospace blog and revel in the genuine joy of intellectual curiosity. As Tyler Cowen would say, I’m just here to lower the status of polymaths.

Appendix: Doubling the Cube

Here’s the full quote from Isaacson:
These obsessions led Leonardo to an ancient riddle described by Vitruvius, Euripides, and others. Faced with a plague in the fifth century BC, the citizens of Delos consulted the oracle of Delphi. They were told that the plague would end if they found a mathematical way to precisely double the size of the altar to Apollo, which was shaped as a cube. When they doubled the length of each side, the plague worsened; the Oracle explained that by doing so they had increased the size of the cube eightfold rather than doubling it. (For example, a cube with two-foot sides has eight times the volume of a cube with one-foot sides.) To solve the problem geometrically required multiplying the length of each side by the cube root of 2.

Despite his note to himself to “learn the multiplication of roots from Maestro Luca,” Leonardo was never good at square roots, much less cube roots. Even if he had been, however, neither he nor the plague-stricken Greeks had the tools to solve the problem with numerical calculations, because the cube root of 2 is an irrational number. But Leonardo was able to come up with a visual solution. The answer can be found by drawing a cube that is constructed on a plane that cuts diagonally through the original cube, just as a square can be doubled in size by constructing a new square on a line cutting it in half diagonally, thus squaring the hypotenuse.

To be clear, this was not an excusable failing typical of his time: from Wikipedia, cube roots date back to 1800 BCE, and a method for calculating them was given in the 1st century BCE.

The first part is right. The diagonal of a unit square has length √2, and the square constructed with side length √2 has area 2.

But this method doesn’t work with cubes. To get a cube with volume 2, each side needs to have length ∛2. But the diagonal of one face of the unit cube is still √2, and the diagonal through the cube is √3.
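A quick numeric check makes the gap obvious:

```python
# Doubling a unit cube requires a side of length 2 ** (1/3) ≈ 1.26.
print(2 ** (1 / 3))  # ~1.2599, the side length actually needed

face_diag = 2 ** 0.5   # diagonal of one face: ~1.4142
space_diag = 3 ** 0.5  # diagonal through the cube: ~1.7321

print(face_diag ** 3)   # ~2.83: cube on the face diagonal overshoots
print(space_diag ** 3)  # ~5.20: cube on the space diagonal overshoots more
```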

From Wikipedia:
doubling the cube is now known to be impossible using only a compass and straightedge

Isaacson didn’t explicitly say that Leonardo was limiting himself to compass-and-straightedge constructions. Wikipedia does list several solutions via means other than compass and straightedge that had already been discovered in ancient Greece. None of these solutions fit Isaacson’s description of Leonardo’s method.

Searching for “Leonardo da Vinci doubling the cube” or “Leonardo da Vinci Delian Problem” turns up a few results:

A pamphlet for an exhibition at the Louvre:
Various Attempts at Doubling the Cube. The Extension of the Pythagorean Theorem to the Power of 3. Attempt at the Geometrical Construction of Square Roots from 1 to 9
Pen and brown ink, about 1505
Here again, Leonardo endeavours to double the cube by various means – including applying the Pythagorean theorem to volumes rather than surface areas.
Biblioteca Ambrosiana, Milan, Codex Atlanticus, fol. 428R

Which again, is not a solution since the Pythagorean theorem, even extended to n-dimensions, only results in square roots.

A blog matches up the Louvre exhibit with scans of Leonardo’s notebook:
It highlights a different section from the Louvre exhibit, called “Doubling the Cube; An Empirical Solution”, which is just an approximation using an “edge length very slightly greater than 5”.

An article titled Leonardo and Theoretical Mathematics discusses Leonardo’s various attempts to double the cube, and writes:
If the diagonal (or diameter) of a square with a side of 1 is the graphic visualisation of the incommensurable quantity of the square root of two, is the diagonal of a cube with a side of 1 the graphic answer to the irrational number equal to the cube root of 2? The answers are no. The diagonal of the cube is equal to the square root of 3, and not the cube root of 2, which is a smaller number than the square root of 2.

It then goes on to discuss several other attempts that also proved fruitless:
In addition to looking for his own solutions to the duplication of the cube, Leonardo also studied the classical solutions of the ancient Greek mathematicians

As described in the same Wikipedia article, there is an elegant geometric solution, so long as you are able to mark the straightedge.

This is all to say: I’m moderately confident Isaacson was not describing a valid solution.

Funnily enough, when Plato originally posed the problem, he was similarly frustrated. As described in Plutarch’s Quaestiones Convivales from the Moralia:
And therefore Plato himself dislikes Eudoxus, Archytas, and Menaechmus for endeavoring to bring down the doubling the cube to mechanical operations; for by this means all that was good in geometry would be lost and corrupted, it falling back again to sensible things, and not rising upward and considering immaterial and immortal images, in which God being versed is always God.

[EDIT 10/15/2020]

Discussion of this post on Marginal Revolution.

Discussion of this post on Hacker News, mostly people accusing me of being a bot.

Markets Are about Information, Not Incentives

If you spend any time on libertarian Twitter, you’ll hear endless praise for “incentive alignment”, “mechanism design” and of course the king of it all: “markets”.

For the most part, these genuinely interesting and complex ideas have been dumbed down to “pay people to do things”.

Consider Eli’s tweet about a “market” for Lunar Regolith:

Yes, it’s true that NASA is paying companies for soil, and it’s true that this “incentivizes” missions to the moon, but this is neither a market, nor NASA’s intention for the project.

As with any language debate, the question is “defined as X for what purpose”. So let me be more precise. NASA’s project is not a market, in the sense of markets as a mechanism for the efficient allocation of resources.

This is the view laid out by Ludwig von Mises’s foundational Economic Calculation in the Socialist Commonwealth, and Friedrich A. Hayek’s later Economics and Knowledge and The Use of Knowledge in Society which together form the basis of the economic calculation problem.

Quoting Hayek’s The Use of Knowledge in Society:
We cannot expect that this problem will be solved by first communicating all this knowledge to a central board which, after integrating all knowledge, issues its orders. We must solve it by some form of decentralization.

I fear that our theoretical habits of approaching the problem with the assumption of more or less perfect knowledge on the part of almost everyone has made us somewhat blind to the true function of the price mechanism…

Are any of these ideals fulfilled by NASA’s project?

Of course not. This is merely a central agency setting an arbitrary price determined by technocratic calculus. From a mechanism design perspective, it’s equivalent to a prize.


Except that NASA’s project is not even an incentive. Eli neglected to mention this, but the actual price range set by NASA is just $15,000 to $25,000.

This is a tiny fraction of the millions or billions required for a mission. As a lower bound, a SpaceX launch costs $62 million and Jim Bridenstine (who announced the project and runs NASA) estimates a cost of $30 billion to develop a “sustainable presence on the moon”.

So no, NASA is not “incentivizing” missions to the moon, at least not in any meaningful sense.

If the Lunar Regolith project doesn’t aggregate information, and doesn’t provide an incentive, what’s it for?

From Bridenstine’s remarks at the Secure World Foundation’s Summit for Space Sustainability:
What we’re trying to do is make sure that there is a norm of behavior that says that resources can be extracted and that we’re doing it in a way that is in compliance with the Outer Space Treaty,

And from the launch announcement:
The ability to conduct in-situ resources utilization (ISRU) will be incredibly important on Mars, which is why we must proceed with alacrity to develop techniques and gain experience with ISRU on the surface of the Moon.

If Eli had spent any time at all reading about this before shitposting on Twitter, it would be immediately obvious that NASA’s purpose is primarily in setting a regulatory precedent, not creating a market.


To his credit, Eli tacks this on as a “bonus”:

But this strikes me as even more disturbing than a simple omission. The fact that he’s aware of the project’s role as regulatory precedent but not of that role’s relative importance suggests a kind of willful ignorance. Eli is so fixated on promoting the “market” narrative that he’ll reframe available evidence to support it.

More cynically, I wonder if our obsession with “bold polyglot intellectuals” is just a systematized abuse of Gell-Mann Amnesia.

You go on Twitter, you read someone’s tweet on a subject you know something about, and see that the author has no understanding of the facts. So you keep scrolling and read their tweets about cancel culture, space exploration and criminal justice reform, totally forgetting how wrong they were before.

In this sense, every tweet is an option with asymmetric returns. If you’re right, you cash out; if you’re wrong, everyone forgets and you lose nothing. The incentive is to ramp up variance, make bold claims in a variety of areas, and hope you’re right some of the time.

Unlike the regolith project, Twitter is an actual market for ideas, albeit one without a price.