Peter Thiel and the Antichrist

If someone says “because Jesus is real, we should overthrow the US government”, they would get dismissed as a religious nut. That sounds bad, but it is strongly preferable to the alternative where they only say the second half of the sentence.

Peter Thiel’s recent Antichrist lectures have attracted a lot of attention, nearly none of it related to the “second half” of his sentence. Peter is a Straussian, so this is not a coincidence. [1]

It is nothing new for an intellectual’s ideological opponents to strawman his views. [2] What I find interesting is how little even his proponents seem to care about what is actually being said. [3] This could be fine if Peter’s recent lectures were a kind of call to arms, and this was now merely a matter of true believers executing on a vision. But you don’t spend four hours lecturing on an idea that’s simple or easy.

And in fact, Peter’s tone is more troubled and contemplative than triumphant, and most of what he poses is closer to a problem than to a solution. Perhaps we can say that Armageddon is preferable to Antichrist, but both are obviously just so, so bad. And that would be clear if his “fans” weren’t busy trying to pick sides. [4]

So what are the questions? I see basically two tensions, one you could think of as “Thiel v1” and the other as v2, but they’re connected to the same undercurrent:

  1. Without a monopoly, you are constrained by competition and can’t really be free. But with a monopoly, you have the freedom to become arbitrarily dysfunctional. An immortal being can become infinitely decrepit.
  2. Since technology can progress, it may eventually become powerful enough to kill everyone. But a force capable of regulating this technology would be at least as powerful [5], and stagnation comes with its own dangers.

I see these as more like “themes” than paradoxes, with many “middle ways” through. But finding those paths in any particular instance remains up to us. Thiel sees our trajectory through history as undetermined, and our choices as consequential. Ignoring these issues is thus a tempting way to shirk responsibility, but it is not a viable path forward.

In these matters, as in all others, we do have a choice. And that is the far scarier reality to contemplate.


[1] As I wrote earlier:

the truth is more mundane. It turns out that the art of Straussian writing isn’t difficult at all… you can espouse over and over again in perfectly literal terms… and still no one will listen.

Or from Nadia years ago:

Ideas are fascinators that sparkle and dangle in front of the creator, distracting an eager audience from the person behind the curtain.

Peter is a billionaire and lives in America, and it would be reasonable to ask why he needs any kind of protection. We are reportedly “post” cancel culture, and Peter has said enough to get canceled 10 times over anyway, so why hedge or distract?

I don’t think there’s anything complex happening here. Peter is a Christian and a Girardian and is just super sensitive to persecution. Or more generously, he’s attuned to the way that things can always take a sudden turn south.

For what it’s worth, this is my answer to most questions about Peter. His beliefs are basically what’s on the tin. It’s just that you actually have to read the tin. This is why I am unlikely to write any kind of “Peter Thiel Exegesis”. It would just be a list of quotes taken literally and in context, which at the limit is just called reading the source material.

[2] Ben West on EA Forum somewhat attempts to engage with Peter’s ideas, but then jumps into some pretty serious errors.

Part of this is just Ben wanting Peter’s claims to be about EA specifically, rather than merely in the abstract. It is always fun to be named, even as an enemy.

The more substantial mistake Ben makes is in thinking that Peter sees AI as the only way out of stagnation. Quoting his NYT interview:

If you don’t have A.I., wow, there’s just nothing going on.

But Peter means this as a criticism, not an endorsement. I.e., it’s a really bad state of affairs that without AI there seems to be nothing else going on, and we should instead try to move towards a more dynamic world where other things are also moving forward.

More explicitly, here’s Thiel from his conversation with Tyler on Political Theology:

Maybe the premise of your question is what I’d challenge. It’s, why is AI going to be the only technology that matters? If we say there’s only this one big technology that’s going to be developed, and it is going to dominate everything else, that’s already, in a way, conceding a version of the centralization point. So, yes, if we say that it’s all around the next generation of large language models, nothing else matters, then you’ve probably collapsed it to a small number of players. And that’s a future that I find somewhat uncomfortably centralizing, probably.

The definition of technology — in the 1960s, technology meant computers, but it also meant new medicines, and it meant spaceships and supersonic planes and the Green Revolution in agriculture. Then, at some point, technology today just means IT. Maybe we’re going to narrow it even further to AI. And it seems to be that this narrowing is a manifestation of the centralizing stagnation that we should be trying to get out of.

[3] One possible explanation is that Thiel’s supporters are merely respecting his privacy given the invite-only nature of the recent SF events. But other lectures on the same topic were given publicly and have been available online going back a year.

For that matter, I note that it is only after the most recent lectures that Thiel’s critics have really gotten excited about the whole Antichrist thing. When he spoke openly, this was not news. The news is merely about the fact of the event rather than the content itself.

[4] On several occasions, Thiel has evoked the image of a “Scylla of all these existential risks and the Charybdis of this political totalitarian catastrophe.” And while he’s clearly anti-Charybdis, it would be a huge mistake to frame him as pro-Scylla.

One of his talks carried the notable title “Anti-Anti-Anti-Anti Classical Liberalism”, highlighting the extent to which he does not see a simple straightforward solution, merely objections to objections. Or from his conversation with Tyler:

it’s not a pro-tech argument — this is sort of an anti-anti-tech argument — is that if we, again, talk about all these existential risks today … one other existential risk is a one-world totalitarian government.

Or just explicitly and object level from the same interview:

There are apocalyptic fears around AI that I think deserve to be taken seriously.

[5] That this is clearly not true in general only somewhat diminishes the importance of the theme. And in fact, figuring out how to make it even less true feels to me one of the most promising resolutions.