A Subconscious Dilemma

The reintegration into society has not been easy. Long story short, I’m taking Buspirone for anxiety, and somewhat surprisingly, it’s working very well. That’s not the issue.

The issue is that like most psychoactive prescription drugs, Buspirone comes with side effects.

In my case, persistent and disturbing nightmares.

Of course, once I wake up that all goes away. I feel refreshed, I can go about my day. But there’s a question nagging in the back of my mind: how do you weigh the experiences of your unconscious self?

It’s not unconsciousness exactly. DreamMe is having experiences, and is thus a conscious being. But I can’t relate to their struggles. That’s in part the standard alienation associated with a being you’ve never met, can’t communicate with, and don’t have empathy for, but it’s also alienation caused by the fact that the entirety of their “struggles” are made up!

That’s not even the weirdest part of this whole saga. Dreamers–that’s to say, me when I’m dreaming–experience severe time dilation. It feels like you’ve experienced an entire lifetime, but in actuality, each dream only lasts 5 to 20 minutes [1]. So DreamMe thinks they’ve experienced an eternity of terror, but once I’m awake, I can’t really bring myself to care.


Once you start thinking about this too hard, it begins to raise all sorts of very thorny questions. So let’s put that aside for a second, bite the bullet, accept the premise that DreamMe really does have moral patienthood on par with the rest of us.

Though, first, we have to disambiguate two separate dilemmas. Am I worried about the suffering of DreamMe for altruistic reasons, or for egoist ones? In some sense, the egoist concerns obviate the altruistic ones. DreamMe is, after all, myself. And–pardon the grammatical confusion–if I don’t care about their suffering as something happening to myself, why should I care about it impartially?

But fine, acknowledge that a subconscious being is still conscious, accept their reality as legitimate, and bridge the is-ought gap to convince ourselves that the moral harm occurring is worth preventing.

What then? There’s a simple question of weights: the cost of DreamMe’s psychological panic versus the benefits of AwakeMe’s decreased anxiety. But then there’s a more serious question, the one that stops me from accepting this line of reasoning outright: where does it all end?

Once I accept the moral importance of a subconscious being and let it become a decision-relevant factor, one that has the potential to outweigh even my own mental wellbeing, what else do I have to buy into? Should I accept that when I take (some) painkillers, I’m letting my mind off the hook while allowing my body to suffer? That when I take anti-anxiety meds, I’m silencing a part of my consciousness, even while knowing that it continues to exist and run in the background? [2]

You might object that this proves too much. That in the case of DreamMe, I at least know that there is suffering taking place, and a (sub)conscious being experiencing it all. But look, we’re talking about experiences I can barely remember more than shreds of, and a recollection based on a few seconds of offhand impressions before I get up, turn off the alarm, and begin my day. Recollections that we know are seriously skewed, not to mention at odds with consensus reality.

So if this all sounds ridiculous and overly scrupulous and outright silly, then fine, I agree. I don’t like where this road is going either.


If you’ve ever had anxiety, you’ll find this whole framing deeply ironic.

Anxiety is, at its heart, all about made up struggles. You’re worried strangers on the street are judging you, even though they couldn’t care less. You’re worried your friends don’t like you, even though they actually do. And so on.

So what makes nightmares, another form of worrying about totally made up things, any different?

One answer is that they lack even a referent. Anxiety is at least about something happening in the real physical world, whereas nightmares are conjured purely out of the ether.

Except that’s not quite right. My nightmares are surreal, unreal even, but they’re definitely fixed in reality. They are, albeit in a very loose sense, about “real” things.

Towards the end of Harry Potter, Dumbledore tells our protagonist “Of course it is happening inside your head, Harry, but why on earth should that mean that it is not real?” On first read, it’s a noble nod to the inherently intersubjective nature of our constructed reality. Government currency is just fiat. Scientific truths are just expert consensus [3]. The “text” does not exist, but is rather created through interpretive dance between author and reader.

On the other hand, intersubjectivity also acknowledges that not all realities are created equal. If you told me that actually, funding your vacation is the world’s most impactful cause because it would make you really immensely happy, I’m under no obligation to believe you. Similarly, if you told me that in a matter of 5 minutes, you experienced hours of disturbing hallucinations, I would have no need to treat your problems with much gravity.

Since at least the 3rd century BC, when Master Zhuang wrote about being a butterfly, the good, sober, conscious, awake, and unscrupulous among us have been telling those puerile airheads to get off the grass [4] and come back to reality.


That would be a nice conclusion, but it’s the wrong place to stop.

Unfortunately for the sober among us, it turns out that society owes great debts to altered states of consciousness. As SlimeMoldTimeMold recounts in tremendous detail:

Mullis himself makes it pretty clear that LSD deserves a lot of the credit for his discovery. “Would I have invented PCR if I hadn’t taken LSD? I seriously doubt it,” said Mullis. “I could sit on a DNA molecule and watch the polymers go by. I learnt that partly on psychedelic drugs.” If this is even partially true, most progress in bioscience in the past 40 years was made possible by LSD.

Mullis went on to win the Nobel prize for the invention of PCR, described in no humble terms by Wikipedia as “fundamental to many of the procedures used in genetic testing and research… now a common and often indispensable technique used in medical laboratory research”.

As if that wouldn’t, on its own, be more than enough, SMTM goes on to credit drugs for work by Newton (caffeine), Sigmund Freud (cocaine), Jules Verne and Thomas Edison (cocaine in wine), alongside Steve Jobs, The Beatles, and Douglas Engelbart (LSD).

Dreaming is not quite the same as being on drugs, but a relevant point stands. Even if these experiences, and the worlds they take place in, appear divorced from our own, we cannot simply write them off as fanciful diversions.

That’s to say: I certainly don’t believe in Machine Elves as literal physical entities in the material world, but if they can factor primes, who cares?

Again, dream states are not exactly drug-induced, but there are similar reasons to believe they bear some meaningful connection to consensus reality. Larry Page (reportedly) came up with the idea for Google in his sleep. James Watson (again, reportedly) had a key insight about the structure of DNA while dreaming (Crick was, for his part, (reportedly) on LSD). Dmitri Mendeleev, who created the modern periodic table of elements, once credited his work to dream-space:

I saw in a dream a table where all the elements fell into place as required. Awakening, I immediately wrote it down on a piece of paper. Only in one place did a correction later seem necessary.

Equally anecdotally, I wrote the entirety of Become a Billionaire on my phone at 3am after a series of vivid dreams. I don’t track page views, but at least on the basis of Twitter hype, it’s easily my most successful post.


At some point, I’m obligated to tell you that none of this is either an endorsement or criticism of Buspirone. If you have anxiety, you should consider seeing a psychiatrist. If you don’t have anxiety, I can’t recommend taking it for the dreams, since they’re mostly unpleasant. But hey, I’m not a doctor, and it’s your (or at least DreamYou’s) life.

That leaves us with a dilemma “does my subconscious pain matter?”, followed by an attempt at resolution “screw the immaterial reality”, and finally an oddly utilitarian counterpoint “the immaterial reality seems to have an awfully good track record”.

Throughout, I’ve been comparing the pain of nightmares to the pain of anxiety. That the cure for the latter causes the former is perhaps no coincidence. The cliche psychoanalytic interpretation is that the feelings are not erased, merely repressed.

So this whole piece has been an exercise in taking bizarre ideas too seriously, but it’s also an attempt at expression. A year of lockdown was interesting, and, in many ways, can be credited for the creation of this blog and everything that’s come since. But it was also not, in the most conventional sense of the phrase, mentally healthy.

It’s not really that I’m having trouble reintegrating into society. It’s that on some level, I don’t even want to anymore. [5] Which makes the problem not only moral, but proleptic. It’s not a question of wants, but of self-transformation. Should I turn myself into a person who spends less time on the internet and more time in meatspace? Should I stop taking Buspirone and experience more anxiety? It’s rare to have such clear decisions and axes on which to affect substantial questions of personality, but this is a transformative moment after all.

Or as John Green should have said: “You don’t get to choose if you get hurt in this world, but you do have some say in which ‘you’ exists to feel it.”

I hope you like your choices.


Footnotes
[1] The moral importance of time-dilation turns out to be an absolutely fascinating question. For a more serious discussion, see Jason Schukraft on the subjective experience of time, critical flicker-fusion frequency and related welfare implications.

[2] That the subconscious anxious self continues to “run in the background” is a matter of some debate, but is common testimony among people taking drugs for a variety of mental health problems. It’s not that the feeling goes away, but that you can choose to ignore it.

[3] Not to mention, the selection of those experts is itself merely the consensus designation of other experts!

[4] Though ironically, the equivalent modern saying is “touch grass”.

[5] That’s not even to mention the specific difficulties of pseudonymous blogging in a newly social world. As you may have noticed, Batman has allies, but Bruce Wayne has no friends.

Which is not to say I’m a superhero in this situation. But we should ask, as long as it’s come up, why so many of the pseudonymous role models tend to be. Don’t ordinary people also need protection? Don’t you also want to be safe? This isn’t about cancel culture, or even Balaji’s Westphalian peace, it’s about trying to hold back at least some small part of yourself, for yourself. Whatever that even means at this point.

How Long Should you Take to Decide on a Career?

According to a recent survey from Rethink Priorities, when asked what best describes their current career, respondents replied:

  • Building flexible career capital and will decide later (18.7%)
  • Still deciding what to pursue (17.2%)

I was struck by how perfectly this aligns with the optimal solution to the Secretary Problem: given N options, and subject to certain constraints, you should evaluate 37% of them before committing (those two survey responses add up to 35.6%). This is mostly coincidental, but it leads to a longer exploration of real-life applications of SP-like dynamics.
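The classic 37% rule is easy to check numerically. Here's a minimal sketch (not the notebook's code; names are mine) that simulates the standard cutoff strategy:

```python
import random

def secretary_trial(n, cutoff):
    """One classic-SP trial: pass on the first `cutoff` candidates,
    then take the first later candidate better than all of them."""
    ranks = random.sample(range(n), n)  # a random ordering; higher = better
    best_seen = max(ranks[:cutoff])
    for r in ranks[cutoff:]:
        if r > best_seen:
            return r == n - 1  # did we select the overall best?
    return False  # best was in the prefix; we default to the last candidate

def success_rate(n=100, cutoff=37, trials=20_000):
    return sum(secretary_trial(n, cutoff) for _ in range(trials)) / trials
```

With `cutoff=37` out of `n=100`, `success_rate()` hovers around 0.37, matching the familiar 1/e result.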

Of course, SP is just a toy model. For our use, the two most problematic assumptions are:

  • Binary Payoff: the evaluator’s goal is merely to maximize their probability of selecting the best candidate, meaning 2nd best is just as bad as the worst
  • No Opportunity Cost: evaluation is treated as free; there’s no cost to making a selection after seeing 100 candidates rather than after 10

I refer to an adjusted SP without these assumptions as the Modified Secretary Problem (MSP).

This post is in two parts:
1. Optimal Stopping Points for the MSP
2. Discussion

All code is available here as a Colab Notebook.

Optimal Stopping Points for the MSP

Setup: To adjust against the disanalogous assumptions, we’ll make the following modifications:

  • Utility is proportional to the quality of the selected candidate. Unlike the regular SP, 2nd best is almost as good as 1st, and much better than worst.
  • Utility is proportional to the number of candidates remaining after selection. This gives an opportunity cost to evaluation, and introduces a kind of explore-exploit tradeoff.

Additionally, we’ll consider possible scenarios where:

  • Candidate quality is uniformly, normally or log-normally distributed
  • Utility is either gained:
    • Purely based on whether or not you picked the best candidate (BINARY)
    • Directly proportional to the quality of the selected candidate (PROPORTIONAL)
    • Directly proportional to the quality of the selected candidate, multiplied by the number of remaining candidates (TIME)
    • Gained at each time step directly proportional to the quality of the candidate currently being evaluated, plus the quality of the selected candidate multiplied by the number of candidates remaining at the end (TIME_CONTINUOUS)

Since BINARY payoff doesn’t depend on the quality distribution, we run a 3x3 matrix of scenarios, plus the initial base condition.
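The four payoff conditions above can be written down directly. This is a sketch of how they might be computed (`qualities`, `pick`, and `utility` are my own hypothetical names, not necessarily the notebook's):

```python
import numpy as np

def utility(qualities, pick, condition):
    """Utility of selecting candidate at index `pick` from the array
    `qualities`, under each of the four payoff conditions."""
    n = len(qualities)
    remaining = n - 1 - pick  # candidates left unseen after selection
    if condition == "BINARY":
        return float(qualities[pick] == qualities.max())
    if condition == "PROPORTIONAL":
        return float(qualities[pick])
    if condition == "TIME":
        return float(qualities[pick]) * remaining
    if condition == "TIME_CONTINUOUS":
        # utility accrues from every candidate evaluated so far, plus the
        # selected candidate's quality over the remaining time steps
        return float(qualities[: pick + 1].sum() + qualities[pick] * remaining)
    raise ValueError(f"unknown condition: {condition}")
```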

Approach: There is an analytical solution to the Secretary Problem described in Appendix A, but we’ll be focusing on a numerical solution (i.e. simulation). I start by replicating the original result to test the validity of the simulation approach, then find optimal stopping points for the modified scenarios.

Pseudocode

  • For each possible stopping point (S), run t=10,000 trials
  • In each trial, generate n=100 candidates with quality sampled from the scenario’s distribution
  • Find the best_initial candidate among the first S candidates
  • Iterate through the remaining candidates in order, selecting the first candidate with quality better than best_initial

This is an implementation of the optimal strategy for the original SP described in Analysis of heuristic solutions to the best choice problem (Stein, Seale and Rapoport, 2003).
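Fleshed out, the simulation might look roughly like this (a sketch under the assumptions above; the samplers, names, and structure are mine, not necessarily the notebook's):

```python
import numpy as np

SAMPLERS = {
    "UNIFORM":    lambda n: np.random.uniform(size=n),
    "NORMAL":     lambda n: np.random.normal(size=n),
    "LOG_NORMAL": lambda n: np.random.lognormal(size=n),
}

def cutoff_rule(qualities, stop):
    """Pass on the first `stop` candidates, then take the first one
    better than all of them (defaulting to the last candidate)."""
    best_initial = qualities[:stop].max()
    later = np.nonzero(qualities[stop:] > best_initial)[0]
    return stop + later[0] if len(later) else len(qualities) - 1

def mean_utility(stop, dist="UNIFORM", condition="PROPORTIONAL",
                 n=100, trials=10_000):
    total = 0.0
    for _ in range(trials):
        q = SAMPLERS[dist](n)
        pick = cutoff_rule(q, stop)
        if condition == "BINARY":
            total += float(q[pick] == q.max())
        elif condition == "PROPORTIONAL":
            total += q[pick]
        elif condition == "TIME":
            total += q[pick] * (n - 1 - pick)
    return total / trials
```

Sweeping `stop` from 1 to n−1 and taking the argmax of `mean_utility` gives the optimal stopping point for each scenario.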

Results
First, we manage to replicate the original result, yielding a stopping point of 34%. This is close to the optimal solution of 37%, but clearly noisy.

Numerical solution (mine, left) compared to analytical solution from Stein et al., 2003

Next, we’ll run the MSP for the matrix of initial conditions.

Table of Results

              BINARY   PROPORTIONAL   TIME   TIME_CONTINUOUS
UNIFORM         35           9          2            3
NORMAL         N/A           8          1            5
LOG_NORMAL     N/A          24          7           12

Charts

Sanity Checks
Let’s begin with a quick sanity check of the results. All scenarios remain characterized by a U-shaped curve. Stopping too early results in setting too low a bar and satisficing too early. But stopping too late results in not having enough candidates left to evaluate, and risks going home empty-handed (technically, defaulting to the last candidate in the pool).

The time-sensitive conditions all result in substantially earlier stopping points than their non-temporal equivalents. Again, this makes sense. Taking into account opportunity costs will push the decision time earlier, providing more time to “exploit” the benefits of a good candidate selection.

It also makes sense that TIME_CONTINUOUS stops a bit after TIME. Since utility is gained during the evaluation process, the effect of opportunity costs is dulled.

Discussion

The Difficulty of Reasoning from Toy Models, and Ambiguity of Increased Realism
We’ve adapted the secretary problem to resolve a couple of key disanalogies to the real-life problem of selecting a career. These modifications tend to push the optimal stopping point much earlier, with one scenario advocating a stopping point of just 1%. This bears two interpretations:

  • Object-level: Take this result literally, and apply it to major life decisions
  • Meta-Level: Since the solution is so highly subject to model parameters, don’t take any of this too literally

Note that the model parameters are not arbitrary, so this is different from just conducting sensitivity analysis and declaring the whole model unreliable. I genuinely feel that the modifications made to the original SP make the model more realistic.

Having said that, it is not necessarily true that a more realistic model will yield better results. Going from UNIFORM+PROPORTIONAL to NORMAL+PROPORTIONAL, you are arguably getting a more realistic model, but the stopping point goes from 7% to 19%. NORMAL+TIME_CONTINUOUS is perhaps even more realistic, but results in a subsequent drop of the stopping point from 19% down to 4%.

So the fact that this discussion is more nuanced than the original problem has some benefits, but doesn’t necessarily indicate that results are more applicable.

This is reminiscent of The Atlantic’s “The Curse of Econ 101”, arguing that too naively applied, economic reasoning can be more misleading than useful. Econ-101 is an important step on the path to reasoning rigorously about difficult problems, but there’s no guarantee that taking the step will make your decisions better in the short-run.

Intuitions are Arbitrarily Bad
Given the flaws of formal models, we might wish to retreat to a more intuitive stance. Abstractly, you could even extend this to a broader critique against rationalism, or against modernism, or against planning and so forth.

Tanner Greer disagrees. Although intuition, and its cousin cultural tradition, were helpful in the past, our current world is too bizarre. He goes on to conclude:

The trouble with our world is that it is changing… What traditions could their grandparents give them that might prepare them for this new world? By the time any new tradition might arise, the conditions that made it adaptive have already changed… This may be why the rationalist impulse wrests so strong a hold on the modern mind. The traditions are gone; custom is dying. In the search for happiness, rationalism is the only tool we have left.

Intuition is not quite the same as custom, but it’s related. Your intuitions might stem from an evolutionary background, or advice from your parents, or a broader set of cultural norms. But these are all maladapted to the current moment, and to your current circumstances.

At the extremes, intuition can easily veer into neurosis. It’s easy to feel “I can’t commit to a career path until I’ve seen more of them, I’m always afraid that there’s a better opportunity around the corner.” Or alternatively, “What I have now is good enough. I should be grateful for this opportunity, and not try too hard to improve my life.”

Formal models might not be right, but they can at least help disabuse us of even worse mental models.

Further Disanalogies and Alternative Strategies
So far, we’ve analyzed different optimal stopping points, but only for a single strategy (look at the first K candidates, identify the best_initial, then pick the best candidate from the remaining pool better than best_initial). This is one reasonable approach, but it’s not the only one. From Stein et al. on alternative strategies:

The Cutoff Rule is the one we’re familiar with, and results in the highest peak, making it superior for the original SP. However, the Successive Non-Candidate Rule peaks earlier, making it potentially superior for scenarios that incorporate opportunity cost. Modifying the simulation code to incorporate this strategy is a promising avenue for future work.
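As a sketch of that future work: my reading of the Successive Non-Candidate Rule in Stein et al. is that you accept the first relative-best applicant who appears after a run of k successive non-candidates (applicants who don't beat everyone before them). A tentative implementation, under that reading:

```python
def successive_non_candidate(qualities, k):
    """Select the first candidate (a relative best so far) appearing after
    a run of k successive non-candidates. Defaults to the last applicant."""
    best_so_far = float("-inf")
    run = 0        # current streak of non-candidates
    armed = False  # has a streak of length k been observed?
    for i, q in enumerate(qualities):
        if q > best_so_far:      # a "candidate": beats everyone before
            if armed:
                return i
            best_so_far = q
            run = 0
        else:                    # a non-candidate
            run += 1
            if run >= k:
                armed = True
    return len(qualities) - 1
```

Dropping this in place of `cutoff_rule` in the simulation, and sweeping over k instead of S, would let us compare the two strategies under opportunity cost.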

In real life, there are all sorts of other strategies we can imagine. The candidate pool is not just an ordered list of opportunities to step through linearly, it’s a huge and dynamic space of possibilities you can jump to in any order.

Job searches in particular are not nearly as blind. You can take a job, consider the particular features that would have improved it, and then seek out subsequent opportunities on the basis of that knowledge. You might also ask friends, read about other people’s careers, take some kind of career aptitude evaluation, and so on.

Additionally, the SP and MSP consider only relative knowledge. As the evaluator, all you know about a candidate is how it ranks compared to previous candidates. In real life, there is some capacity, albeit limited, for more absolute evaluations. A job in a coal mine would (probably) not just be worse than any job I’ve had before, it would be clearly and dramatically so.

Finally, job searches are highly path-dependent. You don’t just “try out” being a PhD student to see if you like it. You pursue that particular credential in the service of gaining access to specific further opportunities, some of which you might not even get to evaluate before taking on a massive commitment. Similarly, you don’t get to “try out” being a billionaire startup CEO until you spend years in other “jobs” on the path to get there.

Empirical Data from the Effective Altruism Community
According to a recent survey from Rethink Priorities, when asked what best describes their current career, Effective Altruists replied:

  • Building flexible career capital and will decide later (18.7%)
  • Still deciding what to pursue (17.2%)

These percentages add up to 35.6%, which is surprisingly close to the optimal solution to the original SP (~36.8%), but very far from the solutions I propose here.

It’s worth acknowledging that this is not a uniform sample of people at random points in their career. The survey also notes that the mean age is just 30 (median 27). So the respondents are largely early-career, and precisely in the period of life where “exploration” takes precedence over “exploitation”.

There are two additional dynamics to consider.

First, many Effective Altruists view causes as having incredibly high variance, on a very skewed distribution. This results in a very high “Moral Value of Information”. Per Bykvist, Ord, and MacAskill in Moral Uncertainty:

it’s plausible that the most important problem really lies on the meta-level: that the greatest priority for humanity, now, is to work out what matters most, in order to be able to truly know what are the most important problems we face.

Analogously, career choice may involve various meta-strategies, including:

  • Spending (relatively) a lot of time evaluating different paths before committing
  • Working directly on cause prioritization
  • Building flexible career capital while we (collectively) make progress on identifying the most important problems

This tendency is further encouraged by the EA community’s appreciation for exponential growth curves. Rather than more linear views where it’s important to take advantage of known and proximate opportunities, the exponential view broadly encourages investment on the meta-level, or investments in the rate of growth itself.

Second, and in stark contrast, Effective Altruists might face an increased sense of urgency, and a need to begin doing direct work as soon as possible. As I argued earlier:

According to an Open Philanthropy estimate and AI Expert Surveys, there’s a 50% chance of transformative Artificial Intelligence emerging by around 2050… If you take this idea seriously, we should be obsessed with the short term to the exclusion of all other timescales.

So while a human lifespan (in the UK) is 81 years, with a retirement age of 65, timelines might be aggressively compressed if everything changes in 29 years. For the median EA at age 27, working life might only last until age 56.

Proleptic Career Choice
So far, we’ve assumed that the perceived quality of applicants is stable. In fact, the process of exploration may itself entail a shift in the evaluator’s desiderata. Perhaps working a job causes them to change their beliefs about the quality of subsequent jobs, alters their personal circumstances, or even effects a deep transformation on the level of values. Consider:

  • Carol, an ambitious young Stanford grad, initially ascribes high value to Venture Capital and Entrepreneurship. After taking a job as an associate at an investment firm and seeing hundreds of failed startups, she becomes more hesitant to start a company herself.
  • Seeking financial stability, Peter initially places the greatest value on investment banking, followed by software engineering. After a stint in software, he’s earned enough money to retire, and now places more weight on non-financial aspects of future jobs, causing i-banking to fall in relative rank.
  • After moving to Chicago and experiencing frigid winters, Eve starts to value warmth more heavily and places higher value on future jobs in California and Florida.

Particularly savvy agents may actually take a job they don’t value, expecting it to change their values for the better:

  • A burgeoning Effective Altruist from London has no first hand experience with direct aid, and can’t really relate to the plight of the very poor. Nevertheless, they take a job in global development, hoping that they’ll develop a better appreciation for the role once they’re already doing it.

Agnes Callard describes this internal tension at length:

One characteristic of someone motivated by these complex reasons… is some form of embarrassment or dissatisfaction with oneself. She is pained to admit, to herself or others, that she can “get herself” to listen to music only through those various stratagems. She sees her own motivational condition as in some way imperfectly responsive to the reasons that are out there. Nonetheless, her self-acknowledged rational imperfection does not amount to akrasia, wrongdoing, error, or, more generally, any form of irrationality. Something can be imperfect in virtue of being undeveloped or immature, as distinct from wrong or bad or erroneous. (There is something wrong with a lion that cannot run fast, but there is nothing wrong with a baby lion that cannot run fast.) When the good student of music actively tries to listen, she exhibits not irrationality but a distinctive form of rationality.

…Thus I will defend the view that you can act rationally even if your antecedent conception of the good for the sake of which you act is not quite on target—and you know that. In these cases, you do not demand that the end result of your agency match a preconceived schema, for you hope, eventually, to get more out of what you are doing than you can yet conceive of. I call this kind of rationality “proleptic.”

In some version of this view, career choice is not merely a matter of evaluation and selection, but of active exploration, information-seeking, and intentional self-modification.

It’s important to understand this as a dynamic process. Leaving behind the toy model, I’m suggesting that career choice takes part in a self-modulating cycle of:

  • Trying out a job
  • Updating your beliefs and values as a result
  • Imposing a new ranking function on the basis of those changes
  • Seeking out a next job on the basis of that novel ranking
  • …and so on

This is not merely path-dependence. It is a kind of profound illegibility. If the loss function is itself updating in real-time, all optimization techniques fail.
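To make the cycle concrete, here's a toy sketch (the names, feature representation, and drift rule are all my own invention, purely for illustration) of an agent whose ranking function drifts toward the features of each job it holds:

```python
def career_search(jobs, weights, steps=5, drift=0.2):
    """Toy model of the self-modulating cycle: pick the best job under the
    current values, then let those values drift toward the chosen job's
    features. Jobs are dicts mapping feature names to values."""
    history = []
    for _ in range(steps):
        if not jobs:
            break
        # rank the remaining jobs under the *current* value weights
        job = max(jobs, key=lambda j: sum(weights[f] * v for f, v in j.items()))
        jobs = [j for j in jobs if j is not job]
        history.append(job)
        # holding the job shifts the agent's values toward its features
        for f, v in job.items():
            weights[f] = (1 - drift) * weights[f] + drift * v
    return history
```

Even in this tiny model, the job chosen at step one reshapes the ranking applied at step two, which is exactly what makes the overall trajectory illegible to a fixed optimizer.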

I can think of one avenue for salvation. Earlier, we discussed the case of an Effective Altruist trying to “change their values for the better.” Rather than “values” on the level of “care for animals” or “financial stability”, agents could be modeled as having “meta-values” on the level of, for example:

  • Taking on values that lead to long-term satisfaction.
  • Aligning emotional motivations with cognitive beliefs about what is right.
  • Better approximating a “correct” moral view.

If at least these meta-values were stable, the problem would be at least partially resolved.

See also
Robert Wiblin – How replaceable are the top candidates in large hiring rounds?
Stein, Seale and Rapoport, 2003 – Analysis of heuristic solutions to the best choice problem
Chapter 1 of Algorithms to Live By.
Robert Wiblin – The ‘secretary problem’ is too bad a match for real life to usefully inform our decisions — so please stop citing it

Varieties of Deterministic Experience

What determines the fate of our world? Depending on your views, there may be a few different macro narratives:

Memetic Determinism

There are ideas floating around with various evolutionary properties. Some of them are really good at embedding in our minds, some are really good at making their hosts spread them further. The best memes get stuck in your brain, compel you to attain a position of great prestige, credibility and power, then insist that you spread them as wide and far as possible.

There are also meme-complexes (Memeplexes) with symbiotic or parasitic relationships. For example, some people are compelled to become Venture Capitalists, but it’s useless without the corresponding meme that compels people to become startup founders.

From Nadia’s The tyranny of ideas:

Rather than viewing people as agents of change, I think of them as intermediaries, voice boxes for some persistent idea-virus that’s seized upon them and is speaking through their corporeal form. You might think of this as “great prophet theory”.

Ideas ride us into battle like warhorses. We can witness, participate in, and even lead these battles, but their true meaning eludes us. We don’t really know where ideas come from, nor how to control them.

See Also
Wikipedia – Memeplex
Dawkins – The Selfish Gene
Joe Carlsmith – The innocent gene
Bernard Beckett – Genesis
Creanza et al. – Cultural evolutionary theory

Financial / Economic Determinism

There are incentives which necessitate certain consequences. If it’s profitable, it will be built. From Scott’s Meditations on Moloch:

Just as you can look at an arid terrain and determine what shape a river will one day take by assuming water will obey gravity, so you can look at a civilization and determine what shape its institutions will one day take by assuming people will obey incentives… Just as the course of a river is latent in a terrain even before the first rain falls on it – so the existence of Caesar’s Palace was latent in neurobiology, economics, and regulatory regimes even before it existed. The entrepreneur who built it was just filling in the ghostly lines with real concrete.

More specific arguments can apply in local contexts. For example:

  • There are different dating apps.
  • Ironically, the ones that fail to create stable matchings will see more repeat use and popularity.
  • Thus, the dominant dating app will inevitably be designed to alienate.

More broadly, organizations may promote certain values, scientific institutions may promote the construction of certain types of knowledge, and platforms may promote certain kinds of content.

See Also
Wikipedia – Base and superstructure
Wikipedia – Economic Determinism
Applied Divinity Studies – How Substack Became Milquetoast

Game Theoretic Determinism

A perspective frequently utilized in very long-term forecasting, and in reasoning about alien civilizations or superintelligent agents. For example, an argument might take the form:

  • Some civilizations will colonize the universe
  • Thus, the far future universe will be populated mainly with agents who are pro-expansion and pro-progress

Or more mundanely:

  • Some religious groups on earth promote high-fertility rates
  • Thus, the near future will be populated mainly with agents who are pro-population growth

There are more nuanced and complex arguments of this general shape. In Liu Cixin’s Three Body Problem Trilogy:

  • Civilizations can’t trust each other
  • Technological progress can proceed exponentially, such that observing that a civilization is harmless from thousands of lightyears away is no guarantee that they’ll be harmless by the time you arrive
  • Thus, any civilization that learns about the existence of another civilization will immediately act forcefully to destroy it

See also:
Scott Alexander – The Hour I First Believed
Philip Trammell – Which World Gets Saved
Ben West – An Argument for Why the Future May Be Good

Physical Determinism

There are particles in the universe (including those in our brains) subject to physical laws which determine their behavior.

Discussion

This is a limited typology; there are likely many other views, both reasonable and not. Note that these views are not mutually exclusive. You could say that things are determined by physics at the level of individual particles, but by memes at the level of human behavior. Or that it’s all incentives, but that those incentives are modulated by facts about human psychology.

Note as well that these views should be neither necessarily horrifying nor necessarily comforting. To take another example, evolution does provide a guarantee that agents were well-adapted to some set of circumstances, but provides no guarantee that they remain well-adapted. Or in capitalism, there is a strong likelihood that a sufficiently profitable business will be created by someone, but no guarantee that profit proxies well for human values.

Finally, these views are (perhaps with the exception of physical determinism) not really absolute. You are perhaps a slave to memes as brain-parasites, but have some control over which memes you choose to take on. Though will-power may be subjugated to habit in general, occasional breakthroughs of will-power might allow you to, for example, delete Twitter, sign up for therapy, and install a Chrome extension that blocks content recommendations.

Identifying more of these views, mapping their influence in various social spheres, and better reasoning through their consequences are all promising avenues for future work. In particular, I would love to see follow-up posts.

See Also
Katja Grace – What is going on in the world?
Yudkowsky – An Alien God
Yudkowsky – Inadequate Equilibria