1. The Paradoxes

The following is a very brief introduction to the anthropic paradoxes I will discuss. Besides the paradoxes, the concepts of the Self-Sampling Assumption, the Self-Indication Assumption, and the reference class problem in the anthropic context will also be introduced. If you are already familiar with them, feel free to skip this page.

The most exciting phrase to hear in science … is not “Eureka!” but “That’s funny …”

— Isaac Asimov, probably

Surely these paradoxes are not science, yet it is still fascinating that such seemingly simple problems have sparked so much debate over the last two decades. A quick search on philpapers.org for the Sleeping Beauty Problem or the Doomsday Argument returns hundreds of papers discussing them. Yet no consensus solution has been reached.

1. Sleeping Beauty Problem

Wikipedia Link

If you have never heard of this problem I recommend watching this video by Julia Galef. She did a great job explaining the experiment setup.

Described in words, the problem is as follows. On Sunday night Beauty takes part in the following experiment. A fair coin is tossed. If it lands on Heads, Beauty will be woken up on Monday morning only. If it lands on Tails, Beauty will be woken up on both Monday and Tuesday. In the case of Tails, just before the second awakening, Beauty’s memory of the previous day is wiped, so she cannot tell whether it is Monday or Tuesday. When Beauty wakes up during the experiment, what probability should she assign to Heads? And what should that probability be once she finds out today is Monday?

There are two major camps regarding the solution to the Sleeping Beauty Problem. Thirders argue the probability of Heads should be 1/3 when Beauty wakes up, and that it changes to 1/2 after she learns it is Monday. This is currently the answer with more supporters. Halfers, on the other hand, argue the probability should be 1/2 when Beauty wakes up. They are divided on what the probability should become after Beauty learns it is Monday: Lewisian Halfers, named after David Lewis, think it should change to 2/3, while Double Halfers believe it should remain unchanged at 1/2.
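To make the frequency reasoning behind the thirder position concrete, here is a minimal simulation sketch in Python (my own illustration, not anyone's official argument): over many repetitions of the experiment, Heads accounts for roughly one third of all awakenings, but roughly half of the Monday awakenings.

```python
import random

# A minimal sketch of the frequency argument behind the thirder position
# (my own illustration): run the experiment many times and count awakenings.

trials = 100_000
heads_awakenings = 0   # awakenings that occur under Heads
total_awakenings = 0   # all awakenings
monday_awakenings = 0  # awakenings where Beauty learns it is Monday
monday_heads = 0       # Monday awakenings that occur under Heads

for _ in range(trials):
    heads = random.random() < 0.5  # fair coin toss
    if heads:
        # Heads: a single awakening, on Monday
        total_awakenings += 1
        heads_awakenings += 1
        monday_awakenings += 1
        monday_heads += 1
    else:
        # Tails: two awakenings, Monday and Tuesday
        total_awakenings += 2
        monday_awakenings += 1

print("Heads among all awakenings:   ", heads_awakenings / total_awakenings)  # ~1/3
print("Heads among Monday awakenings:", monday_heads / monday_awakenings)     # ~1/2
```

Halfers, of course, dispute that these long-run per-awakening frequencies are the right measure of Beauty's credence in a single run of the experiment.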

The Sleeping Beauty Problem is undoubtedly the most famous anthropic paradox. Yet I would argue it is not the most typical one. The majority of anthropic paradoxes concern one agent among a group of similar observers, whereas Sleeping Beauty concerns one moment among two similar moments of the same agent. Nonetheless, according to my argument, it has the same cause as the others.

2. Doomsday Argument

Wikipedia Link

The Doomsday Argument claims we should have a more pessimistic outlook on the future of the human species once our birth rank is taken into consideration. This is because, all else equal, I am more likely to have my actual birth rank if the total number of humans is small. For example, consider a simplified case with two predictions: under Scenario A the total number of humans before extinction is 200 Billion, and under Scenario B it is 200 Trillion. By the principle of indifference, the prior probability distribution of my birth rank under Scenario A would be uniform between 1 and 200 Billion; similarly, under Scenario B it would be uniform between 1 and 200 Trillion. My actual birth rank being around 100 Billion, a case immensely more probable under Scenario A, is then evidence suggesting doom-soon is more likely to be true.
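To make the update explicit, here is a short calculation sketch (my own illustration, assuming equal prior credence in the two scenarios and a uniform birth rank under each):

```python
# A worked Bayesian update for the simplified Doomsday example above.
# Assumptions (mine, for illustration): equal priors for the two scenarios
# and a uniform distribution of birth rank under each.

prior_A = prior_B = 0.5
N_A = 200e9      # total humans under Scenario A (200 Billion)
N_B = 200e12     # total humans under Scenario B (200 Trillion)
my_rank = 100e9  # my actual birth rank, roughly 100 Billion

# Likelihood of having exactly this birth rank under each scenario
like_A = 1 / N_A
like_B = 1 / N_B

posterior_A = like_A * prior_A / (like_A * prior_A + like_B * prior_B)
print(posterior_A)  # ~0.999: doom-soon (Scenario A) becomes overwhelmingly favored
```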

I recommend checking out Nick Bostrom’s page explaining it. Nick Bostrom is, in my opinion, the most knowledgeable person on anthropic reasoning, even though I ultimately disagree with many of his arguments. You will find me frequently referencing his work on this website.

The Doomsday Argument relies on the Self-Sampling Assumption (SSA) (wiki link), which states that, all else equal, one should reason as if they are randomly chosen among all actually existing observers (past, present, or future) in their reference class. A competing school of thought is the Self-Indication Assumption (SIA). It states that, all else equal, one should reason as if they are randomly chosen among all possible observers. SIA resolves the Doomsday Argument because the effect of my early birth rank is exactly offset by the fact that I exist at all (wiki link).
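Continuing the same toy example, here is a sketch of how that offset works (again my own illustration, not a full treatment of either assumption): SIA weights each scenario by the number of observers it contains, which exactly cancels the 1/N likelihood of having any particular birth rank.

```python
# How SIA neutralizes the Doomsday update in the toy example above.
# Under SIA the prior is weighted by the number of observers in each scenario,
# which cancels the 1/N likelihood of having any particular birth rank.

N_A, N_B = 200e9, 200e12

# SIA-weighted priors (proportional to observer counts, starting from equal credence)
prior_A = N_A / (N_A + N_B)
prior_B = N_B / (N_A + N_B)

# Likelihood of my particular birth rank under each scenario
like_A, like_B = 1 / N_A, 1 / N_B

posterior_A = like_A * prior_A / (like_A * prior_A + like_B * prior_B)
print(posterior_A)  # 0.5: the birth-rank evidence no longer favors doom-soon
```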

What is the appropriate reference class for oneself? Should it include potentially existing observers? What counts as an observer? These questions do not seem to have obvious answers. They form a major part of the anthropic debate dubbed “the reference class problem”.

Some have suggested that SSA depends on the choice of reference class while SIA does not. This is actually incorrect. SIA only appears reference-class independent because, for many problems, the effect of the reference class cancels out in the calculation, as in the case of the Doomsday Argument. There are also problems where the reference class does affect SIA’s conclusions.

3. Presumptuous Philosopher

This is a thought experiment by Nick Bostrom, presented as a reductio ad absurdum against SIA. His original paper can be found here. In my opinion, this is the most elegant counter to SIA. The setup is as follows:

“It is the year 2100 and physicists have narrowed down the search for a theory of everything to only two remaining plausible candidate theories: T1 and T2 (using considerations from super-duper symmetry). According to T1, the world is very, very big but finite and there are a total of a trillion trillion observers in the cosmos. According to T2, the world is very, very, very big but finite and there are a trillion trillion trillion observers. The super-duper symmetry considerations are indifferent between these two theories. Physicists are preparing a simple experiment that will falsify one of the theories. Enter the presumptuous philosopher: ‘Hey guys, it is completely unnecessary for you to do the experiment, because I can already show you that T2 is about a trillion times more likely to be true than T1!’”

The rationale behind the philosopher’s argument is, of course, SIA: because we exist, we should favor theories that predict more observers like us. Most would question the validity of this argument, which means they should be doubtful about SIA as well.
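To see where the philosopher's "trillion times" figure comes from, here is the same SIA-style bookkeeping applied to T1 and T2 (a sketch in my own notation; the observer counts are the ones from the quoted setup):

```python
# Where the presumptuous philosopher's "trillion times" figure comes from.
# SIA weights each theory by the number of observers it predicts.

N_T1 = 1e24  # a trillion trillion observers under T1
N_T2 = 1e36  # a trillion trillion trillion observers under T2

# Super-duper symmetry is indifferent, so start from equal priors
prior_T1 = prior_T2 = 0.5

# SIA-weighted (unnormalized) credences
w_T1 = prior_T1 * N_T1
w_T2 = prior_T2 * N_T2

print(w_T2 / w_T1)  # 1e12: T2 comes out about a trillion times more likely than T1
```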

4. Simulation Argument

Here I refer to Nick Bostrom’s Simulation Argument specifically. His original paper can be found here. Under some assumptions, it argues that at least one of the following three propositions is true:

  1. “The fraction of human-level civilizations that reach a posthuman stage (that is, one capable of running high-fidelity ancestor simulations) is very close to zero”, or
  2. “The fraction of posthuman civilizations that are interested in running simulations of their evolutionary history, or variations thereof, is very close to zero”, or
  3. “The fraction of all people with our kind of experiences that are living in a simulation is very close to one”

It further suggests that if proposition 3 is true, then by an indifference principle we should conclude we are almost certainly living in an ancestor simulation, i.e. the probability that we are living in a simulation is extremely close to 1.
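The trilemma is essentially a bit of bookkeeping about fractions. Below is a rough sketch in my own shorthand (not necessarily the notation used in Bostrom's paper) of why the three propositions exhaust the possibilities:

```python
# A rough sketch of the bookkeeping behind the trilemma (my own shorthand,
# not necessarily the notation used in Bostrom's paper).
# f_p : fraction of human-level civilizations that reach a posthuman stage
# n   : average number of ancestor-simulations run by a posthuman civilization
# h   : average number of people living before a civilization reaches that stage

def fraction_simulated(f_p, n, h):
    simulated = f_p * n * h  # people with our kind of experiences who are simulated
    unsimulated = h          # people living in the one non-simulated history
    return simulated / (simulated + unsimulated)

# Unless f_p is near zero (proposition 1) or n is near zero (proposition 2),
# the fraction of people like us who are simulated is close to one (proposition 3).
print(fraction_simulated(f_p=0.01, n=1000, h=1e11))  # ~0.91 even with modest numbers
```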

5. Dr. Evil and Dub

This is an interesting problem due to Adam Elga. In this paper he presented the following thought experiment:

“Dr. Evil is in his impregnable moon fortress preparing to launch a bomb to destroy Earth. He receives a message from the Philosophy Defense Force (PDF). The PDF claims they have created a duplicate of Dr. Evil (Dub). Dub is in a controlled environment that makes his subjective experience indistinguishable from Dr. Evil’s. For example, right now both Dr. Evil and Dub are reading this message. The PDF claims that if Dub does not surrender immediately, he will be tortured. How should Dr. Evil respond to this?”

The conclusion in the paper is that Dr. Evil should assign equal probabilities to being Dr. Evil and being Dub, and therefore surrender. Elga also stated that he is not entirely satisfied with this conclusion, because it means Dr. Evil could have prevented the situation by creating hundreds of brains in vats with subjective states matching his own, each of which would be tortured if it surrenders. Of course, his enemy could then create thousands of brains in vats that will be tortured if they do not surrender, but Dr. Evil could create millions, and so on. It seems strange that the fate of Earth should depend on this kind of numbers game.

6. Other Paradoxes

There are other interesting problems besides these. It would be a monumental task to make a comprehensive list of all anthropic paradoxes. However, most of them are closely related to one or more of the problems mentioned above, so I will only specifically discuss the solutions to the above five paradoxes. It should be noted that my argument is general; it works for other anthropic paradoxes as well.

Next I will explain my core argument.