5. Explaining The Paradoxes

This section assumes the reader is familiar with Parts 2, 3, and 4. It would be difficult to follow without them.

In this section, I will explain the Doomsday Argument, the Presumptuous Philosopher, the Simulation Argument, and Dr. Evil and Dub. Compared to the Sleeping Beauty Problem, these paradoxes are more similar to one another. The indexical involved is “I”, i.e. the agent at the perspective center, and their controversies are mainly caused by the use of self-locating probability, while perspective disagreement plays a lesser role.

The Doomsday Argument

The doomsday argument hinges on the prior probability distribution of “my” birth rank among all humans. That probability, unfortunately, is a self-locating probability. As discussed in Part 3, self-locating probabilities are products of perspective inconsistency and are invalid.

For example, assume for simplicity that the total number of humans before extinction is 200 billion. According to the doomsday argument, the prior distribution of “my” birth rank among all humans would be uniform from 1 to 200 billion. Here the proposed reference class is all humans: past, present, or future. It treats the perspective center “I” as the result of a random selection among all humans. This probability is a misconception caused by mixing the first-person perspective with “objective” reasoning. Without it, the belief shift to a more pessimistic outlook after learning my actual birth rank would not happen.
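
To make the contested reasoning concrete, below is a minimal sketch (in Python) of the Bayesian update the doomsday argument performs. The second hypothetical total (200 trillion), the 50/50 prior, and the observed birth rank of 100 billion are illustrative assumptions of mine, not figures from the argument itself.

```python
# Sketch of the Bayesian update behind the doomsday argument.
# Hypothesis A ("doom soon"): 200 billion humans in total (as in the text).
# Hypothesis B ("doom late"): 200 trillion humans in total (illustrative).
# "My" birth rank is taken to be 100 billion (illustrative).
# The argument treats "I" as uniformly sampled from all humans,
# so the likelihood of any particular rank r <= N is 1/N.

def likelihood(rank, total):
    return 1.0 / total if rank <= total else 0.0

N_soon, N_late = 200e9, 200e12
prior_soon, prior_late = 0.5, 0.5   # neutral prior over the two futures
rank = 100e9                        # "my" observed birth rank

post_soon = prior_soon * likelihood(rank, N_soon)
post_late = prior_late * likelihood(rank, N_late)
total = post_soon + post_late

print(f"P(doom soon | rank) = {post_soon / total:.4f}")   # ~0.999
print(f"P(doom late | rank) = {post_late / total:.4f}")   # ~0.001
```

The entire shift toward the pessimistic hypothesis comes from treating the birth rank as a uniform random draw over the reference class; without that self-locating prior, the update never gets started.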

The Presumptuous Philosopher

Here the proposed reference class is different from the doomsday argument’s. Instead of all humans, the presumptuous philosopher suggests “I” should be treated as the result of a random selection among all potentially existing sentient observers in this universe. It still interprets the perspective center as the outcome of a sampling process.

Similar to the doomsday argument, the presumptuous philosopher’s argument also hinges on a self-locating probability: the prior probability that “I” actually exist. According to the philosopher, this probability varies depending on how many observers there actually are, which is why “my” existence can be treated as evidence favoring theories that predict more observers. Without this self-locating probability, the belief shift towards more populous theories would not happen.
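
Again for concreteness, here is a minimal sketch of the update the philosopher makes; the two observer counts and the equal prior credence are illustrative assumptions of mine.

```python
# Sketch of the presumptuous philosopher's update.
# Theory T1 predicts a small number of observers, T2 a vastly larger number
# (both counts are illustrative).  The philosopher treats "my" existence as
# more likely under theories with more observers, so each theory's posterior
# is weighted by its observer count.

N1, N2 = 1e12, 1e24        # observers predicted by T1 and T2 (illustrative)
prior1, prior2 = 0.5, 0.5  # equal credence from the physical evidence alone

# Weight each theory by the chance that "I" get to exist under it,
# taken to be proportional to its number of observers.
post1 = prior1 * N1
post2 = prior2 * N2
total = post1 + post2

print(f"P(T1 | 'I' exist) = {post1 / total:.2e}")   # ~1.0e-12
print(f"P(T2 | 'I' exist) = {post2 / total:.6f}")   # ~1.0
```

All of the weight in favor of the more populous theory comes from the assumption that “I” am a random sample from all potentially existing observers.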

It is also notable that the philosopher’s argument resembles the friend’s outsider reasoning in Part 4, whereas he should instead reason from the observer’s first-person perspective, as the subject does. This again shows the importance of not switching perspectives in reasoning.

The Simulation Argument

My objection to the simulation argument is not about the trilemma. I think that, given the outlined assumptions such as substrate-independence and sufficient computing power, the trilemma can be valid. If a typical human-like civilization can and wants to perform “ancestor simulations” during its existence, then the majority of observers with similar experiences would be simulated. This reasoning can be coherently expressed from one consistent perspective (that of an imaginary impartial observer).

The problem is in the argument’s interpretation of the trilemma. It treats the fraction of simulated observers as the probability that “I” am simulated. This requires a principle of indifference to all observers, including “I”. Yet indifference to all observers is only valid from the perspective of impartial outsiders, while “I” have to be identified from my natural first-person perspective. The two cannot be used together. The probability that “I” am simulated is just another self-locating probability.
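
For concreteness, the sketch below shows the interpretive step in question, with purely illustrative observer counts; it is exactly the move from an outsider’s fraction to a first-person credence that the objection targets.

```python
# Sketch of the final step of the simulation argument: the fraction of
# simulated observers among all observers with human-like experiences is
# read off as the probability that "I" am simulated.  The counts below are
# purely illustrative.

n_real = 1e11   # non-simulated observers with human-like experiences
n_sim  = 1e15   # simulated observers produced by ancestor simulations

fraction_simulated = n_sim / (n_sim + n_real)

# The contested move: treating an impartial-outsider fraction as a
# first-person credence about "me".
p_i_am_simulated = fraction_simulated
print(f"P('I' am simulated) = {p_i_am_simulated:.5f}")   # ~0.9999
```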

It should be noted that this counter-argument holds that the probability of “me” being simulated is a false concept; it is not merely arguing that the simulation argument has assigned the wrong value to it. This is a stronger claim than other critiques of the argument. For example, Sean Carroll and Brian Eggleston have also questioned the validity of indifference towards all observers (including us). However, they did not object to the probability itself.

Dr. Evil and Dub

The problem with Dr. Evil and Dub’s logic is no different from that of the above paradoxes. Even though the two are undergoing indistinguishable subjective states, from their respective first-person perspectives “I” am still inherently different from the other copy and not in the same reference class. The probability of “me” being Dr. Evil (or Dub) is a self-locating probability. Since this probability does not exist, there is no rational way to make any decision based on it.

What is unique about this paradox is that it is a decision-making problem, so besides probabilities it also needs to consider the objective of the decision. In my opinion, the objective implicitly used by the original paper is correct: the subject should try to avoid his own torture while attempting to destroy Earth. Only his personal well-being should be considered because that is what the decision affects.
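
For comparison, here is a minimal sketch of the expected-utility calculation that a treatment accepting the self-locating probability would rely on. The 0.5 credence and the payoff numbers are illustrative assumptions of mine, not values from the original paper.

```python
# Sketch of the expected-utility calculation that a resolution accepting the
# self-locating probability would use, where "surrender" avoids torture if
# "I" turn out to be the duplicate.  The credence and payoffs are illustrative;
# the point in the text is that the credence itself is a self-locating
# probability and therefore not a valid input.

p_i_am_dub = 0.5   # the contested self-locating probability

payoffs = {
    # (action, who I actually am) -> personal payoff (illustrative numbers)
    ("continue",  "dr_evil"): +10,    # plan succeeds, Earth destroyed
    ("continue",  "dub"):     -100,   # "I" get tortured
    ("surrender", "dr_evil"): 0,      # plan abandoned
    ("surrender", "dub"):     0,      # no torture
}

def expected_utility(action):
    return ((1 - p_i_am_dub) * payoffs[(action, "dr_evil")]
            + p_i_am_dub * payoffs[(action, "dub")])

for action in ("continue", "surrender"):
    print(action, expected_utility(action))   # continue: -45, surrender: 0
```

Without a well-defined value for p_i_am_dub, this calculation cannot be carried out, which is the point of the objection above.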

However, as anthropic paradoxes do not reason from one consistent perspective, objectives other than simple selfish goals are often suggested to reflect self-locating probabilities. Common alternatives include the well-being of all members of the proposed reference class (average or total). Some even suggest entirely non-indexical objectives, i.e. they only evaluate the decision outcome as different world states by removing one’s self-importance from the picture. However, these objectives are misaligned, as they do not treat the indexical “I” as inherently unique while self-locating probabilities do. Furthermore, such objectives require additional assumptions, such as that all members of the proposed reference class would make the same decision as “I” do. For more alternative objectives and their required assumptions, I recommend reading Stuart Armstrong’s Anthropic decision theory for self-locating beliefs.