FAQs

This FAQ is written from the point of view of a doomsayer. My view is that the [LINK]Doomsday argument is inconclusive, for reasons I have explained elsewhere.

Does the Doomsday argument not apply to any point in human history? Should Cro-Magnon man have lived in fear of extinction? The Doomsday argument is equally valid for him. Why, then, is the human race still going strong after all these years?

The answer is: yes, early humans could have used the Doomsday argument, and they would have been misled. However, it is in the nature of probabilistic reasoning that it will fail in untypical circumstances. The earliest humans were in highly untypical circumstances, so it is not so remarkable that they would have been misled. If you count up the fraction of all people who will have been right and the fraction who will have been misled, you will see that the strategy that maximizes the fraction of people who are right is to reason in accordance with the Doomsday argument. A minority will have been misled, but they were just unlucky.

Note that the Doomsday argument doesn't say you should expect to be one of the very last humans, just that you shouldn't expect to be one of the very first. Other things equal, you should expect yourself to be among the middle 95%—after all, 95% of all humans are there, so 95% will be right if they all guess that way. You only get the prediction that doom will strike soon if you combine this argument with assumptions about future population figures. Since about 15% of all humans who were ever born are alive today, you would be a typically positioned human if our species went extinct tomorrow. And if population figures remain high, or continue to increase, then it won't be very long before you turn out to have been extraordinarily early in the human species. Under these assumptions, it could then seem that the Doomsday argument says that doom will likely strike soon. Remember, though, that you also have to take account of standard empirical factors. If you thought the risks of annihilation due to nuclear war, germ warfare, warfare based on nanotechnology, a runaway greenhouse effect, etc. were extremely small, then you could be fairly confident, even after considering the Doomsday argument, that the human species will continue to exist for a rather long time.

There is an easy way to beat the DA: if mankind is about to end soon because roughly as many people will probably live after me as before me, why not just shoot 90% of the population, so that our abominable society will live ten times longer? This reminds me of the strategy of staying in bed if a soothsayer predicts a car crash. Isn't it the case, then, that no prediction can be made, even about doomsday, since there are always countermeasures against predicted events?

We can take countermeasures, and they would lower the risk of doom. Indeed, one of the reasons why the philosopher John Leslie wrote a book about the Doomsday argument was that he hoped it could influence people to put more effort into preventing disasters such as nuclear or germ warfare, a runaway greenhouse effect, etc. The odds improve if we decide to take effective countermeasures against these and other dangers (or indeed to limit population growth).

In Richard Gott's version of the Doomsday argument (Gott discovered it independently) [1], the probability that you are in the first 10% of the human species is simply 10%, the probability that you are in the first 1% is 1%, and so on. However, this is an oversimplification. You also have to take account of the empirical prior probability, and the way to do that is by using Bayes' theorem. Failing to do that, you do indeed get the absurd conclusion that there is nothing we can do to improve the odds.
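The role of the empirical prior can be made concrete with a toy Bayesian calculation. In the sketch below, the two hypotheses, the even prior, and the population figures are all illustrative assumptions, not numbers from the argument itself:

```python
# Toy Bayesian version of the Doomsday argument.
# Two hypotheses about the total number of humans that will ever live
# (illustrative assumptions):
N_soon = 200e9    # "Doom Soon": 200 billion humans in total
N_late = 200e12   # "Doom Late": 200 trillion humans in total

# Empirical prior probabilities for the two hypotheses (assumed even here).
prior_soon, prior_late = 0.5, 0.5

rank = 100e9  # your birth rank: roughly the 100-billionth human

# Likelihood of this particular birth rank: treating yourself as a random
# sample, each rank from 1..N is equally likely under a hypothesis of N humans.
like_soon = 1 / N_soon if rank <= N_soon else 0.0
like_late = 1 / N_late if rank <= N_late else 0.0

# Bayes' theorem.
evidence = prior_soon * like_soon + prior_late * like_late
post_soon = prior_soon * like_soon / evidence
post_late = prior_late * like_late / evidence

print(f"P(Doom Soon | rank) = {post_soon:.4f}")  # → 0.9990
print(f"P(Doom Late | rank) = {post_late:.4f}")  # → 0.0010
```

Starting from even odds, a low birth rank shifts the posterior a thousandfold toward "Doom Soon"; with a different empirical prior the posterior would differ accordingly, which is why Gott's fixed percentages are an oversimplification.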

[1] Gott III, J. R. 1993. "Implications of the Copernican principle for our future prospects". Nature, vol. 363, 27 May, pp. 315-319.

I have memories of 20th century events, so I cannot have been born earlier than the 20th century.

We have to distinguish two "cannots" here. (1) Given the validity of these memories, it follows that I was in fact not born earlier than the 20th century. (2) I could not have been born earlier than the 20th century. The first reading is trivially true but harmless; the Doomsday argument needs only to deny the second, understood as a claim about how you should reason: for the purposes of the argument, you should treat yourself as if you were a random sample from the class of all humans, whenever they are born.

It is indeed problematic how and in what sense you could be said to be a random sample, and from which class you should consider yourself as having been sampled (this is "the problem of the reference class"). Still, we seem forced by arguments such as Leslie's emerald example (below) or my own amnesia chamber thought experiment (see my "Investigations into the Doomsday argument") to consider ourselves as random samples due to observer self-selection at least in some cases.

A firm plan was formed to rear humans in two batches: the first batch to be of three humans of one sex, the second of five thousand of the other sex. The plan called for rearing the first batch in one century. Many centuries later, the five thousand humans of the other sex would be reared. Imagine that you learn you’re one of the humans in question. You don’t know which centuries the plan specified, but you are aware of being female. You very reasonably conclude that the large batch was to be female, almost certainly. If adopted by every human in the experiment, the policy of betting that the large batch was of the same sex as oneself would yield only three failures and five thousand successes. ... [Y]ou mustn’t say: ‘My genes are female, so I have to observe myself to be female, no matter whether the female batch was to be small or large. Hence I can have no special reason for believing it was to be large.’ (Leslie 1996, pp. 222-23)
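Leslie's numbers can be checked directly. Here is a minimal sketch; the even prior over which batch was to be female is part of the thought experiment's setup:

```python
# Leslie's batch example: 3 humans of one sex, 5000 of the other.
small, large = 3, 5000

# Prior: you don't know which sex the large batch was to be; take it as 50/50.
prior_large_female = 0.5
prior_small_female = 0.5

# Likelihood of finding yourself female under each hypothesis, treating
# yourself as a random sample from all 5003 humans reared.
like_if_large_female = large / (small + large)   # 5000/5003
like_if_small_female = small / (small + large)   # 3/5003

evidence = (prior_large_female * like_if_large_female
            + prior_small_female * like_if_small_female)
post_large_female = prior_large_female * like_if_large_female / evidence

print(f"P(large batch female | I am female) = {post_large_female:.4f}")
# → 0.9994
```

The posterior equals 5000/5003, matching the betting-policy intuition in the quote: five thousand successes against three failures.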

An important point seems to be overlooked in all the discussion, namely that it is oversimplifying the problem to imagine just two universes—the Few and the Many.

Looking at just two hypotheses is just a convenient way of explaining the underlying principle. We bunch all the more specific possibilities together under two headings, "Doom Soon" vs. "Doom Late". (For a more detailed answer, read my reply to Korb and Oliver, who raised a similar objection in a recent paper in the journal Mind: http://www.anthropic-principle.com/preprints/alive.html.)

The two main points are:

1. Even if the objection were correct, it would not refute the Doomsday argument, only perhaps modify the conclusion somewhat. (In any case, the precise prediction you get from the Doomsday argument depends on your empirical prior. The DA can be read as saying that whatever your prior is, hypotheses according to which doom will happen relatively shortly after your birth gain posterior probability vis-a-vis hypotheses according to which doom will happen later. The DA doesn't necessarily say that doom will happen soon.)

2. My personal opinion is that the empirical prior is indeed fairly well modeled by the two-urns example. The reason is that I think that we will develop molecular nanotechnology in the next century. If this doesn't lead to a doomsday, then I think it will lead to large-scale space colonization, and after that it looks much harder to annihilate all of us. (But note that none of these assumptions are needed for the Doomsday argument to be sound.)

The simple urn model requires two populations. However, we know for sure about only one human race, so it seems that all the nice Bayesian probability calculations might not be directly usable to link the one human race to the smaller urn.

The urn example uses two urns just as a neat way of getting the prior probabilities. Instead one could assume that there is just one urn and that somebody flipped a coin to determine whether it should be filled with Few or Many balls. This would actually be a better analogy, since if there were two human civilizations in existence, one long-lasting and one short-lasting, then one should expect (a priori) to find oneself in the long-lasting one, because that is where most humans would find themselves. It can be shown (see e.g. http://www.anthropic-principle.com/preprints/alive.html) that this greater prior probability that you are in the bigger race would exactly counterbalance and cancel the probability shift that the DA says you should make when you discover that you were born early (i.e. that you have a birth rank that is compatible with your being in the small race). This would annul the DA, but it only works if we know that there are both long-lasting and short-lasting races out there, and an anthropic argument can be made against that assumption: if there were so many long-lasting races, how come we are not in one of them, with a high birth rank? For most observers would then be in such races and would have high birth ranks.
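The exact cancellation claimed here is easy to verify numerically. In the sketch below (population sizes are illustrative assumptions), both a short-lasting and a long-lasting race actually exist, and you are treated as a random sample from all observers in both:

```python
# Two races assumed to actually exist (illustrative sizes):
N_short = 200e9    # short-lasting race: 200 billion observers in total
N_long = 200e12    # long-lasting race: 200 trillion observers in total

# Prior probability of being in each race: proportional to its size,
# since most observers are in the bigger race.
total = N_short + N_long
prior_short = N_short / total
prior_long = N_long / total

rank = 100e9  # an "early" birth rank, compatible with either race

# Likelihood of any given birth rank within a race of size N: 1/N.
like_short = 1 / N_short
like_long = 1 / N_long

evidence = prior_short * like_short + prior_long * like_long
post_short = prior_short * like_short / evidence
post_long = prior_long * like_long / evidence

print(post_short, post_long)  # both ≈ 0.5: the two shifts cancel
```

The size-weighted prior boosts the long race by exactly the factor that the birth-rank likelihood boosts the short one, restoring even odds. This is the annulment described above, and it depends on both races actually existing.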

The simple urn model requires that the two populations are finite, but a race (not necessarily the human one) might have a non-vanishing probability of being open-ended in time (i.e. infinite in total number), in which case again the probability calculations might not be directly applicable.

That is true; if we take into account the possibility of infinite populations, then things get more problematic. In an infinite population, everybody would in some sense be born "infinitely early". Every birth rank would be equally improbable given that infinitely many people will exist. Cosmologist Andrei Linde (of inflation theory fame) has suggested this as a way of avoiding the conclusion of the DA. According to this suggestion, the DA will not be applicable to a hypothesis saying that we are in an infinite population.

John Leslie thinks that the DA gives us supremely strong reasons against any hypothesis implying the existence of an infinity of observers. In fact, it seems to follow from Leslie’s view that the probability should be zero. But intuitively that seems too strong: could we really rule out the possibility of an infinite population from our armchair, so to speak, without even knowing our birth rank? (For we can assume that we knew that we had some finite birth rank, and for any finite birth rank we turned out to have, the argument would apply—it would have been "infinitely improbable" that we would have had that low a birth rank if the total population were infinite.)

My opinion is that the infinite case has not yet been completely settled and that it might well represent a possible scenario (one of the more desirable ones) that is not ruled out by the DA.

Suppose that SETI [the Search for ExtraTerrestrial Intelligence] detects signals from a technological civilization, containing the information that THEIR evolution was quite fast and THEIR population is quite small, so that we find ourselves in the situation of being in the ‘urn with the larger number of balls’. Can we then predict that THEIR doom is upcoming?

There are some complications with this situation. For one, if we detected one alien civilization, it would be quite tempting to infer that there are likely quite a few other alien civilizations as well. This would affect our empirical estimates of how likely life is to develop and how long it is likely to last. But for simplicity, let’s assume we know that this is the only non-human civilization that exists.

To answer the question briefly, we would not have reason to think that their doom was impending, assuming that they were equally easy for us to detect at any given stage in their evolution. For the longer they were to last, the greater the probability that we would have detected them. If we detect them at an early stage of their civilization, that would have been more likely the shorter the time they were going to last (conditional on our detecting them at all). Putting these two considerations together, we see that they cancel each other. So that line of anthropic reasoning would not tell us how long they would last.
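The cancellation can be sketched numerically. Suppose, as an idealized assumption, that the probability of detecting a civilization at all is proportional to how long it lasts, while the chance that a detected civilization is caught at an early stage is inversely proportional to its lifespan. The joint likelihood of "detected, and detected early" is then the same for every lifespan:

```python
# Idealized model; lifespans measured in number of individuals (assumed values).
lifespans = [1e9, 1e10, 1e11, 1e12]

early_stage = 1e8  # "detected early": within the first 1e8 individuals

def joint_likelihood(L, k=early_stage, c=1e-13):
    # P(detected at all) proportional to lifespan L (c keeps it below 1);
    # P(detected at an early stage | detected) = k / L.
    p_detect = c * L
    p_early_given_detect = k / L
    return p_detect * p_early_given_detect

values = [joint_likelihood(L) for L in lifespans]
print(values)  # the same value for every lifespan: no evidence about L
```

Since L cancels out of the product, observing them early (given that we observed them at all) carries no information about how long they will last, which is the point made above.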

However, we could say that if they were to last for very long, then you would likely not have been a human (or at least not an early human) but would probably have been one of them. Since we are in fact early humans, this gives you reason to think that they will likely not last very long (i.e. not very much longer than the human species has lasted, where duration is measured by the number of individuals).

Does a really big (in numbers) race not exhaust its resources for life (whatever those may be) faster than a small one, and does it therefore have a larger probability of doom soon?

How fast we are using up resources and how that influences our risk profile is an empirical question, and it is only relevant for determining the empirical prior probability that the DA uses; the DA itself is neutral as regards the prior.

Empirically speaking, it seems much harder to extinguish the human species once it has begun to colonize the galaxy. If we survive long enough that there have existed many trillions of humans, then we will in all likelihood have begun to colonize the galaxy, and that would seem to brighten our prospects considerably. So such late humans could be more optimistic. Also notice that when they apply the DA, they get a different result than we do; for they feed in different birth ranks (their own) and so will get out a different set of predictions.

Is the ‘transformation’ of a race by evolution, e.g. the development from apes to man, also to be viewed as doom?

There are two distinct questions here, depending on whether we take "doom" in the emotional sense or in its technical sense.

In the emotional sense, an event consisting of the human species evolving into a much more advanced species would not necessarily count as a "doomsday". Indeed, many (myself included) would see such an outcome as highly desirable.

In the technical sense, "doomsday" means the point in time where no more beings exist that belong to the same reference class as you and me. Exactly how widely or how narrowly the reference class should be defined for the purposes of the DA is an unsolved problem. A narrow definition is what we should hope for, because then we might interpret the DA as showing that humans will soon evolve into a more advanced life form. On a wide definition, on the other hand, these advanced life forms would also be in the reference class, implying that there likely wouldn’t be very many of them. This would imply an impending doom in the worst sense—extinction of all intelligent life.

Suppose a one-year-old ‘whiz kid’ knows about the Doomsday argument and decides to apply it to her own life span. She reasons that she will almost certainly not live beyond her 40th birthday, although the ‘mean age expectation value’ for a one-year-old child is certainly higher than 60 years.

You have rediscovered the Baby paradox! It was first noticed by the French mathematician J-P Delahaye. I discuss it in my [LINK]reply to Korb and Oliver (at http://www.anthropic-principle.com/preprints/alive.html). The key to the solution is that a reference class that only contains time-segments of one person would be impermissibly narrow; it would have to contain time-segments of other people as well—and then the Baby argument doesn't work.