The Epistemological Mystique of Self-Locating Belief
Nick Bostrom
Department of Philosophy, Logic and Scientific Method
London School of Economics
Email: n.bostrom@lse.ac.uk
1.
Consider the following thought experiment:
Hundred Cubicles
Imagine a world that consists of one hundred cubicles. In each cubicle there is one person. Ninety of the cubicles are painted blue on the outside and the other ten red. Each person is asked to guess whether she is in a blue or a red cubicle. (And everybody knows all this.) Suppose you find yourself in one of these cubicles. What color should you think it has? Answer: Blue, with 90% probability.
Since 90% of all people are in blue cubicles, and as you don't have any other relevant information, it seems you should set your credence of being in a blue cubicle to 90%. Most people I have talked to agree that this is the correct answer. Since the example does not depend on the exact numbers involved, we have the more general principle that in cases like this, your credence of having property P should be equal to the fraction of observers who have P. You reason as if you were a randomly selected sample from the set of observers. I call this the Self-Sampling Assumption:
(SSA) Every observer should reason as if she were a random sample drawn from the set of all observers.
While many accept that SSA is applicable to Hundred Cubicles without further argument, let's very briefly consider how one might seek to defend it if challenged.
One argument one can advance is the following. Suppose everyone accepts SSA and everyone has to bet on whether they are in a blue or a red cubicle. Then 90% of all persons will win their bets and 10% will lose. Suppose, on the other hand, that SSA is rejected and people think that one is no more likely to be in a blue cubicle than in a red one; so they bet by flipping a coin. Then, on average, 50% of the people will win and 50% will lose. It seems better to accept SSA.
This argument is incomplete as it stands. That one pattern A of betting leads more people to win their bets than another pattern B does not imply that it is rational for anybody to bet in accordance with A rather than B. In Hundred Cubicles, consider the betting pattern A which specifies "If you are Harry Smith, bet that you are in a red cubicle; if you are Helena Singh, bet that you are in a blue cubicle", and so on, so that for each person in the experiment it gives the advice that will lead him or her to be right. Adopting rule A will lead to more people winning their bets (100%) than any other rule. In particular, it outperforms SSA, which has a mere 90% success rate.
Intuitively, it is clear that rules like A are cheating. This can be seen if we put A in the context of its rival permutations A′, A′′, A′′′, etc., which map the participants to recommendations about betting red or blue in other ways than A does. Most of these permutations will do rather badly, and on average they will give no better advice than flipping a coin, which we saw was inferior to accepting SSA. Only if the people in the cubicles could pick the right A-permutation would they benefit. In Hundred Cubicles they don't have any information that allows them to do this. If they picked A and consequently benefited, it would be pure luck.
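The betting argument can be illustrated with a small simulation. This is an illustrative sketch and not part of the original argument; the function name, trial count, and seed are arbitrary choices. It compares the SSA rule (always bet blue, the majority color) against betting by a fair coin flip:

```python
import random

def simulate_bets(trials=20_000, n_cubicles=100, n_blue=90, seed=0):
    """Monte Carlo sketch of the betting argument (illustrative numbers).

    Each trial places one observer uniformly at random among the
    cubicles. The SSA bettor always bets blue (the majority color);
    the rival bettor bets by flipping a fair coin.
    Returns the two empirical win rates.
    """
    rng = random.Random(seed)
    ssa_wins = coin_wins = 0
    for _ in range(trials):
        in_blue = rng.randrange(n_cubicles) < n_blue
        # SSA bettor: always bet blue.
        if in_blue:
            ssa_wins += 1
        # Coin-flip bettor: bet blue with probability 1/2.
        bet_blue = rng.random() < 0.5
        if bet_blue == in_blue:
            coin_wins += 1
    return ssa_wins / trials, coin_wins / trials
```

As expected, the SSA rule wins about 90% of the time and the coin-flip rule about 50%, mirroring the frequencies given in the text.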
2.
In Hundred Cubicles, the number of observers in existence was known. Let's now consider a variation where the total number of observers differs depending on which hypothesis under investigation is true.
God's Coin Toss
Stage (a): God first creates one hundred cubicles. Each cubicle has a unique number painted on its outside (which can't be seen from the inside); the numbers are the integers from 1 to 100. God creates one observer in cubicle #1. Then God tosses a fair coin. If the coin falls tails, He does nothing more. If the coin falls heads, He creates one observer in each of cubicles #2 through #100. Apart from this, the world is empty. It is now a time well after the coin has been tossed and any resulting observers have been created. Everyone knows all the above.
Stage (b): A little later, you have just stepped out of your cubicle and discovered that it is #1.
Question: What should your credence of the coin having fallen tails be at stages (a) and (b)?
3.
We shall look at three different models for how you should reason, each giving a different answer to this question. These three models seem to exhaust the range of solutions that have any degree of prima facie plausibility.
Model 1
At stage (a) you should set your credence of the coin having landed heads equal to 50%, since you know it has been a fair toss. Now, consider the conditional credence you should assign at stage (a) to being in a certain cubicle given a certain outcome of the coin toss. For example, the conditional probability of being in cubicle #1 given that the coin fell tails is 1, since that is the only cubicle you can be in if that happened. And by applying SSA to this situation, we get that the conditional probability of being in cubicle #1 given Heads is 1/100. Plugging this into Bayes' formula, we get:
Pr(Tails | I am in cubicle #1)
  = Pr(I am in cubicle #1 | Tails) Pr(Tails) / [Pr(I am in cubicle #1 | Tails) Pr(Tails) + Pr(I am in cubicle #1 | Heads) Pr(Heads)]
  = (1)(1/2) / [(1)(1/2) + (1/100)(1/2)]
  = 100/101
Therefore, upon learning that you are in cubicle #1, you should become almost certain (probability = 100/101) that the coin fell tails.
Answer: At stage (a) your credence of Tails should be 1/2, and at stage (b) it should be 100/101.
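The Bayesian calculation behind Model 1 can be checked in a few lines of code. This is an illustrative sketch (the function and parameter names are my own, not from the paper); it uses exact fractions so the result is not clouded by rounding:

```python
from fractions import Fraction

def posterior_tails(prior_tails=Fraction(1, 2), n_cubicles=100):
    """Model 1's Bayesian update on learning 'I am in cubicle #1'.

    Conditional credences as in the text:
      Pr(#1 | Tails) = 1   (on Tails, only one observer exists),
      Pr(#1 | Heads) = 1/100  (by SSA over the hundred observers).
    """
    prior_heads = 1 - prior_tails
    like_tails = Fraction(1)
    like_heads = Fraction(1, n_cubicles)
    numerator = like_tails * prior_tails
    return numerator / (numerator + like_heads * prior_heads)
```

With the fair-coin prior of 1/2, the function returns exactly 100/101, matching the formula above.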
Model 2
Since you know the coin toss to have been fair, and you haven't got any other information that is relevant to the issue, your credence of Tails at stage (b) should be 1/2. Since we know the conditional credences (the same as in Model 1), we can infer what your credence of Tails should be at stage (a). This can be done through a simple calculation using Bayes' theorem, and the result is that your prior credence of Tails must equal 1/101.
Answer: At stage (a) your credence of Tails should be 1/101, and at stage (b) it should be 1/2.
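Model 2's prior of 1/101 can likewise be verified by plugging it back into Bayes' theorem, using the same conditional credences as Model 1. A minimal sketch (function names are illustrative):

```python
from fractions import Fraction

def posterior(prior_tails, n_cubicles=100):
    """Posterior credence of Tails given 'I am in cubicle #1', with the
    same conditional credences as Model 1:
    Pr(#1 | Tails) = 1 and Pr(#1 | Heads) = 1/n_cubicles."""
    return prior_tails / (prior_tails
                          + Fraction(1, n_cubicles) * (1 - prior_tails))

# Model 2 requires the posterior at stage (b) to be 1/2; a prior of
# 1/101 is exactly the value that Bayes' theorem then forces at stage (a).
required_prior = Fraction(1, 101)
```

Plugging in the required prior: posterior(Fraction(1, 101)) evaluates to exactly 1/2, confirming the calculation reported in the text.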
Model 3
In neither stage (a) nor stage (b) do you have any relevant information as to how the coin (which you know to be fair) landed. Thus in both instances, your credence of Tails should be 1/2.
Answer: At stage (a) your credence of Tails should be 1/2, and at stage (b) it should be 1/2.
4.
Which of these models should one use?
Definitely not Model 3, for it is incoherent. It is easy to see (by inspecting Bayes' theorem) that if we want to end up with a posterior probability of Tails of 1/2, and both Heads and Tails have a 50% prior probability, then the conditional probability of being in cubicle #1 must be the same given Tails as it is given Heads. But at stage (a) you know with certainty that if the coin fell tails then you are in cubicle #1; so this conditional probability has to equal 1. In order for Model 3 to be coherent, you would therefore have to set your conditional probability of being in cubicle #1 given Heads equal to 1 as well. That means you would already know with certainty at stage (a) that you are in cubicle #1, which is simply not the case. Hence Model 3 is wrong.
Model 1 and Model 2 are both acceptable so far as probabilistic coherence goes. Choosing between them is therefore a matter of selecting the most plausible or intuitive credence function. Intuitively, it may seem as if the credence of Tails should be 1/2 at both stage (a) and stage (b), but as we have just seen, that is incoherent. (In passing, we may note that as a fourth alternative we could define a model that is a mixture of Model 1 and Model 2. But that seems to be the least attractive of all coherent alternatives: it would force us to sacrifice both intuitions and admit that at neither stage should the credence of Tails be 1/2. Then all the counterintuitive consequences discussed below would obtain in some form.)
5.
Consider what's involved in Model 2. It says that at stage (a) you should assign a credence of 1/101 to the coin having landed tails. That is, just knowing about the setup but having no direct evidence about the outcome of the toss, you should be virtually certain that the coin fell in such a way as to create ninety-nine additional observers. This amounts to having an a priori bias towards the world containing many observers. Modifying the gedanken experiment by using different numbers, it can be shown that in order for the probabilities always to work out the way Model 2 requires, you would have to subscribe to the principle that, other things equal, a hypothesis which implies that there are 2N observers should be assigned twice the credence of a hypothesis which implies that there are only N observers. I call this the Self-Indication Assumption (SIA). As an illustration of what accepting SIA commits you to, consider the following example, which seems to be closely analogous to God's Coin Toss:
The presumptuous philosopher
It is the year 2100 and physicists have narrowed down the search for a theory of everything to only two remaining plausible candidate theories, T1 and T2 (using considerations from super-duper symmetry). According to T1 the world is very, very big but finite, and there are a total of a trillion trillion observers in the cosmos. According to T2, the world is very, very, very big but finite, and there are a trillion trillion trillion observers. The super-duper symmetry considerations seem to be roughly indifferent between these two theories. The physicists are planning on carrying out a simple experiment that will falsify one of the theories. Enter the presumptuous philosopher: "Hey guys, it is completely unnecessary for you to do the experiment, because I can already show you that T2 is about a trillion times more likely to be true than T1!" (whereupon the philosopher runs the God's Coin Toss thought experiment and explains Model 2).
Somehow one suspects the Nobel Prize committee would be a bit hesitant about awarding the philosopher the big one for this contribution. But it is hard to see what the relevant difference is between this case and God's Coin Toss. If there is no relevant difference, and we are not prepared to accept the argument of the presumptuous philosopher, then we are not justified in using Model 2 in God's Coin Toss either.
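For concreteness, the arithmetic behind the presumptuous philosopher's claim can be sketched as follows. Under SIA, each theory's credence is weighted by the number of observers it implies, so with roughly equal priors the odds in favor of T2 come out to the ratio of the observer counts, about a trillion to one. (The helper function below is illustrative, not an established formalism.)

```python
from fractions import Fraction

def sia_odds(n1, n2, prior_odds=Fraction(1, 1)):
    """Posterior odds of T2 over T1 under SIA: the prior odds are
    multiplied by the ratio of observer counts the theories imply."""
    return prior_odds * Fraction(n2, n1)

# Observer counts from the example: T1 implies a trillion trillion
# (10^24) observers, T2 a trillion trillion trillion (10^36).
odds_for_t2 = sia_odds(10**24, 10**36)
```

With equal priors, the odds are 10^36 / 10^24 = 10^12 to one in favor of T2: the "about a trillion times more likely" figure the philosopher cites.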
6.
Which leaves us with Model 1. In this model, after finding that you are in cubicle #1, you should set your credence of Tails equal to 100/101. In other words, you should be almost certain that the world does not contain the extra ninety-nine observers. This might be the least unacceptable of the alternatives and therefore the one we ought to go for. Before uncorking the champagne, however, consider what choosing this option appears to entail:
What the snake said to Eve
Eve and Adam, the first two persons, knew that if they indulged their flesh, Eve might bear a child, and that if she did, they would be driven out from Eden and would go on to spawn billions of progeny that would fill the Earth with misery. One day a snake approached Eve and spoke thus: "Pssst! If you embrace Adam, then either you will have a child or you won't. If you have a child, then you will have been among the first two out of billions of people. The conditional probability of having such an early position in the human species given this hypothesis is extremely small. If, on the other hand, you don't become pregnant, then the conditional probability, given this, of being among the first two humans is equal to one. By Bayes' theorem, the risk that you will have a child is less than one in a billion. So indulge, and worry not about the consequences!"
Let's study the differences between God's Coin Toss and the Eve-example to see if any of them is relevant, i.e. such that we think it should make a difference to our credence assignments.
In the God's Coin Toss experiment there was a point in time, stage (a), when the subject was actually ignorant about what her position was in the set of observers, while Eve presumably knew all along that she was the first woman. But it is not clear why that should matter. We can imagine that Adam and Eve begin their lives inside a cubicle and only after some time discover that they are the first humans. It still seems counterintuitive to say that Eve shouldn't worry about getting pregnant.
When the subject is making the inference in God's Coin Toss, the coin has already been tossed. In the case of Eve, the relevant chance event has not yet taken place. But this difference does not seem crucial either. In any case, we can suppose that the deciding chance event has already taken place in the Eve-example: the couple has just had sex and they are now contemplating the implications. The worry seems to remain.
At stage (b) in God's Coin Toss, any observers resulting from the toss have already been created, whereas Eve's potential progeny does not yet exist at the time when she is assessing the odds. We can consider a variant of God's Coin Toss where the cubicles and their contents each exist in a different century. Stage (a) can now take place in the first century, and yet the credence of Tails and the conditional credence of being in a particular cubicle given Tails (or Heads) that one should assign at this stage seem to be the same as in the original version, provided one does not know what time it is. Exactly as before, Bayes' theorem then implies that the posterior credence of Tails after finding out that one is in cubicle #1 (and therefore in the first century) should be much greater than the prior credence of Tails.
In God's Coin Toss, the two hypotheses under consideration (Heads and Tails) had well-defined, known prior probabilities (50%); but Eve has to use vague subjective considerations to assess the risk of pregnancy. True, but would we want to say that if Eve's getting pregnant were determined by some distinct microscopic biological chance event that Eve knew about, she should then accept the snake's advice? If anything, that only makes the example even weirder.
7.
Unless some other difference can be found which is relevant, we have to accept that the same model should be applied to both God's Coin Toss and the Eve-example. Then we either have to accept, however reluctantly, that the snake's advice to Eve is sound (if we choose Model 1), or the arguably even more unpalatable implication that our friend the presumptuous philosopher was right and the physicists needn't bother to conduct the experiment (if we choose Model 2). Either way, there is an air of mystique.
References
1. Bostrom, N., The Doomsday Argument is Alive and Kicking. Mind, 1999. 108(431): p. 539-50.
2. Bostrom, N., Observer-relative chances in anthropic reasoning? Erkenntnis, 2000. In press.
3. Bostrom, N. & Cirkovic, M., Cosmological Constant and the Final Anthropic Hypothesis. Astrophysics and Space Science, 2000. In press.
4. Gott, R.J., Implications of the Copernican principle for our future prospects. Nature, 1993. 363 (27 May): p. 315-319.
5. Leslie, J., Observer-relative Chances and the Doomsday argument. Inquiry, 1997. 40: p. 427-36.
6. Leslie, J., The End of the World. 1996, London: Routledge.
7. Korb, K. and J. Oliver, A Refutation of the Doomsday Argument. Mind, 1998. 107: p. 403-10.
8. Dieks, D., Doomsday - Or: the Dangers of Statistics. Philosophical Quarterly, 1992. 42(166): p. 78-84.
9. Dieks, D., The Doomsday Argument, manuscript. 1999.
10. Bartha, P. and C. Hitchcock, No One Knows the Date or the Hour: An Unorthodox Application of Rev. Bayes's Theorem. Philosophy of Science, 1999. 66 (Proceedings): p. S339-S353.
11. Bartha, P. and C. Hitchcock, The shooting-room paradox and conditionalizing on measurably challenged sets. Synthese, 2000: p. 403-437.
12. Leslie, J., Fine tuning can be important. Australasian Journal of Philosophy, 1994. 72(3): p. 383.
13. Oliver, J. and K. Korb, A Bayesian analysis of the Doomsday Argument, tech-report, 1997, Department of Computer Science, Monash University.
14. Smith, Q., Anthropic explanations in cosmology. Australasian Journal of Philosophy, 1994. 72(3): p. 371-382.
I'm grateful for comments and helpful discussions with Colin Howson, Craig Callender, John Leslie, Mark Greenberg, William Eckhardt, Dennis Dieks, Joseph Berkovitz, Jacques Mallah, Adam Elga, Robin Hanson, Wei Dai, Kevin Korb, Jonathan Oliver, Milan Cirkovic, Hal Finney, and Roger White.
For further explorations of this principle, see e.g. [1-7].
God is not supposed to count as an observer here; we may imagine an automaton instead of God.
See also [8-14].