Does observing life on Earth increase the probability of life elsewhere?


























Say I have an implausibly large sack of balls. All I know is that the balls are numbered randomly from $1$ to $n$. For all I know, any value of $n$ (a positive integer) is equally likely.



I reach into the sack and choose a ball randomly. The ball says $42$. Does this change at all the probabilities of the values of $n$ used to number the balls, where $n \geq 42$?



(Intuitively it might seem like $n$ is a low number, in that if $n$ were very, very large (say $2^{42}$), it seems implausible we would hit on a very low number with the first ball sampled. On the other hand, if $n$ is a very, very large number, $42$ is as likely as any other ball to emerge.)





Another simplified version might be where the balls are either blue or red, but I don't know how many are blue or how many are red. The first ball I choose is blue. Does this increase the probability of observing further blue balls in later samples?



(Again, if there were only one blue ball, intuitively it seems unlikely we would choose it on the first sample. On the other hand, if there were only one blue ball, that ball is as likely to emerge as any other on the first sample.)





It seems to be a question that crops up a lot, for example in the argument that there's life here on Earth, so it would be an improbable fluke if there were no life elsewhere. Of course this is a more complex question than just what colour the balls are, but the thrust of the argument seems to be a probabilistic one: it boils down to the idea that we know there's one blue ball in the tiny sample we've seen, so there must be lots of blue balls in the implausibly large sack to explain that.



I'm not convinced this latter argument makes sense, but on the other hand, I don't know how to reason about the problem, or how to prove one way or the other whether seeing a blue ball early on affects the (relative) probability of the number of blue balls in the population. Hence I'm wondering, for example, if there's some sort of general theorem from probability that addresses this?
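(To make the intuition concrete, here is the kind of quick simulation I have in mind for the numbered-balls case, with the caveat that I have to assume some arbitrary finite cap $N$ on $n$, since "all positive integers equally likely" is not a well-defined prior on its own:)

```python
import random

# Sketch under an assumed finite cap N: draw n uniformly from 1..N,
# draw one ball uniformly from 1..n, and keep only the runs where
# the ball shows 42.
random.seed(1)
N = 200
trials = 2_000_000
counts = {}
for _ in range(trials):
    n = random.randint(1, N)
    ball = random.randint(1, n)
    if ball == 42:
        counts[n] = counts.get(n, 0) + 1

# Among the surviving runs, smaller values of n (still >= 42) show up
# more often, roughly in proportion to 1/n, but only weakly so.
print(counts.get(42, 0), counts.get(100, 0), counts.get(200, 0))
```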



























  • You should look up the German tank problem.
    – Arthur, Nov 19 '16 at 14:49










  • Okay, so using frequentist inference, assuming all the balls are numbered uniquely (without replacement), in the case of $42$ the estimate would be $81$ (essentially $42$ sits in the middle of the range). With Bayesian inference, I got $0$ for every value of $n$ (since it includes $k-1$ as a factor, where $k$ is the sample size, which in this case is $1$). But this is definitely interesting in terms of interpretations of probability, though I still find those estimates somewhat arbitrary, or am still struggling to wrap my head around the core assumptions that differ in the two cases.
    – badroit, Nov 19 '16 at 15:10








  • Note that the case of life on Earth is very much unlike your other examples. The only way to observe the existence of life is by being a life form, which implies being on a planet that bears life. So no matter how few planets have life, and even if there were just one single planet in the whole universe that has life, the probability that any sentient life form in the universe discovers that it is on a planet bearing life is exactly one. In other words, we have a selection bias about the planet Earth, because if Earth did not bear life, we wouldn't be here to observe it.
    – celtschk, Nov 24 '16 at 22:52










  • That's a good point, @celtschk! I do have that in mind, but this is the sort of question that must be tackled from multiple angles. To start with, even assuming the simple case that our observation of life is "random" ... I'd like to understand what that case means first, or how to reason about it.
    – badroit, Nov 24 '16 at 22:54










  • Here are some possible reasons why users downvoted: you ask three very different questions in one post, and it is hard to understand what your point is exactly. I didn't downvote myself; I would rather cast a close vote for these reasons.
    – zhoraster, Nov 30 '16 at 9:27


















probability-theory

asked Nov 19 '16 at 14:43 by badroit
4 Answers
































This is a really interesting question. I suggest the following approach using Bayes' theorem.

Suppose there exist $n$ planets in total.

Define $E_r$ = the event that there are exactly $r$ planets with life (blue planets). You can easily check that these events are mutually exclusive and exhaustive. Let

$A$ = the event of observing one blue planet.

We shall calculate $P(E_r \mid A) = \frac{P(E_r)\,P(A \mid E_r)}{\sum_i P(E_i)\,P(A \mid E_i)}$.

Assuming that the creator painted the planets randomly (each planet independently blue or red with probability $\frac{1}{2}$), what is the probability that $r$ of them are blue?

Clearly it's $P(E_r) = \frac{\binom{n}{r}}{2^n}$.

Also $P(A \mid E_r) = \frac{r}{n}$.

Substituting, we have

$$P(E_r \mid A) = \frac{(n-1)!}{(r-1)!\,(n-r)!\,2^{n-1}} = \binom{n-1}{r-1}\frac{1}{2^{n-1}}.$$

Suppose that $n$ is comparatively small, say about a million. Note how negligibly small the probability of observing only one blue planet ($r=1$) becomes.
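As a sanity check on the posterior above (under the same coin-flip colouring assumption; $n = 20$ here is an arbitrary choice for speed), the posterior can be computed exactly:

```python
from math import comb

# Exact check of the posterior above: with each of n planets independently
# blue with probability 1/2, and one uniformly chosen planet observed to
# be blue, the posterior should be P(E_r | A) = C(n-1, r-1) / 2^(n-1).
n = 20
prior = [comb(n, r) / 2**n for r in range(n + 1)]   # P(E_r)
like = [r / n for r in range(n + 1)]                # P(A | E_r)
joint = [p * q for p, q in zip(prior, like)]
total = sum(joint)                                  # equals 1/2
post = [j / total for j in joint]

for r in range(1, n + 1):
    assert abs(post[r] - comb(n - 1, r - 1) / 2**(n - 1)) < 1e-12

# Both extremes (r = 1 and r = n) are equally, and exponentially, unlikely:
print(post[1], post[n])
```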




























  • Another really interesting observation is that the probability of observing $r = n$ blue planets is also negligible. That is, you should also not expect almost all planets to have life.
    – Lelouch, Nov 19 '16 at 15:47










  • Thanks! +1 for the example using Bayes' theorem, which was helpful for me. However, I think that the small probability of there being only one blue planet (or all blue planets) in your model arises simply from your assumption that the planets are coloured randomly. This is not something I mentioned in the example with the colours: in fact, this, for me, is precisely the unknown in the original example. (In the case of the numbers example, they are randomly assigned, but in that analogy we could consider, for example, mapping all numbers less than $42$ to blue planets with life.)
    – badroit, Nov 19 '16 at 16:31










  • In reality we can only observe a very small part of the universe, so it is only reasonable to assume that God does play dice where we cannot observe. Physically speaking, assuming our position to be non-special, the random colouring seems justified, but you can tweak this approach any way you like. I just gave a plausible idea. Cheers!
    – Lelouch, Nov 19 '16 at 16:35



























Let's look at your first problem, the one with the numbered balls.



Well, one problem with this problem is that there is no uniform distribution over all natural numbers. However, we can consider the case where $n$ is uniformly distributed in the range $1$ to $N$, and see if we can make statements when $N$ goes to infinity.

So let's assume that we have a sack with $1 \le n \le N$ numbered balls, and each value of $n$ in the range is initially equally likely; that is, we have a uniform prior for $n$. Now we draw at random (that is, again with uniform probability) a single ball from the sack, and get $42$. The question is, what is the probability distribution for $n$ after drawing that ball?

According to Bayes' theorem, we have
$$P(n=n_0\mid\text{42 drawn}) =
\frac{P(n=n_0)\,P(\text{42 drawn}\mid n=n_0)}{\sum_k P(n=k)\,P(\text{42 drawn}\mid n=k)}$$
Now $P(n=k) = \frac{1}{N}$ and
$$P(\text{42 drawn}\mid n=k)=\begin{cases}
\frac{1}{k} & k\ge 42\\
0 & k<42
\end{cases}$$
Therefore for $n_0\ge 42$ we have
$$P(n=n_0\mid\text{42 drawn}) = \frac{1}{n_0\sum_{k=42}^N\frac{1}{k}}$$
Note that the sum in the denominator is independent of $n_0$ and basically just gives the normalization constant, so that the probabilities add up to $1$. Therefore the relevant information is:
$$P(n=n_0\mid\text{42 drawn}) \propto \frac{1}{n_0}$$
Therefore small values of $n_0$ (with the restriction $n_0\ge 42$, of course) are indeed favoured, but only very weakly; in particular, the probabilities still go to zero as $N\to\infty$.

Let's calculate the expectation value of $n$:
$$\langle n\rangle = \sum_{n_0=1}^N n_0\,P(n=n_0\mid\text{42 drawn}) = \frac{N-41}{\sum_{k=42}^N\frac{1}{k}}$$
Since the numerator grows linearly while the denominator grows only logarithmically, this diverges for $N\to\infty$. The information we get from the single ball is therefore not sufficient to cut the expectation value down to a finite value, although it grows more slowly with $N$ than under the prior, where it grows linearly with $N$.

Note that if we draw a second ball, the probabilities should be $\sim\frac{1}{k^2}$, which gives a convergent series. Therefore drawing two balls should be sufficient to force a finite probability even in the limit $N\to\infty$, and therefore probably also a finite expectation value (but at the moment I'm too lazy to calculate that, especially given that it is already far past midnight and I should go to bed).
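The finite-$N$ posterior and expectation above are easy to verify numerically (a quick sketch; `posterior_mean` is just a helper name of my own):

```python
# Numeric check of the posterior with a finite cap N:
# P(n = n0 | 42 drawn) = 1 / (n0 * H) with H = sum_{k=42}^{N} 1/k,
# and expectation <n> = (N - 41) / H.
def posterior_mean(N):
    H = sum(1.0 / k for k in range(42, N + 1))
    post = [1.0 / (n0 * H) for n0 in range(42, N + 1)]
    assert abs(sum(post) - 1.0) < 1e-9            # posterior is normalised
    mean = sum(n0 * p for n0, p in zip(range(42, N + 1), post))
    assert abs(mean - (N - 41) / H) < 1e-6 * mean  # matches (N-41)/H
    return mean

# The expectation keeps growing with N (linear numerator vs
# logarithmic denominator), illustrating the divergence as N -> infinity:
print([round(posterior_mean(N)) for N in (100, 1000, 10000, 100000)])
```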




























  • Thanks! This helps me understand why I was previously getting a zero from the Bayesian formula used in the context of the German tank problem ... the fact that one sample is not sufficient as $N$ approaches infinity makes a lot of sense.
    – badroit, Nov 25 '16 at 2:44
































This article is about problems of this sort. It generalizes the traditional ad hoc method, where one assumes the "Self-Sampling Assumption" (SSA: one should reason as if one were a random sample from the set of all observers in one's reference class) and the "Self-Indication Assumption" (SIA: we should take our own existence as evidence that the number of observers in our reference class is more likely to be large than small). In the case of the Doomsday argument, the SSA and SIA cancel each other out exactly, but as the article points out, invoking SSA and SIA is a rather ad hoc thing to do; it is better to simply take into account all the available information.

















































    Earlier I was trying to tackle this from an intuitive perspective so perhaps it's worth posting some ideas (as thinking out loud).



    Coming at this as someone who knows little about probability theory, the only way I can see to reason about this problem is to think intuitively about simulations and just count cases.



    Let's take the case of blue and red balls for example. Let us assume we have $X$ balls and that $B$ of those are blue. To keep this as general as possible, we don't know the value of $B$ (other than $B \geq 1$; put another way, we have no idea what "prior" probability a ball has of being blue) nor of $X$ (other than $X \geq B$).



    However, for argument's sake, let's fix some arbitrarily large value for $X$. So given $X$, we can now run a great many simulations and count how many times, for various values of $b \leq X$, we sample a blue ball on the first go. Let's say we run $s$ simulations for each value of $b$ ($s \gg X$), drawing a first ball and keeping track of how many times it's blue.



    As $s$ approaches infinity for a given value of $b$, the fraction of simulations in which the first ball sampled is blue will approach $\frac{b}{X}$. Across all values of $b$, we will have $sX$ simulations in total. The number of times a blue ball is sampled first will be roughly $\frac{sX}{2}$.



    Okay, now we assume that we know a blue ball was sampled first, and we look at our $sX$ simulations to see in what fraction of simulations that happened for various values of $b$.



    Thus, if we know that a blue ball is sampled first, then $s$ of those cases occur when $b=X$, $\frac{s(X-1)}{X}$ cases when $b=X-1$, and more generally $\frac{s(X-n)}{X}$ cases when $b=X-n$: the number of cases is proportional to $b$.



    Since we're interested in probabilities rather than counting cases, we can look at ratios. In total, we can see that in (roughly) half the cases, a blue ball is sampled first. For a given value of $b$, $\frac{1}{X}$ of the cases will have been run with that value (whether blue first or not). The fraction of all cases where that value of $b$ holds and the first ball sampled is blue will be $\frac{b}{X^2}$. So, for example, given that the first ball sampled is blue, that leaves us with $\frac{1}{2}$ of the original cases, of which a share of $\frac{1}{X}$ (relative to all cases) is explained by all balls being blue ($b=X$).



    This is not so satisfying though, since it always goes back to the value of $X$. But we can try to find a general conclusion or an "invariant": let's see, in terms of "cumulative probability", at what point $50\%$ of the cases where blue is drawn first are covered. In other words, we're looking for $\beta$ such that $\sum_{b \leq \beta} \frac{b}{X^2} = \frac{1}{4}$. We can consider $\beta$ as something of a "tipping point", meaning that, knowing blue was sampled first, the value of $b$ being below $\beta$ or above $\beta$ is equally likely (where we would expect it, for sure, to be somewhere above $\frac{X}{2}$). In fact, as $X$ approaches $\infty$, the value of $\beta$ converges to $\frac{X}{\sqrt{2}}$.



    So, for example, if $X=100000$ and we know that a blue ball is drawn first, this tells us that $P(B>70711 \mid \text{blue ball drawn first}) \approx 0.5$. The ratio is fixed at $\frac{X}{\sqrt{2}}$, so given $X=10000$, $P(B>7071 \mid \text{blue ball drawn first}) \approx 0.5$.
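That tipping point can be checked without running any simulations, since the cumulative posterior mass has a closed form (a sketch under the same uniform-prior assumption; `posterior_cdf` is just a helper name of my own):

```python
from math import sqrt

# With a uniform prior on the number of blue balls b in 1..X, the
# posterior after seeing blue first is proportional to b, so its CDF
# at beta is beta(beta+1) / (X(X+1)), using sum_{b<=beta} b = beta(beta+1)/2.
# This is equivalent to the "1/4 of all cases" condition above.
X = 100_000

def posterior_cdf(beta):
    return beta * (beta + 1) / (X * (X + 1))

beta = round(X / sqrt(2))
print(beta, posterior_cdf(beta))   # beta = 70711, CDF very close to 0.5
```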





    However, this all presumes what one would call a "uniform distribution" over the possible values that $B$ might take. That seems like the most natural assumption when no other information is given, but I rather tend to think that this distribution is just as unknown as the value of $B$, and hence really no information can be gained from knowing that a blue ball is drawn first unless one assumes something quite strong: that any value of $B$ is equally likely. Under that assumption, there's a 50/50 chance that $\sim 70.7\%$ or more of the balls are blue, by the above line of reasoning.



    What I have difficulty grappling with now, on a more philosophical level, is just how reasonable it is to assume, in the absence of further information, that any value of $B$ is equally likely. It's tempting to assume this just to make progress, but equally it seems to be something we do not know and thus cannot use.



    (Comments are very welcome.)































      4 Answers
      4







      4





      +100







This is a really interesting question. I suggest the following approach using Bayes' theorem.

Suppose there exist $n$ planets in total.

Define $E_r$ = the event that there are exactly $r$ planets with life (blue planets). You can check easily that the events are mutually exclusive and exhaustive.

Let $A$ = the event of observing one blue planet.

We shall calculate $P(E_r \mid A) = \frac{P(E_r)\,P(A \mid E_r)}{\sum_i P(E_i)\,P(A \mid E_i)}$.

Assuming that the creator painted the planets randomly, what is the probability that $r$ of them are blue?

Clearly it is $P(E_r) = \frac{\binom{n}{r}}{2^n}$.

Also, $P(A \mid E_r) = \frac{r}{n}$.

Substituting, we have

$P(E_r \mid A) = \frac{(n-1)!}{(r-1)!\,(n-r)!\,2^{n-1}}$

Suppose that $n$ is comparatively small, say about a million. Note how negligibly small the probability of observing only one blue planet ($r=1$) becomes.
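The formula is easy to check numerically; a minimal sketch, using $n=20$ rather than a million so the numbers stay readable (the unnormalized posterior $P(E_r)\,P(A \mid E_r)$ is proportional to $\binom{n}{r}\,r$):

```python
from math import comb

# Numerical sketch of the posterior P(E_r | A) for n = 20 planets.
# Unnormalized posterior weight for r blue planets: C(n, r) * r.
n = 20
weights = [comb(n, r) * r for r in range(n + 1)]
total = sum(weights)
posterior = [w / total for w in weights]

print(posterior[1])   # P(exactly one blue planet | A): tiny, about 2e-6
print(posterior[n])   # P(all planets blue | A): equally tiny
print(sum(r * p for r, p in enumerate(posterior)))  # posterior mean: (n + 1) / 2
```

Both extremes $r=1$ and $r=n$ get negligible posterior mass, and the posterior mean works out to exactly $(n+1)/2$, i.e. about half the planets blue.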






• Another really interesting observation is that the probability of observing $r = n$ blue planets is also negligible. That is, you should also not expect almost all planets to have life.
  – Lelouch, Nov 19 '16 at 15:47

• Thanks! +1 for the example using Bayes' theorem, which was helpful for me. However, I think that the small probability of there being only one blue planet (or all blue planets) in your model arises simply because of your assumption that the planets are coloured randomly. This is not something I mentioned in the example with the colours: in fact, this, for me, is precisely the unknown in the original example. (In the case of the numbers example, they are randomly assigned, but in that analogy, we could consider, for example, mapping all numbers less than 42 to blue planets with life.)
  – badroit, Nov 19 '16 at 16:31

• In reality, we can only observe a very small part of the universe; it is only reasonable to assume that God does play dice where we cannot observe. Physically speaking, assuming our position to be non-special, the random colouring seems justified, but you can tweak this approach any way you like. I just gave a plausible idea. Cheers!
  – Lelouch, Nov 19 '16 at 16:35
















answered Nov 19 '16 at 15:41 – Lelouch
      4





      +50







Let's look at your first problem, the one with the numbered balls.

One problem with this problem is that there is no uniform distribution over all natural numbers. However, we can consider the case where $n$ is uniformly distributed in the range $1$ to $N$, and see what statements we can make as $N$ goes to infinity.

So let's assume that we have a sack with $1\le n\le N$ numbered balls, and each value of $n$ in the range is initially equally likely; that is, we have a uniform prior for $n$. Now we draw at random (that is, again with uniform probability) a single ball from the sack, and get 42. The question is: what is the probability distribution for $n$ after drawing that ball?

According to Bayes' theorem, we have
$$P(n=n_0\mid\text{42 drawn}) =
\frac{P(n=n_0)\,P(\text{42 drawn}\mid n=n_0)}{\sum_k P(n=k)\,P(\text{42 drawn}\mid n=k)}$$
Now $P(n=k) = \frac{1}{N}$ and
$$P(\text{42 drawn}\mid n=k)=\begin{cases}
\frac{1}{k} & k\ge 42\\
0 & k<42
\end{cases}$$
Therefore for $n_0\ge 42$ we have
$$P(n=n_0\mid\text{42 drawn}) = \frac{1}{n_0\sum_{k=42}^N\frac{1}{k}}$$
Note that the sum in the denominator is independent of $n_0$ and basically just gives the normalization constant, so that the probabilities add up to $1$. Therefore the relevant information is:
$$P(n=n_0\mid\text{42 drawn}) \propto \frac{1}{n_0}$$
Therefore small values of $n_0$ (with the restriction $n_0\ge 42$, of course) are indeed favoured, but only very weakly; in particular, the probabilities still go to zero as $N\to\infty$.

Let's calculate the expectation value of $n$:
$$\langle n\rangle = \sum_{n_0=1}^N n_0\,P(n=n_0\mid\text{42 drawn}) = \frac{N-41}{\sum_{k=42}^N\frac{1}{k}}$$
Since the numerator grows linearly while the denominator grows only logarithmically, this diverges as $N\to\infty$. The information we get from the single ball is therefore not sufficient to cut the expectation value down to a finite value, although it grows more slowly with $N$ than under the prior, where it grows linearly with $N$.

Note that if we draw a second ball, then the probabilities should be $\sim\frac{1}{k^2}$, which gives a convergent series. Therefore drawing two balls should be sufficient to force a finite probability even in the limit $N\to\infty$, and therefore probably also a finite expectation value (but at the moment I'm too lazy to calculate that, especially given that it is already far past midnight and I should go to bed).
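The divergence of $\langle n\rangle$ is easy to see numerically; a short sketch evaluating the closed form above for growing cutoffs $N$ (the expectation grows roughly like $N/\ln N$):

```python
# Sketch: posterior expectation <n> = (N - 41) / sum_{k=42}^N 1/k
# for growing cutoffs N, illustrating the divergence as N -> infinity.
def expected_n(N):
    harmonic_tail = sum(1.0 / k for k in range(42, N + 1))
    return (N - 41) / harmonic_tail

for N in (1_000, 10_000, 100_000):
    print(N, expected_n(N))
```

Each tenfold increase in $N$ multiplies the expectation by considerably more than the previous normalization gain, so no finite limit is approached.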






• Thanks! This helps me understand why previously I was getting a zero for the Bayesian formula used in the context of the German tank problem ... the fact that one sample is not sufficient as $N$ approaches infinity makes a lot of sense.
  – badroit, Nov 25 '16 at 2:44
















answered Nov 24 '16 at 23:47 – celtschk
      2












This article is about problems of this sort. It generalizes the traditional ad hoc method where one assumes the "Self Sampling Assumption" (SSA: one should reason as if one were a random sample from the set of all observers in one's reference class) and the "Self Indication Assumption" (SIA: we should take our own existence as evidence that the number of observers in our reference class is more likely to be large than small). In the case of the Doomsday argument, the SSA and SIA cancel each other out exactly, but as the article points out, invoking SSA and SIA is a rather ad hoc thing to do; it's better to simply take into account all the available information.






answered Nov 24 '16 at 22:26 – Count Iblis
              0












              $begingroup$

              Earlier I was trying to tackle this from an intuitive perspective so perhaps it's worth posting some ideas (as thinking out loud).



              Coming at this from someone who knows little about probability/probability theory, the only way I can see to reason about this problem is to think in an intuitive way about simulations and just counting cases.



              Let's take the case of blue and red balls for example. Let us assume we have $X$ balls and that $B$ of those are blue. To keep this as general as possible, we don't know the value for $B$ (other than $Bgeq 1$; put another way, we have no idea of what "prior" probability a ball has of being blue) nor for $X$ (other than $Xgeq B$).



              However, for argument's sake, let's fix some arbitrarily large value for $X$. Given $X$, we can then run a great many simulations and count how many times, for various values of $b \leq X$, we sample a blue ball on the first go. Say we run $s$ simulations for each value of $b$ (with $s \gg X$), drawing a first ball and keeping track of how many times it's blue.



              As $s$ approaches infinity for a given value of $b$, the fraction of runs in which the first ball sampled is blue will approach $\frac{b}{X}$. Across all values of $b$, we will have $sX$ simulations in total, and the number of runs in which a blue ball is sampled first will be approximately $\frac{sX}{2}$.



              Now suppose we know that a blue ball was sampled first, and we look at our $sX$ simulations to see in what proportion of them that happened, for the various values of $b$.



              If we know that a blue ball was sampled first, then (in expectation) $s$ of those cases occur when $b=X$, $\frac{s(X-1)}{X}$ cases when $b=X-1$, and more generally $\frac{s(X-n)}{X}$ cases when $b=X-n$.



              Since we're interested in probabilities rather than raw counts, we can look at ratios. In total, a blue ball is sampled first in half the cases. For a given value of $b$, a fraction $\frac{1}{X}$ of the cases were run for that value (blue first or not), and the fraction of all cases where that value of $b$ held and the first ball sampled was blue is $\frac{b}{X^2}$. So, for example, given that the first ball sampled is blue, we are left with $\frac{1}{2}$ of the original cases, of which a fraction $\frac{2}{X}$ (that is, $\frac{1}{X}$ of the original cases) are explained by all balls being blue ($b=X$).



              This is not so satisfying though, since it always goes back to the value of $X$. But we can try to find a general conclusion or "invariant": let's see, in terms of cumulative probability, at what point 50% of the cases where blue is drawn first are covered. In other words, we're looking for $\beta$ such that $\sum_{b \leq \beta} \frac{b}{X^2} = \frac{1}{4}$. We can consider $\beta$ as something of a "tipping point": knowing that blue was sampled first, the value of $b$ being below $\beta$ or above $\beta$ is equally likely (and we would expect $\beta$ to be somewhere above $\frac{X}{2}$). In fact, as $X$ grows, the ratio $\frac{\beta}{X}$ approaches $\frac{1}{\sqrt{2}}$, i.e. $\beta \approx \frac{X}{\sqrt{2}}$.
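              To make that limit explicit (my sketch of the computation): approximating the sum by an integral for large $X$,
$$\sum_{b \leq \beta} \frac{b}{X^2} \;\approx\; \frac{\beta^2}{2X^2} \;=\; \frac{1}{4} \quad\Longrightarrow\quad \beta \;=\; \frac{X}{\sqrt{2}} \;\approx\; 0.7071\,X.$$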



              So for example, if $X=100000$ and we know that a blue ball is drawn first, this tells us that $P(B>70711 \mid \text{blue ball drawn first}) \approx 0.5$. The ratio $\frac{\beta}{X}$ is fixed at $\frac{1}{\sqrt{2}}$, so given $X=10000$, $P(B>7071 \mid \text{blue ball drawn first}) \approx 0.5$.
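              Rather than literally running $s$ simulations per value of $b$, the same counting argument can be checked with a quick Monte Carlo sketch (my own illustration, not part of the original reasoning): draw $B$ uniformly from $1..X$, draw one ball, keep only the runs where it is blue, and look at the median surviving $B$.

```python
import random

# Monte Carlo check of the counting argument above.
random.seed(0)
X = 10_000
runs = 200_000
kept = []
for _ in range(runs):
    b = random.randint(1, X)        # uniform "prior" over the blue count
    if random.random() < b / X:     # the first ball drawn turned out blue
        kept.append(b)

kept.sort()
median_b = kept[len(kept) // 2]
print(len(kept) / runs)   # ~0.5: blue comes first in about half of all runs
print(median_b / X)       # ~0.707, matching the X/sqrt(2) tipping point
```

Both printed values land close to the figures derived by counting cases, which at least confirms the arithmetic under the uniform-prior assumption.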





              However, this all presumes what one would call a "uniform distribution" over the possible values that $B$ might take. That seems like the most natural assumption when no other information is given, but I rather tend to think that this distribution is just as unknown as the value of $B$ itself, and hence really no information can be gained from knowing that a blue ball is drawn first unless one assumes something quite strong: that any value of $B$ is equally likely. Under that assumption, there's a 50/50 chance that $\sim 70.7\%$ or more of the balls are blue, by the above line of reasoning.



              What I have difficulty grappling with now – on a more philosophical level – is just how reasonable it is to assume – in the absence of further information – that any value of $B$ is equally likely. It's tempting to assume this to even start to make progress, but equally it seems to be something we do not know and thus cannot use.



              (Comments are very welcome.)






                  answered Nov 25 '16 at 3:32









                  badroit
