Bayes' Theorem confusion (more complex)
(a) A gambler has a fair coin and a two-headed coin in his pocket. He selects one of the coins at random; when he flips it, it shows heads. What is the probability that it is the fair coin?

(b) Suppose that he flips the same coin a second time and, again, it shows heads. Now what is the probability that it is the fair coin?

The answer to (a) is 1/3, which you need for (b); the answer to (b) is 1/5.

I learned the basics of Bayes, but I don't understand what it means to have $O_1$ and $O_2$.

(c) Suppose that he flips the same coin a third time and it shows tails. What is the probability that it is the fair coin?

How do we solve this?
For part (c), the answer very obviously should be $1$, since $HHT$ cannot be observed with the two-headed coin. But you can proceed systematically using $P(HHT|F) = 1/8$, $P(HHT|F^c) = 0$ and plug and chug in $$P(F|HHT)=\frac{P(HHT|F)P(F)}{P(HHT|F)P(F)+P(HHT|F^c)P(F^c)}.$$
– Dilip Sarwate, Sep 18 '12 at 21:47

@DilipSarwate Where can I find usage instructions for the notation you're using?
– user133466, Sep 19 '12 at 20:05
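The "plug and chug" in the comment above can be checked numerically. This is a minimal sketch (not part of the original thread) using Python's `fractions` module for exact arithmetic:

```python
from fractions import Fraction

# F = fair coin, F^c = two-headed coin, observed data = HHT.
prior_fair = Fraction(1, 2)
lik_fair = Fraction(1, 2) ** 3   # P(HHT | F) = (1/2)^3 = 1/8
lik_biased = Fraction(0)         # P(HHT | F^c) = 0: a two-headed coin never shows tails

# Bayes' formula exactly as written in the comment above.
posterior_fair = (lik_fair * prior_fair) / (
    lik_fair * prior_fair + lik_biased * (1 - prior_fair)
)
print(posterior_fair)  # 1
```

Because the likelihood under the two-headed coin is zero, the denominator collapses to the numerator and the posterior is exactly $1$.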
$begingroup$
(a)A gambler has a fair coin and a two-headed
coin in his pocket. He selects one of the coins
at random; when he flips it, it shows heads.
What is the probability that it is the fair coin?
(b) Suppose that he flips the same coin a second
time and, again, it shows heads. Now what is
the probability that it is the fair coin?
answer to (a) is 1/3 which you need for (b),
the answer to (b) is
I learned the basics of Bayes, but I don't understand what it means to have $O_1$ and $O_2$
Problem (c))
Suppose that he fluids the same coin a third time and it shows tails. What's the probability that it is the fair coin?
How do we solve this?
probability
$endgroup$
(a)A gambler has a fair coin and a two-headed
coin in his pocket. He selects one of the coins
at random; when he flips it, it shows heads.
What is the probability that it is the fair coin?
(b) Suppose that he flips the same coin a second
time and, again, it shows heads. Now what is
the probability that it is the fair coin?
answer to (a) is 1/3 which you need for (b),
the answer to (b) is
I learned the basics of Bayes, but I don't understand what it means to have $O_1$ and $O_2$
Problem (c))
Suppose that he fluids the same coin a third time and it shows tails. What's the probability that it is the fair coin?
How do we solve this?
probability
probability
edited Jan 7 at 7:28


Glorfindel
3,41981830
3,41981830
asked Sep 18 '12 at 20:10
user133466user133466
5511724
5511724
3 Answers
It appears that $O_1$ and $O_2$ are the events "the first outcome is heads" and "the second outcome is heads." $P(F|O_1)$, the probability that the coin is fair given that it comes up heads the first time, is $1/3$, and the probability that it comes up heads the second time, given that it is the fair coin and came up heads the first time, is $1/2$. You can check that the numbers in the denominator are also consistent with this interpretation of $O_1$ and $O_2$.
answered Sep 18 '12 at 20:22 by Brian M. Scott
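Under this reading of $O_1$ and $O_2$, the posterior can be updated one toss at a time. A sketch (assuming a hypothetical `update_on_heads` helper, not from the thread):

```python
from fractions import Fraction

def update_on_heads(p_fair):
    """One Bayes update on observing heads.
    Per-toss likelihoods: P(H | fair) = 1/2, P(H | two-headed) = 1."""
    num = Fraction(1, 2) * p_fair
    return num / (num + 1 * (1 - p_fair))

p = Fraction(1, 2)      # prior: the coin was chosen at random
p = update_on_heads(p)  # after O_1: 1/3, the answer to (a)
p = update_on_heads(p)  # after O_2: 1/5, the answer to (b)
print(p)  # 1/5
```

The intermediate value $1/3$ after the first update is exactly the $P(F|O_1)$ that part (b) reuses as its prior.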
So does the comma between $O_1$ and $O_2$ mean intersect?
– user133466, Sep 19 '12 at 2:05

@user133466: Yes, if you're thinking of events as sets in a probability space. In ordinary, everyday terms it's simply "and."
– Brian M. Scott, Sep 19 '12 at 2:20
We can simplify the problem to: given we got two heads in a row, what is the probability it is the fair coin?

So let $R$ be the event "two heads in a row" and $F$ the event "fair coin." We want $\Pr(F|R)$. We have
$$\Pr(F|R)=\frac{\Pr(F\cap R)}{\Pr(R)}.$$
The analysis is now standard. (i) The probability we chose the fair coin is $\dfrac{1}{2}$. Given we chose the fair coin, the probability of two heads in a row is $\dfrac{1}{4}$. So $\Pr(F\cap R)=\dfrac{1}{2}\cdot\dfrac{1}{4}=\dfrac{1}{8}$.

(ii) The probability that we chose the two-headed coin is $\dfrac{1}{2}$. Given that it is two-headed, the probability of $R$ is $1$.

Thus $\Pr(R)=\dfrac{1}{2}\cdot\dfrac{1}{4}+\dfrac{1}{2}\cdot 1=\dfrac{5}{8}$.

Now divide. The required conditional probability $\Pr(F|R)$ simplifies to $\dfrac{1}{5}$.
answered Sep 18 '12 at 20:36 by André Nicolas
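The steps (i) and (ii) above translate directly into a short exact computation; a sketch, with variable names chosen here for illustration:

```python
from fractions import Fraction

# R = "two heads in a row", F = "fair coin"
p_F = Fraction(1, 2)                # probability we chose the fair coin
p_R_given_F = Fraction(1, 4)        # (1/2)^2: two heads with the fair coin
p_R_given_twoheaded = Fraction(1)   # two-headed coin always shows heads

p_F_and_R = p_F * p_R_given_F                        # step (i): 1/8
p_R = p_F_and_R + (1 - p_F) * p_R_given_twoheaded    # total probability: 5/8
print(p_F_and_R / p_R)  # 1/5
```

The final division is the conditional-probability definition $\Pr(F|R)=\Pr(F\cap R)/\Pr(R)$ used in the answer.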
+1 I think the method given in the OP's book or notes is a very poor approach to the problem, and the notation used just adds to the confusion. I was writing up my solution (essentially the same as yours, except I used $HH$ instead of $R$) when yours was uploaded. Incidentally, this is my favorite example for showing that conditionally independent events are not necessarily independent events. "Heads on first toss" and "Heads on second toss" are conditionally independent given which coin is drawn, but are not unconditionally independent.
– Dilip Sarwate, Sep 18 '12 at 20:44

@DilipSarwate: You are right, $HH$ is a better choice than $R$. And subscripts almost guarantee confusion.
– André Nicolas, Sep 18 '12 at 20:53

Using this approach we can get away with using two events, but I think the reason they used three events is so we can practice with three events. There's actually a third part to the problem that I will add to the question when I get home. I think we might not be able to get away with two events...
– user133466, Sep 18 '12 at 20:54

Can you show an approach using $HH$? Thank you!
– user133466, Sep 18 '12 at 20:57

@user133466: You misunderstood the comment. I meant that the symbol $HH$ is a better one than $R$, because you can't forget what it stands for. So everywhere I wrote $R$, write $HH$. Nothing changes.
– André Nicolas, Sep 18 '12 at 21:06
I like doing it with tables. You first put your hypotheses into the columns. The column entries give the expected distribution of observations (I guess that $O$ stands for "observation" in your question), where each observation may take heads or tails values/events. The priors of $.5$ are given to every hypothesis (coin fairness).

Results are read from the third table, where rows show the distribution of hypotheses given an observation. You say that heads is observed, so read the first row. We can add more rows and see that "fair" under a two-heads observation is $1/5$, since that is the value in the "fair" column of the $HH$ row in table 3.

In your formula, however, instead of making two observations under the assumption of equal priors $.5/.5$, you make a single (second) observation with priors $1/3$ and $2/3$ that you got from the first experiment. Entering the values, you see that you get $.2$ again in the last table.

The algorithm first converts the column distribution of the first table into an intermediate joint distribution by multiplying every first-table entry by the column weight (Bayesians call it the column or hypothesis "prior"). This step corresponds to the numerator of your formula. In the second step, it makes the row distribution by dividing the row entries by the row totals, which corresponds to the denominator of your formula. We can go back to the joint distribution if we multiply the row entries by the row margins.

Effectively, the first table lets you focus on a desired hypothesis (what is the observation distribution given that the column hypothesis is true), the second table gives the joint distribution you should be aware of, and the third table lets you focus on a desired observation (a row) and consider the hypothesis probabilities given that observation $O_n$.
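The two table-building steps (multiply by the prior, then normalize each row) can be sketched as follows; the dictionary layout here is an illustrative choice, not the original tables:

```python
from fractions import Fraction

# Table 1: P(observation HH | hypothesis), with a prior of 1/2 per hypothesis.
priors = {"fair": Fraction(1, 2), "two-headed": Fraction(1, 2)}
likelihood = {"fair": Fraction(1, 4), "two-headed": Fraction(1)}

# Table 2: joint distribution = likelihood * prior (the numerator of the formula).
joint = {h: likelihood[h] * priors[h] for h in priors}

# Table 3: normalize the HH row by its total (the denominator of the formula).
row_total = sum(joint.values())
posterior = {h: joint[h] / row_total for h in joint}
print(posterior["fair"])  # 1/5
```

Reading the "fair" entry of the normalized $HH$ row gives $1/5$, matching the $.2$ the answer describes.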
$begingroup$
It appears that $O_1$ and $O_2$ are first outcome is heads and second outcome is heads. $P(F|O_1)$ is the probability that the coin is fair given that it comes up heads the first time is $1/3$, and the probability that it comes up heads the second time given that it is the fair coin and came up heads the first time is $1/2$. You can check that the numbers in the denominator are also consistent with this interpretation of $O_1$ and $O_2$.
$endgroup$
$begingroup$
so does the comma between $O_1$ and $O_1$ mean intersect?
$endgroup$
– user133466
Sep 19 '12 at 2:05
$begingroup$
@user133466: Yes, if you’re thinking of events as sets in a probability space. In ordinary, everyday terms it’s simply and.
$endgroup$
– Brian M. Scott
Sep 19 '12 at 2:20
add a comment |
$begingroup$
It appears that $O_1$ and $O_2$ are first outcome is heads and second outcome is heads. $P(F|O_1)$ is the probability that the coin is fair given that it comes up heads the first time is $1/3$, and the probability that it comes up heads the second time given that it is the fair coin and came up heads the first time is $1/2$. You can check that the numbers in the denominator are also consistent with this interpretation of $O_1$ and $O_2$.
$endgroup$
$begingroup$
so does the comma between $O_1$ and $O_1$ mean intersect?
$endgroup$
– user133466
Sep 19 '12 at 2:05
$begingroup$
@user133466: Yes, if you’re thinking of events as sets in a probability space. In ordinary, everyday terms it’s simply and.
$endgroup$
– Brian M. Scott
Sep 19 '12 at 2:20
add a comment |
$begingroup$
It appears that $O_1$ and $O_2$ are first outcome is heads and second outcome is heads. $P(F|O_1)$ is the probability that the coin is fair given that it comes up heads the first time is $1/3$, and the probability that it comes up heads the second time given that it is the fair coin and came up heads the first time is $1/2$. You can check that the numbers in the denominator are also consistent with this interpretation of $O_1$ and $O_2$.
$endgroup$
It appears that $O_1$ and $O_2$ are first outcome is heads and second outcome is heads. $P(F|O_1)$ is the probability that the coin is fair given that it comes up heads the first time is $1/3$, and the probability that it comes up heads the second time given that it is the fair coin and came up heads the first time is $1/2$. You can check that the numbers in the denominator are also consistent with this interpretation of $O_1$ and $O_2$.
answered Sep 18 '12 at 20:22


Brian M. ScottBrian M. Scott
456k38509909
456k38509909
$begingroup$
so does the comma between $O_1$ and $O_1$ mean intersect?
$endgroup$
– user133466
Sep 19 '12 at 2:05
$begingroup$
@user133466: Yes, if you’re thinking of events as sets in a probability space. In ordinary, everyday terms it’s simply and.
$endgroup$
– Brian M. Scott
Sep 19 '12 at 2:20
add a comment |
$begingroup$
so does the comma between $O_1$ and $O_1$ mean intersect?
$endgroup$
– user133466
Sep 19 '12 at 2:05
$begingroup$
@user133466: Yes, if you’re thinking of events as sets in a probability space. In ordinary, everyday terms it’s simply and.
$endgroup$
– Brian M. Scott
Sep 19 '12 at 2:20
$begingroup$
so does the comma between $O_1$ and $O_1$ mean intersect?
$endgroup$
– user133466
Sep 19 '12 at 2:05
$begingroup$
so does the comma between $O_1$ and $O_1$ mean intersect?
$endgroup$
– user133466
Sep 19 '12 at 2:05
$begingroup$
@user133466: Yes, if you’re thinking of events as sets in a probability space. In ordinary, everyday terms it’s simply and.
$endgroup$
– Brian M. Scott
Sep 19 '12 at 2:20
$begingroup$
@user133466: Yes, if you’re thinking of events as sets in a probability space. In ordinary, everyday terms it’s simply and.
$endgroup$
– Brian M. Scott
Sep 19 '12 at 2:20
add a comment |
$begingroup$
We can simplify the problem to: given we got two heads in a row, what is the probability it is the fair coin?
So let $R$ be the event "two heads in a row" and $F$ the event "fair coin." We want $Pr(F|R)$. We have
$$Pr(F|R)=frac{Pr(Fcap R)}{Pr(R)}.$$
The analysis is now standard. (i) The probability we chose the fair coin is $dfrac{1}{2}$. Given we chose the fair coin, the probability of two heads in a row is $dfrac{1}{4}$. So $Pr(Fcap R)=dfrac{1}{2}cdotdfrac{1}{4}=dfrac{1}{8}$.
(ii) The probability that we chose the two-headed coin is $dfrac{1}{2}$. Given that it is two-headed, the probability of $R$ is $1$.
Thus $Pr(R)=dfrac{1}{2}cdotdfrac{1}{4}+dfrac{1}{2}cdot 1=dfrac{5}{8}$.
Now divide. The required conditional probability $Pr(F|R)$ simplifies to $dfrac{1}{5}$.
$endgroup$
$begingroup$
+1 I think the method given in the OP's book or notes is a very poor approach to the problem, and the notation used just adds to the confusion. I was writing up my solution (essentially the same as yours except I used HH instead of R) when yours was uploaded. Incidentally, this is my favorite example for showing that conditionally independent events are not necessarily independent events. "Heads on first toss" and "Heads on second toss" are conditionally independent events given which coin is drawn but are not unconditionally independent.
$endgroup$
– Dilip Sarwate
Sep 18 '12 at 20:44
$begingroup$
@DilipSarwate: You are right, $HH$ is a better choice than $R$. And subscripts almost guarantee confusion.
$endgroup$
– André Nicolas
Sep 18 '12 at 20:53
$begingroup$
Using this approach we can get away with using 2 events, but I think the reason they used three events is so we can practice with 3 events. theres actually a third part to the problem I will add to the question when I get home. I think we might not be able to get away with 2 events...
$endgroup$
– user133466
Sep 18 '12 at 20:54
$begingroup$
Can you show an approach using hh? Thank you!!!
$endgroup$
– user133466
Sep 18 '12 at 20:57
$begingroup$
@user133466: You misunderstood the comment. I meant that the symbol $HH$ is a better one than $R$, because you can't forget what it stands for. So everywhere I wrote $R$, write $HH$. Nothing changes.
$endgroup$
– André Nicolas
Sep 18 '12 at 21:06
add a comment |
$begingroup$
We can simplify the problem to: given we got two heads in a row, what is the probability it is the fair coin?
So let $R$ be the event "two heads in a row" and $F$ the event "fair coin." We want $Pr(F|R)$. We have
$$Pr(F|R)=frac{Pr(Fcap R)}{Pr(R)}.$$
The analysis is now standard. (i) The probability we chose the fair coin is $dfrac{1}{2}$. Given we chose the fair coin, the probability of two heads in a row is $dfrac{1}{4}$. So $Pr(Fcap R)=dfrac{1}{2}cdotdfrac{1}{4}=dfrac{1}{8}$.
(ii) The probability that we chose the two-headed coin is $dfrac{1}{2}$. Given that it is two-headed, the probability of $R$ is $1$.
Thus $Pr(R)=dfrac{1}{2}cdotdfrac{1}{4}+dfrac{1}{2}cdot 1=dfrac{5}{8}$.
Now divide. The required conditional probability $Pr(F|R)$ simplifies to $dfrac{1}{5}$.
$endgroup$
$begingroup$
+1 I think the method given in the OP's book or notes is a very poor approach to the problem, and the notation used just adds to the confusion. I was writing up my solution (essentially the same as yours except I used HH instead of R) when yours was uploaded. Incidentally, this is my favorite example for showing that conditionally independent events are not necessarily independent events. "Heads on first toss" and "Heads on second toss" are conditionally independent events given which coin is drawn but are not unconditionally independent.
$endgroup$
– Dilip Sarwate
Sep 18 '12 at 20:44
$begingroup$
@DilipSarwate: You are right, $HH$ is a better choice than $R$. And subscripts almost guarantee confusion.
$endgroup$
– André Nicolas
Sep 18 '12 at 20:53
$begingroup$
Using this approach we can get away with using 2 events, but I think the reason they used three events is so we can practice with 3 events. theres actually a third part to the problem I will add to the question when I get home. I think we might not be able to get away with 2 events...
$endgroup$
– user133466
Sep 18 '12 at 20:54
$begingroup$
Can you show an approach using hh? Thank you!!!
$endgroup$
– user133466
Sep 18 '12 at 20:57
$begingroup$
@user133466: You misunderstood the comment. I meant that the symbol $HH$ is a better one than $R$, because you can't forget what it stands for. So everywhere I wrote $R$, write $HH$. Nothing changes.
$endgroup$
– André Nicolas
Sep 18 '12 at 21:06
add a comment |
$begingroup$
We can simplify the problem to: given we got two heads in a row, what is the probability it is the fair coin?
So let $R$ be the event "two heads in a row" and $F$ the event "fair coin." We want $Pr(F|R)$. We have
$$Pr(F|R)=frac{Pr(Fcap R)}{Pr(R)}.$$
The analysis is now standard. (i) The probability we chose the fair coin is $dfrac{1}{2}$. Given we chose the fair coin, the probability of two heads in a row is $dfrac{1}{4}$. So $Pr(Fcap R)=dfrac{1}{2}cdotdfrac{1}{4}=dfrac{1}{8}$.
(ii) The probability that we chose the two-headed coin is $dfrac{1}{2}$. Given that it is two-headed, the probability of $R$ is $1$.
Thus $Pr(R)=dfrac{1}{2}cdotdfrac{1}{4}+dfrac{1}{2}cdot 1=dfrac{5}{8}$.
Now divide. The required conditional probability $Pr(F|R)$ simplifies to $dfrac{1}{5}$.
$endgroup$
We can simplify the problem to: given we got two heads in a row, what is the probability it is the fair coin?
So let $R$ be the event "two heads in a row" and $F$ the event "fair coin." We want $Pr(F|R)$. We have
$$Pr(F|R)=frac{Pr(Fcap R)}{Pr(R)}.$$
The analysis is now standard. (i) The probability we chose the fair coin is $dfrac{1}{2}$. Given we chose the fair coin, the probability of two heads in a row is $dfrac{1}{4}$. So $Pr(Fcap R)=dfrac{1}{2}cdotdfrac{1}{4}=dfrac{1}{8}$.
(ii) The probability that we chose the two-headed coin is $dfrac{1}{2}$. Given that it is two-headed, the probability of $R$ is $1$.
Thus $Pr(R)=dfrac{1}{2}cdotdfrac{1}{4}+dfrac{1}{2}cdot 1=dfrac{5}{8}$.
Now divide. The required conditional probability $Pr(F|R)$ simplifies to $dfrac{1}{5}$.
answered Sep 18 '12 at 20:36
André NicolasAndré Nicolas
452k36424808
452k36424808
$begingroup$
+1 I think the method given in the OP's book or notes is a very poor approach to the problem, and the notation used just adds to the confusion. I was writing up my solution (essentially the same as yours except I used HH instead of R) when yours was uploaded. Incidentally, this is my favorite example for showing that conditionally independent events are not necessarily independent events. "Heads on first toss" and "Heads on second toss" are conditionally independent events given which coin is drawn but are not unconditionally independent.
$endgroup$
– Dilip Sarwate
Sep 18 '12 at 20:44
$begingroup$
@DilipSarwate: You are right, $HH$ is a better choice than $R$. And subscripts almost guarantee confusion.
$endgroup$
– André Nicolas
Sep 18 '12 at 20:53
$begingroup$
Using this approach we can get away with using 2 events, but I think the reason they used three events is so we can practice with 3 events. theres actually a third part to the problem I will add to the question when I get home. I think we might not be able to get away with 2 events...
$endgroup$
– user133466
Sep 18 '12 at 20:54
$begingroup$
Can you show an approach using hh? Thank you!!!
$endgroup$
– user133466
Sep 18 '12 at 20:57
$begingroup$
@user133466: You misunderstood the comment. I meant that the symbol $HH$ is a better one than $R$, because you can't forget what it stands for. So everywhere I wrote $R$, write $HH$. Nothing changes.
$endgroup$
– André Nicolas
Sep 18 '12 at 21:06
add a comment |
$begingroup$
+1 I think the method given in the OP's book or notes is a very poor approach to the problem, and the notation used just adds to the confusion. I was writing up my solution (essentially the same as yours except I used HH instead of R) when yours was uploaded. Incidentally, this is my favorite example for showing that conditionally independent events are not necessarily independent events. "Heads on first toss" and "Heads on second toss" are conditionally independent events given which coin is drawn but are not unconditionally independent.
$endgroup$
– Dilip Sarwate
Sep 18 '12 at 20:44
$begingroup$
@DilipSarwate: You are right, $HH$ is a better choice than $R$. And subscripts almost guarantee confusion.
$endgroup$
– André Nicolas
Sep 18 '12 at 20:53
$begingroup$
Using this approach we can get away with using 2 events, but I think the reason they used three events is so we can practice with 3 events. theres actually a third part to the problem I will add to the question when I get home. I think we might not be able to get away with 2 events...
$endgroup$
– user133466
Sep 18 '12 at 20:54
$begingroup$
Can you show an approach using hh? Thank you!!!
$endgroup$
– user133466
Sep 18 '12 at 20:57
$begingroup$
@user133466: You misunderstood the comment. I meant that the symbol $HH$ is a better one than $R$, because you can't forget what it stands for. So everywhere I wrote $R$, write $HH$. Nothing changes.
$endgroup$
– André Nicolas
Sep 18 '12 at 21:06
$begingroup$
+1 I think the method given in the OP's book or notes is a very poor approach to the problem, and the notation used just adds to the confusion. I was writing up my solution (essentially the same as yours except I used HH instead of R) when yours was uploaded. Incidentally, this is my favorite example for showing that conditionally independent events are not necessarily independent events. "Heads on first toss" and "Heads on second toss" are conditionally independent events given which coin is drawn but are not unconditionally independent.
$endgroup$
– Dilip Sarwate
Sep 18 '12 at 20:44
$begingroup$
+1 I think the method given in the OP's book or notes is a very poor approach to the problem, and the notation used just adds to the confusion. I was writing up my solution (essentially the same as yours except I used HH instead of R) when yours was uploaded. Incidentally, this is my favorite example for showing that conditionally independent events are not necessarily independent events. "Heads on first toss" and "Heads on second toss" are conditionally independent events given which coin is drawn but are not unconditionally independent.
$endgroup$
– Dilip Sarwate
Sep 18 '12 at 20:44
$begingroup$
@DilipSarwate: You are right, $HH$ is a better choice than $R$. And subscripts almost guarantee confusion.
$endgroup$
– André Nicolas
Sep 18 '12 at 20:53
$begingroup$
@DilipSarwate: You are right, $HH$ is a better choice than $R$. And subscripts almost guarantee confusion.
$endgroup$
– André Nicolas
Sep 18 '12 at 20:53
$begingroup$
Using this approach we can get away with using 2 events, but I think the reason they used three events is so we can practice with 3 events. theres actually a third part to the problem I will add to the question when I get home. I think we might not be able to get away with 2 events...
$endgroup$
– user133466
Sep 18 '12 at 20:54
$begingroup$
Using this approach we can get away with using 2 events, but I think the reason they used three events is so we can practice with 3 events. theres actually a third part to the problem I will add to the question when I get home. I think we might not be able to get away with 2 events...
$endgroup$
– user133466
Sep 18 '12 at 20:54
$begingroup$
Can you show an approach using hh? Thank you!!!
$endgroup$
– user133466
Sep 18 '12 at 20:57
$begingroup$
Can you show an approach using hh? Thank you!!!
$endgroup$
– user133466
Sep 18 '12 at 20:57
$begingroup$
@user133466: You misunderstood the comment. I meant that the symbol $HH$ is a better one than $R$, because you can't forget what it stands for. So everywhere I wrote $R$, write $HH$. Nothing changes.
$endgroup$
– André Nicolas
Sep 18 '12 at 21:06
$begingroup$
@user133466: You misunderstood the comment. I meant that the symbol $HH$ is a better one than $R$, because you can't forget what it stands for. So everywhere I wrote $R$, write $HH$. Nothing changes.
$endgroup$
– André Nicolas
Sep 18 '12 at 21:06
add a comment |
$begingroup$
I like doing it with tables, like here.
You first put your hypothesis into the columns
The column entries say the expected distribution of observations (I guess that O stands for "Observation" in your question), where it may take heads or tails values/events. The priors .5 are given to every hypothesis (coin fariness).
Result are read from third table
where rows show distribution of hypothesis given observation. You say that heads is observed -- read first row.
We can add more rows
and see that fair under two heads observation is 1/5 since it is the value in the fair in the hh
row in the table 3
In your formula, however, instead of making two observations under assumption of equal priors .5/.5, you make a single (second) observation with priors 1/3 and 2/3 that you have got from the first experiment. Enter the values
You see, you have got .2 again in the last table.
The algorithm first converts the column distribution of the first table into the intermediate, joint distirbution by multiplying every first table entry with the coumn weight (bayesian call it column or hypothesis "prior"). This step correpsonds to the nominator of your formula.
In the second step, it makes the row distribution by dividing the row entries with the row totals, which correponds to the denominator of your formula. We can go back to the joint distribution if multiply the row entries with the row margins.
Effectively, first table allows you to focus on desired hypothesis (what is observation distribution given column-hypothesis is true), second table gives joint distribution you should be aware of and third table allows you to focus on desired observation (a row) and consider what are the hypothesis probabilities, given that obswervation $O_n$.
$endgroup$
add a comment |
$begingroup$
I like doing it with tables, like here.
You first put your hypothesis into the columns
The column entries say the expected distribution of observations (I guess that O stands for "Observation" in your question), where it may take heads or tails values/events. The priors .5 are given to every hypothesis (coin fariness).
Result are read from third table
where rows show distribution of hypothesis given observation. You say that heads is observed -- read first row.
We can add more rows
and see that fair under two heads observation is 1/5 since it is the value in the fair in the hh
row in the table 3
In your formula, however, instead of making two observations under assumption of equal priors .5/.5, you make a single (second) observation with priors 1/3 and 2/3 that you have got from the first experiment. Enter the values
You see, you have got .2 again in the last table.
The algorithm first converts the column distribution of the first table into the intermediate, joint distirbution by multiplying every first table entry with the coumn weight (bayesian call it column or hypothesis "prior"). This step correpsonds to the nominator of your formula.
In the second step, it makes the row distribution by dividing the row entries with the row totals, which correponds to the denominator of your formula. We can go back to the joint distribution if multiply the row entries with the row margins.
Effectively, first table allows you to focus on desired hypothesis (what is observation distribution given column-hypothesis is true), second table gives joint distribution you should be aware of and third table allows you to focus on desired observation (a row) and consider what are the hypothesis probabilities, given that obswervation $O_n$.
$endgroup$
add a comment |
$begingroup$
I like doing it with tables, like here.
You first put your hypotheses into the columns.
The column entries give the expected distribution of observations (I assume O stands for "Observation" in your question), which take the values heads or tails. A prior of .5 is assigned to each hypothesis (which coin was picked).
The result is read from the third table, whose rows show the distribution of the hypotheses given the observation. You say that heads was observed, so read the first row.
We can add more rows and see that the probability of the fair coin given two observed heads is 1/5, since that is the value in the fair-coin column of the hh row of table 3.
In your formula, however, instead of making two observations under the assumption of equal priors .5/.5, you make a single (second) observation with the priors 1/3 and 2/3 obtained from the first experiment. Enter those values: $P(F \mid H_2) = \frac{\frac{1}{2} \cdot \frac{1}{3}}{\frac{1}{2} \cdot \frac{1}{3} + 1 \cdot \frac{2}{3}} = \frac{1/6}{5/6} = .2$, so you get .2 again in the last table.
The algorithm first converts the column distributions of the first table into an intermediate joint distribution by multiplying every entry of the first table by its column weight (Bayesians call this the column or hypothesis "prior"). This step corresponds to the numerator of your formula.
In the second step, it forms the row distributions by dividing the row entries by the row totals, which corresponds to the denominator of your formula. We can go back to the joint distribution by multiplying the row entries by the row margins.
Effectively, the first table lets you focus on a desired hypothesis (what is the distribution of observations given that the column hypothesis is true), the second table gives the joint distribution, and the third table lets you focus on a desired observation (a row) and read off the hypothesis probabilities given that observation $O_n$.
$endgroup$
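The two-step table algorithm described in the answer (multiply each likelihood by its column prior, then normalize each row by its total) can be sketched in Python. This is my own illustration, not code from the post; the hypothesis labels and function names are made up for the example:

```python
from fractions import Fraction

# Hypotheses and their priors: fair coin vs. two-headed coin,
# each chosen with probability 1/2.
priors = {"fair": Fraction(1, 2), "two_headed": Fraction(1, 2)}

def likelihood(hypothesis, observations):
    """Probability of the observation sequence under the given hypothesis."""
    p = Fraction(1)
    for o in observations:
        if hypothesis == "fair":
            p *= Fraction(1, 2)  # fair coin: H or T, each with probability 1/2
        else:
            p *= Fraction(1) if o == "H" else Fraction(0)  # two-headed: always H
    return p

def posterior(observations):
    # Step 1: joint = likelihood * prior (the numerator of Bayes' formula).
    joint = {h: likelihood(h, observations) * priors[h] for h in priors}
    # Step 2: divide by the row total (the denominator of Bayes' formula).
    total = sum(joint.values())
    return {h: joint[h] / total for h in joint}

print(posterior("H"))    # fair coin: 1/3, as in part (a)
print(posterior("HH"))   # fair coin: 1/5, as in part (b)
print(posterior("HHT"))  # fair coin: 1, since the two-headed coin cannot show tails
```

Updating sequentially (feeding the posterior from "H" back in as the prior for the next flip) gives the same numbers, which is the point the answer makes about the 1/3 and 2/3 priors.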
edited Oct 27 '16 at 13:17
answered Oct 27 '16 at 6:04
Little Alien