What is an estimator for the “number of trials” given observed successes and the success probability?


The binomial distribution with $n$ trials, $k$ successes and success probability $p$ is given by

$$P(k; n, p) = \binom{n}{k} p^k (1-p)^{n-k}, \quad k \in \{0, \dots, n\}$$

Suppose that we observe $k$ successes and know $p$, but we do not know $n$. Observe that now $k$ and $p$ are fixed whereas $n$ is stochastic. So if $k=6$ and $p=0.4$,

$$P(k=6;\, n, p=0.4) = \binom{n}{6} 0.4^6 \, (0.6)^{n-6}, \quad n \in \{6, \dots, \infty\}.$$

This, however, is not a valid probability function (remark by @Xiaomi), as it does not sum to one over its support. Is there a probability mass function for $n$? What is a useful (unbiased, consistent) estimator for its parameter $n$?
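
A minimal Python sketch (the cutoff of 2000 terms is an arbitrary stand-in for the infinite sum) confirms the failure to normalize:

```python
from math import comb

# Sum the proposed "PMF" over n; 2000 terms is an arbitrary truncation
# of the infinite sum (the terms decay geometrically).
p, k = 0.4, 6
total = sum(comb(n, k) * p**k * (1 - p)**(n - k) for n in range(k, 2000))
print(total)  # ~2.5 = 1/p, not 1, so this is not a valid PMF over n
```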










statistics binomial-distribution parameter-estimation






– tomka, asked Oct 4 '18 at 14:53 (edited Oct 4 '18 at 16:09)







  • I think you are misunderstanding the notation: '$|$' here doesn't mean anything conditional, it marks the variables that are given, so you can't use Bayes's theorem. It's also not really clear what you are asking. One could guess that you mean some sort of extension of the binomial distribution to all natural numbers, but that's speculation.
    – Jakobian
    Oct 4 '18 at 14:58












  • @Jakobian I completely revised the question to make it clearer what I am asking.
    – tomka
    Oct 4 '18 at 15:29


















3 Answers





































As noted in Xiaomi's answer, the probability distribution



$P(k=6;\, n, p=0.4) = \binom{n}{6} 0.4^6 \, (0.6)^{n-6}, \quad n \in \{6, \dots, \infty\}$



fails. The problem is that it assumes the six successes occur randomly among the $n$ occurrences, but this is not true. To achieve $n$ as an outcome, the sixth success must occur exactly on attempt $n$. Only the first five successes occur randomly, and they are restricted to the first $n-1$ attempts (but there is no need for the fifth success to occur exactly at attempt $n-1$). The correct probability distribution with these characteristics is



$$P(k=6;\, n, p=0.4) = \binom{n-1}{5} 0.4^5 \, (0.6)^{(n-1)-5} \, \color{blue}{(0.4)}, \quad n \in \{6, \dots, \infty\},$$



where the blue factor forces a success on trial $n$ and the rest of the expression accounts for the proper random occurrence of the other five successes. This simplifies to



$$P(k=6;\, n, p=0.4) = \binom{n-1}{5} 0.4^6 \, (0.6)^{n-6}, \quad n \in \{6, \dots, \infty\},$$



which now does normalize properly and should give consistent statistical estimates.
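
As a quick numerical check (a minimal Python sketch, truncating the infinite sum at 2000 terms, with the question's $k=6$ and $p=0.4$), this corrected distribution, which is the negative binomial distribution for the trial on which the sixth success occurs, sums to one and has mean $k/p = 15$:

```python
from math import comb

# Corrected distribution over n (the trial of the 6th success), with the
# infinite sum truncated at 2000 terms.
p, k = 0.4, 6
pmf = {n: comb(n - 1, k - 1) * p**k * (1 - p)**(n - k) for n in range(k, 2000)}
print(sum(pmf.values()))                   # ~1.0: normalizes properly
print(sum(n * w for n, w in pmf.items()))  # ~15.0 = k/p, the mean
```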






– Oscar Lanzi, answered Oct 4 '18 at 19:02 (edited Oct 4 '18 at 19:33)













  • Great! How would you arrive at an estimator though? It seems that the expectation of $n$ is the solution of an infinite sum starting at $k$, $E(n) = \sum_{n=k}^{\infty} \binom{n}{k} p^k (1-p)^{n-k}$. I am not even sure if this sum converges.
    – tomka
    Oct 4 '18 at 19:53












  • Just sum $k\,P(n=k)$. The series converges by the ratio test, with ratio $1-p = 0.6$ in this case. The mean will be what you expect, namely $r/p$ where you demand $r$ successes; try it. The variance is derived by summing $k^2 P(n=k)$ and subtracting the mean squared, which gives $r(1-p)/p^2$.
    – Oscar Lanzi
    Oct 4 '18 at 21:50










  • $begingroup$
    "To achieve $n$ as an outcome the sixth success must occur exactly on attempt $n$". Why ? what about the sequence 6th success at $n-1$ plus fail at $n$, etc. ?
    $endgroup$
    – G Cab
    Jan 31 at 1:34




















First of all, what you've stated is not the distribution function of $n$. It's the distribution function of $X$ given the parameters $n, p$. You cannot simply interchange $n$ and $k$. If it were the PMF of $n$, it would sum to $1$ over all values of $n$, and it clearly doesn't. To answer your question...



In the (very unrealistic) situation where we have a binomial random variable $X$, the number of successes out of $n$ trials, and we know $p$ in advance, we can estimate $n$ simply as



$$\hat{n} = \frac{X}{p}$$



The basic idea is that we observe $X$ successes, so to get back to $n$ we need to rescale by $1/p$. However, this entire thought process is a bit nonsensical, as a binomial random variable is characterised as the number of successes out of some fixed and known number of trials $n$.



An interesting question is whether this estimator is consistent. Clearly it is unbiased, since



$$E[X/p] = np/p = n$$



But for the variance, we have



$$\operatorname{Var}(\hat{n}) = \operatorname{Var}(X/p) = \operatorname{Var}(X)/p^2 = np(1-p)/p^2$$



So our estimator is clearly not consistent.
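
A small simulation sketch of the unbiasedness of $\hat{n} = X/p$ (the values $n = 50$, the 10,000 replications, and the seed are arbitrary illustration choices, not from the answer):

```python
import random

# Average the estimate n_hat = X / p over many Binomial(n, p) draws.
random.seed(0)                        # arbitrary seed for reproducibility
n_true, p, reps = 50, 0.4, 10_000     # n_true and reps are illustration values
est_sum = 0.0
for _ in range(reps):
    x = sum(random.random() < p for _ in range(n_true))  # one Binomial(n_true, p) draw
    est_sum += x / p
print(est_sum / reps)  # close to n_true = 50, as unbiasedness predicts
```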






– Xiaomi, answered Oct 4 '18 at 15:46













  • Thanks. In an earlier version of my question I asked what the distribution of $n$ is. Maybe that version was not so bad after all. Ultimately, though, I am interested in an estimator for $n$, and I believe a good estimator should be consistent.
    – tomka
    Oct 4 '18 at 16:06










  • I amended my question a bit following your answer.
    – tomka
    Oct 4 '18 at 16:09










  • I found a way to confirm your result for the variance. For the expected value, however, there is an additional term.
    – G Cab
    Jan 31 at 23:56




















The binomial distribution gives the probability of having $s$ successes in $n$ trials, given that the probability of success in each trial is $p$ and the outcomes of the trials are i.i.d. (Bernoulli trials).

The parameter $n$ is given, so with respect to it the distribution is a conditional probability, and we can write
$$
P(s \mid n) = \binom{n}{s} p^{s} q^{n-s} = \frac{P(s \wedge n)}{P(n)}
$$

We want to determine the complementary conditional probability
$$
P(n \mid s) = \frac{P(s \wedge n)}{P(s)}
$$
which is a perfectly legitimate question, provided that we know $P(n)$.

Assume that $n$ is uniformly distributed over the interval $[0, N]$.
Thus $P(n) = 1/(N+1)$, and we get
$$
P(s \wedge n) = \frac{\left[ 0 \le n \le N \right]}{N+1} \binom{n}{s} p^{s} q^{n-s}
$$
where $[\cdot]$ denotes the Iverson bracket.

Note that the sum of the bivariate distribution
$$
\sum_{n \ge 0} \sum_{0 \le s \le n} P(s \wedge n)
= \frac{1}{N+1} \sum_{n \ge 0} \left[ 0 \le n \le N \right] \sum_{0 \le s \le n} \binom{n}{s} p^{s} q^{n-s}
= \frac{1}{N+1} \sum_{n \ge 0} \left[ 0 \le n \le N \right] = 1
$$
correctly checks to be $1$.

Then the marginal distribution in $s$ is
$$
P(s) = \sum_{n \ge 0} P(s \wedge n)
= \frac{p^{s} q^{-s}}{N+1} \sum_{0 \le n \le N} \binom{n}{s} q^{n}
$$
and we arrive at
$$
P(n \mid s) = \frac{P(s \wedge n)}{P(s)}
= \left[ 0 \le n \le N \right] \frac{\binom{n}{s} q^{n}}{\sum_{0 \le m \le N} \binom{m}{s} q^{m}}
$$

In the limit $N \to \infty$ the expression above converges to
$$
P(n \mid s) = \binom{n}{s} \, q^{n-s} p^{s+1}
$$

The expected value and the variance of $n$ turn out to be
$$
E(n \mid s) = \sum_{n \ge 0} n \binom{n}{s} q^{n-s} p^{s+1} = \frac{1-p}{p} + \frac{s}{p}
$$
$$
\sigma^{2} = \sum_{n \ge 0} \left( n - \frac{1-p+s}{p} \right)^{2} \binom{n}{s} q^{n-s} p^{s+1}
= \frac{(1-p)(s+1)}{p^{2}}
$$
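
A quick numerical check of the limiting posterior and its moments (a minimal Python sketch using the question's $p = 0.4$ with $s = 6$, truncating the sums at 3000 terms):

```python
from math import comb

# Posterior from the limit N -> infinity: P(n | s) = C(n, s) q^(n-s) p^(s+1).
p, s = 0.4, 6
q = 1 - p
post = {n: comb(n, s) * q**(n - s) * p**(s + 1) for n in range(s, 3000)}
mean = sum(n * w for n, w in post.items())
var = sum((n - mean)**2 * w for n, w in post.items())
print(sum(post.values()))          # ~1.0: a valid PMF
print(mean, (1 - p)/p + s/p)       # both ~16.5
print(var, (1 - p)*(s + 1)/p**2)   # both ~26.25
```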






– G Cab, answered Oct 8 '18 at 22:28 (edited Jan 31 at 23:52)













  • @tomka Since the problem is interesting, I recast my answer to render it more rigorous.
    – G Cab
    Jan 31 at 23:53











