Where did the mean estimator come from?
What was the motivation behind the definition of the mean estimator
$$\hat{\mu}=\frac{1}{N}\sum_{i=1}^N X_{i}?$$
Did we come up with this very form through trial and error? I know that it is unbiased, but aren't there other estimators that are also unbiased, so why did we favor this particular one?

Tags: probability, statistics, math-history
– Jean Marie (Jan 28 at 19:45): If there is a natural estimator, it is this one.
– Hilbert (Jan 28 at 19:51): How do you quantify "naturalness"? And why would you even consider it as a criterion for selecting this estimator?
– Sangchul Lee (Jan 28 at 20:21): This is the mean of the empirical distribution given the samples $\{X_1,\dots,X_n\}$.
– Hilbert (Jan 28 at 20:24): Yes, but it's also an estimator of the population's mean $\mu$.
– Sangchul Lee (Jan 28 at 21:01): I was trying to say that, given only $n$ observed values, the empirical distribution is one natural choice, and then the sample mean is simply the mean of this distribution. The idea of assigning probability $\frac{1}{n}$ to each value is partially justified by the fact that such a distribution maximizes entropy if all $n$ values are distinct.
asked Jan 28 at 19:07 by Hilbert
1 Answer
First, it works better to reserve $N$ for the size of a finite population, and to use $n$ for the size of a sample. So if we take a sample $X_1, X_2, \dots, X_n$ of size $n$ from a population that has mean $\mu,$ then we use the sample mean
$$\bar X = \hat\mu = \frac 1 n \sum_{i=1}^n X_i$$
as an estimate of $\mu.$
Intuitively, as @Ian comments, it seems reasonable to try using the mean of a random sample to estimate the mean of a population. More formally, this is called the "Method of Moments." The idea is to use $\frac 1 n \sum_{i=1}^n X_i^k$ as an estimate of $\frac 1 N \sum_{i=1}^N X_i^k$ for a finite population. (A similar expression with an integral is used for some infinite populations.) Using the sample mean $\bar X$ to estimate the population mean $\mu$ is simply the case $k = 1$ in the Method of Moments.
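As a sketch of how moment matching works in practice (the Uniform$(0,\theta)$ example, the seed, and the sample size below are illustrative choices, not taken from the answer): match the first sample moment $\bar X$ to the first population moment, here $E[X] = \theta/2$, and solve for the parameter.

```python
import random

random.seed(1)

theta = 6.0          # true parameter of a Uniform(0, theta) population (unknown in practice)
n = 100_000
x = [random.uniform(0, theta) for _ in range(n)]

# First sample moment (k = 1): the sample mean.
xbar = sum(x) / n

# Method of Moments: set xbar equal to E[X] = theta / 2 and solve for theta.
theta_hat = 2 * xbar

print(theta_hat)     # close to 6.0 for a sample this large
```

For $k = 1$ with the mean itself as the target, the "matching" step is trivial and the estimator is just $\bar X$; the uniform example shows the same recipe producing a less obvious estimator.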
One good property of an estimator is unbiasedness, and one can show that $\bar X$ is an unbiased estimator of $\mu.$ In symbols: $E(\bar X) = \mu.$
If one is sampling from a normal population, then it turns out that $\operatorname{Var}(\bar X) = \sigma^2/n,$ where $\sigma^2$ is the population variance, and no other unbiased estimator has a smaller variance. So, taken as an estimate of $\mu,$ the sample mean has two nice properties: (1) it is unbiased (neither systematically too large nor too small; aimed at the right target), and (2) it has minimal variability (its aim at the target is optimal).
For normal data, you might try using the sample median or the sample midrange to estimate $\mu.$ (The 'midrange' is halfway between the maximum and minimum values.) Both of these alternative estimators are also unbiased, but both are more variable than the sample mean $\bar X.$
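A small simulation can illustrate this comparison (a sketch; the population parameters, sample size, replication count, and seed below are arbitrary choices, not from the answer): repeatedly draw normal samples, compute all three estimators on each, and compare their empirical variances.

```python
import random
import statistics

random.seed(42)
mu, sigma = 10.0, 2.0    # population mean and standard deviation
n, reps = 20, 5000       # sample size and number of simulated samples

means, medians, midranges = [], [], []
for _ in range(reps):
    x = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(sum(x) / n)
    medians.append(statistics.median(x))
    midranges.append((max(x) + min(x)) / 2)

# All three estimators center near mu, consistent with unbiasedness,
# but the sample mean shows the smallest spread of the three.
for name, est in [("mean", means), ("median", medians), ("midrange", midranges)]:
    print(name, statistics.mean(est), statistics.variance(est))
```

With these settings the empirical variance of the sample mean sits near the theoretical $\sigma^2/n = 0.2$, while the median and midrange come out visibly larger.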
Note: You have asked a good question, and there is more to the complete answer than I can discuss here. There are principles of estimation other than the Method of Moments, and there are criteria other than unbiasedness and minimal variance that are used in the search for 'good' estimators.
answered Jan 28 at 22:20, edited Jan 28 at 22:33, by BruceET