Law of Large Numbers contradicts Central Limit Theorem?














My text defines the weak law of large numbers:




If $X_1,\ldots,X_n$ are IID, then $\overline{X} \overset{P}{\to} \mu$.




And the CLT as:




Let $X_1,\ldots,X_n$ be IID with mean $\mu$ and variance $\sigma^2$.
Then:



$$Z_n = \frac{\sqrt{n}(\overline{X}_n - \mu)}{\sigma} \rightsquigarrow Z$$



where $Z \sim N(0,1)$.




The weak law of large numbers says that the sample mean converges in probability to a constant (the population mean). Convergence in probability implies convergence in distribution, so it is also saying that the sample mean converges in distribution to that same constant.



In contrast, the central limit theorem appears to say that the sample mean converges to a standard normal distribution, not a constant. I recognize that, strictly speaking, the two expressions are not the same -- but I wouldn't expect subtracting the population mean (a constant) or dividing by the standard deviation (a constant) to change the expression in such a way that it no longer converges to a constant.



The only other difference is the multiplication by $\sqrt{n}$. If this is what makes the difference (which seems plausible, because it changes the sample mean from the sum divided by $n$ to the sum divided by $\sqrt{n}$), then it seems like we went out of our way to know less about the convergence -- if the point of the CLT is to be able to make probability statements about the sample mean, that seems backwards (we were better off just using the WLLN, where we knew the limiting number with certainty). What am I missing here?
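Both statements can be checked side by side in a quick simulation. The sketch below is my own illustration (not from the original question); the Exponential(1) distribution is an arbitrary choice for which $\mu = \sigma = 1$. The spread of $\overline{X}_n$ shrinks like $\sigma/\sqrt{n}$, while the spread of $Z_n$ stays near 1:

```python
import numpy as np

# Empirical check. Assumption (for illustration only):
# X_i ~ Exponential(1), so mu = 1 and sigma = 1.
rng = np.random.default_rng(0)
mu, sigma = 1.0, 1.0

for n in (10, 100, 10_000):
    # 1,000 independent sample means, each from a sample of size n
    xbar = rng.exponential(scale=1.0, size=(1_000, n)).mean(axis=1)
    zn = np.sqrt(n) * (xbar - mu) / sigma
    # WLLN: sd(xbar) shrinks like sigma/sqrt(n); CLT: sd(Z_n) stays near 1
    print(f"n={n:6d}  sd(xbar)={xbar.std():.4f}  sd(Z_n)={zn.std():.4f}")
```

So the two theorems describe the same deviations at two different magnifications: unscaled, they vanish; scaled by $\sqrt{n}$, they settle into a normal shape.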










  • The CLT is a refinement of the LLN: the LLN says that the sample mean converges to the population mean, while the CLT gives a more precise asymptotic result. That is, $\bar X_n \to \mu$, and the difference is actually of size $\frac{1}{\sqrt{n}}$. After multiplying the difference by the sharp scale factor $\sqrt{n}$ you obtain a limit profile, namely a standard normal one.
    – GReyes, Jan 27 at 20:51
  • The square root of $n$ is the key: it takes the deviations of the sample mean from the population mean and "stretches" them to be of order 1 (neither going to zero nor blowing up) even as $n$ goes to infinity. The advantage is that now you can make good quantitative estimates of moderate deviations between the sample mean and the population mean (moderate meaning of order $n^{-1/2}$).
    – Ian, Jan 27 at 20:52
  • Look at the relationship between the CLT, LLN, and LIL here.
    – d.k.o., Jan 27 at 22:52
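The "quantitative estimates" mentioned in the comments can be made concrete: by the CLT, $P\big(|\overline{X}_n - \mu| > c\sigma/\sqrt{n}\big) \approx 2(1 - \Phi(c))$, a statement the WLLN alone cannot give you. A minimal sketch (my own, not from the thread; Uniform(0,1), with $\mu = 1/2$ and $\sigma = 1/\sqrt{12}$, is an arbitrary choice):

```python
import math
import random

random.seed(0)
n, c, trials = 1_000, 1.96, 5_000
mu, sigma = 0.5, 1 / math.sqrt(12)  # mean and sd of Uniform(0, 1)

# Empirical frequency of a deviation larger than c * sigma / sqrt(n)
hits = sum(
    abs(sum(random.random() for _ in range(n)) / n - mu) > c * sigma / math.sqrt(n)
    for _ in range(trials)
)

# CLT prediction: 2 * (1 - Phi(c)), with Phi the standard normal CDF
phi_c = 0.5 * (1 + math.erf(c / math.sqrt(2)))
print(f"empirical: {hits / trials:.3f}   CLT prediction: {2 * (1 - phi_c):.3f}")
```

With $c = 1.96$ the predicted tail probability is about $0.05$, and the empirical frequency lands close to it, which is exactly the kind of probability statement about $\overline{X}_n$ that the question asks about.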
















probability central-limit-theorem law-of-large-numbers






asked Jan 27 at 20:40 by Joseph Garvin, edited Jan 27 at 23:37







