Calculating $\operatorname{Var}(\hat\beta)$ in a least-squares regression model
























The linear model that I'm working with is
$$y_t = \alpha + \beta x_t + \varepsilon_t.$$



Based on my lecture notes, I have:



$$\operatorname{Var}(\hat\beta) = \operatorname{Var}\left(\sum_t w_t\varepsilon_t\right)$$
where $\varepsilon_t$ is the error term and
$$w_t = \frac{x_t-\overline x}{\sum_t(x_t-\overline x)^2}.$$



That being said, we then have:



$$\begin{align}\operatorname{Var}(\hat\beta) &= \sum\operatorname{Var}(w_t\varepsilon_t)+\sum\sum\operatorname{Cov}(w_s,\varepsilon_t)\\
&= E[w_t\varepsilon_t - E(w_t\varepsilon_t)]^2\\
&= E[w_t\varepsilon_t]^2\\
&= E[w_t^2\varepsilon_t^2]\\
&= \sum w_t^2\operatorname{Var}(\varepsilon_t)\\
&= \sigma^2\sum w_t^2\end{align}$$





What bothers me the most here is how come
$$\operatorname{Var}\left(\sum w_t\varepsilon_t\right) = \sum\operatorname{Var}(w_t\varepsilon_t)+\sum\sum\operatorname{Cov}(w_s,\varepsilon_t),$$
and how we get from
$$E[w_t^2\varepsilon_t^2]$$
to
$$\sum w_t^2\operatorname{Var}(\varepsilon_t).$$
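For context, a small simulation can at least make the final identity plausible: with the $x_t$ held fixed, $\hat\beta = \sum_t w_t y_t$, so its sampling variance can be compared against $\sigma^2\sum_t w_t^2$. This is only an illustrative sketch; all numeric values below (the $x_t$, $\alpha$, $\beta$, $\sigma$) are made up and not from the lecture.

```python
import random
import statistics

# Illustrative sketch: since beta_hat = beta + sum_t w_t * eps_t with fixed x_t,
# the Monte Carlo variance of beta_hat should match sigma^2 * sum_t w_t^2.
random.seed(0)
x = [1.0, 2.0, 3.0, 4.0, 5.0]          # predetermined regressor values (made up)
alpha, beta, sigma = 2.0, 0.5, 1.0     # made-up true parameters

xbar = sum(x) / len(x)
sxx = sum((xi - xbar) ** 2 for xi in x)
w = [(xi - xbar) / sxx for xi in x]    # the weights w_t from the question

slopes = []
for _ in range(100_000):
    y = [alpha + beta * xi + random.gauss(0.0, sigma) for xi in x]
    slopes.append(sum(wi * yi for wi, yi in zip(w, y)))  # OLS slope via weights

empirical = statistics.pvariance(slopes)
theoretical = sigma ** 2 * sum(wi ** 2 for wi in w)  # sigma^2 * sum w_t^2
# empirical and theoretical should agree to a few decimal places
```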



Many thanks in advance!






















  • Which is exactly the linear model you have? Just one exogenous variable? Are you assuming that the values of $x$ are predetermined (that is, non-random) or not?
    – Alejandro Nasif Salum, Jan 23 at 22:39












  • Oops, very sorry; I just added the linear equation. Thanks for your quick reply!
    – Fozoro, Jan 23 at 22:43










  • The term $w$ should not have a summation over $x - \bar x$, should it?
    – Mark Viola, Jan 23 at 22:45












  • @MarkViola Yes, you're right; I put it there by accident (just edited my question).
    – Fozoro, Jan 23 at 22:48










  • Note that there is a $w$ weight for each observation, so I added a $t$ subindex to your definition of those weights.
    – Alejandro Nasif Salum, Jan 23 at 23:01
















regression least-squares linear-regression regression-analysis






edited Jan 23 at 22:58 by Alejandro Nasif Salum

asked Jan 23 at 22:25 by Fozoro












1 Answer
If you're not assuming that the $x_t$ values are random, and if you assume no serial correlation and homoskedasticity (that is,
$$\operatorname{var}(e_t)=\sigma^2$$
and
$$\operatorname{cov}(e_t,e_s)=0,\quad t\neq s$$
for any $t$ and $s$), then
$$\operatorname{var}(\hat\beta)=\operatorname{var}\left(\sum w_t e_t\right)=\sum w_t^2 \operatorname{var}(e_t)=\sigma^2\sum w_t^2,$$
and you can check that this amounts to
$$\operatorname{var}(\hat\beta)=\frac{\sigma^2}{\sum(x_t-\bar x)^2}.$$
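That last simplification rests on the algebraic identity $\sum w_t^2 = 1/\sum(x_t-\bar x)^2$, which follows by plugging the definition of $w_t$ into the sum. A quick numeric check, using arbitrary illustrative $x$ values:

```python
# Check that sum(w_t^2) collapses to 1 / sum((x_t - xbar)^2).
# The x values here are arbitrary illustrative numbers.
x = [0.3, 1.7, 2.2, 4.1, 5.6]
xbar = sum(x) / len(x)
sxx = sum((xi - xbar) ** 2 for xi in x)      # sum of squared deviations
w = [(xi - xbar) / sxx for xi in x]          # the OLS slope weights

sum_w_sq = sum(wi ** 2 for wi in w)
# sum_w_sq equals 1 / sxx up to floating-point error
```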





The formula with the covariance that you give is just wrong. It is true, though, that
$$\operatorname{var}\left(\sum w_t e_t\right)=\sum \operatorname{var}(w_t e_t)+\sum_{t\neq s} \operatorname{cov}(w_te_t,w_s e_s)$$
$$=\sum w_t^2\operatorname{var}(e_t)+\sum_{t\neq s} w_t w_s\operatorname{cov}(e_t,e_s),$$
but our assumptions imply that the second term is zero anyway.



















  • Thank you very much for your answer. That being said, I don't quite understand how we went from $E[w_t^2\varepsilon_t^2]$ to $\sum w_t^2\operatorname{Var}(\varepsilon_t)$; did we use some kind of formula?
    – Fozoro, Jan 23 at 23:06










  • Also, how come we can take $w$ out of the variance? Isn't $w$ a variable?
    – Fozoro, Jan 23 at 23:14










  • If $a$ is not random, then $$\operatorname{var}(aX)=a^2\operatorname{var}(X),$$ and if $\operatorname{cov}(X,Y)=0$ (in particular, if $X$ and $Y$ are independent), then $$\operatorname{var}(X+Y)=\operatorname{var}(X)+\operatorname{var}(Y).$$
    – Alejandro Nasif Salum, Jan 23 at 23:17
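The two variance rules in the comment above are easy to check empirically. A throwaway Monte Carlo sketch (the distributions and the constant $a$ are chosen arbitrarily):

```python
import random
import statistics

# Empirical check of var(aX) = a^2 var(X) and, for cov(X, Y) = 0,
# var(X + Y) = var(X) + var(Y). All choices below are arbitrary.
random.seed(1)
n = 200_000
a = 3.0
xs = [random.gauss(0.0, 1.0) for _ in range(n)]   # Var(X) = 1
ys = [random.gauss(0.0, 2.0) for _ in range(n)]   # Var(Y) = 4, drawn independently

var_x = statistics.pvariance(xs)
var_y = statistics.pvariance(ys)
var_ax = statistics.pvariance([a * xi for xi in xs])
var_sum = statistics.pvariance([xi + yi for xi, yi in zip(xs, ys)])

# var_ax matches a^2 * var_x (exactly, up to floating point), and
# var_sum is close to var_x + var_y because the empirical covariance is tiny
```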






  • I'm making the classical assumption that the $x_t$ values are not random but predetermined before measuring $Y_t$ (check if this is the background with which you're supposed to be working). In that case, since the $w_t$ variables only depend on the $x_t$ values, they're not random either and can be thought of as constants (in the probabilistic sense).
    – Alejandro Nasif Salum, Jan 23 at 23:19








  • Great, I get it now; thank you very much for your help!
    – Fozoro, Jan 23 at 23:22











edited Jan 23 at 23:08

answered Jan 23 at 22:55 by Alejandro Nasif Salum











