Finding the MLE of the expected value from two samples of normals.
I have $X_1, \ldots, X_{n_1} \sim \operatorname{Normal}(\mu, \sigma^2_1)$ and $Y_1, \ldots, Y_{n_2} \sim \operatorname{Normal}(\mu, \sigma^2_2)$, with the $\sigma^2_i$ known. All random variables are independent. I want to find the maximum likelihood estimate of $\mu$.
Now, if I had just the $X_i$, I would have said that $\bar{\mu} = \frac{\sum X_i}{n_1}$. However, here I also have that $\bar{\mu} = \frac{\sum Y_i}{n_2}$. How can I combine these two results to get a better estimate of $\mu$?
I tried just writing $\bar{\mu} = \frac{1}{2} \big( \frac{\sum X_i}{n_1} + \frac{\sum Y_i}{n_2}\big)$, but I don't think this leads to something correct.
Do you have any hints?
statistics maximum-likelihood
asked Jan 19 at 9:23 by qcc101 (edited Jan 19 at 9:33)
1 Answer
Well, intuition will get you closer. If $n_1$ is much smaller than $n_2$, and $\sigma_1^2$ is greater than $\sigma_2^2$, then the variation arising from the sample coming from $X$ is going to be substantially greater than the variation in the sample from $Y$, and your MLE would need to reflect this. Presently, your choice does not, as it gives equal weight to the sample means from each distribution regardless of how many observations are in each group.
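To see this numerically, here is a minimal simulation sketch (assuming `numpy`; the sample sizes and variances below are illustrative choices, not from the question). With $n_1$ small and $\sigma_1^2$ large, the equal-weight average is even noisier than using $\bar y$ alone:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, s1, s2 = 5.0, 3.0, 1.0   # true mean, sigma_1, sigma_2 (hypothetical)
n1, n2 = 10, 1000            # n_1 much smaller than n_2

naive, ybar_only = [], []
for _ in range(5000):
    x = rng.normal(mu, s1, n1)
    y = rng.normal(mu, s2, n2)
    naive.append(0.5 * (x.mean() + y.mean()))  # the OP's equal-weight average
    ybar_only.append(y.mean())                 # just the Y sample mean

print("var of equal-weight average:", np.var(naive))      # ~ (s1^2/n1 + s2^2/n2)/4
print("var of ybar alone:          ", np.var(ybar_only))  # ~ s2^2/n2, much smaller
```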
So, let's reason more formally by actually constructing the likelihood function and maximizing it. Note that the joint likelihood of the combined sample $\boldsymbol x = (x_1, \ldots, x_{n_1})$ and $\boldsymbol y = (y_1, \ldots, y_{n_2})$ is simply $$\mathcal L(\mu \mid \boldsymbol x, \boldsymbol y, \sigma_1, \sigma_2) \propto f_{\boldsymbol X, \boldsymbol Y}(\boldsymbol x, \boldsymbol y \mid \mu, \sigma_1, \sigma_2) = \prod_{i=1}^{n_1} \frac{e^{-(x_i - \mu)^2/(2\sigma_1^2)}}{\sqrt{2\pi}\,\sigma_1} \prod_{j=1}^{n_2} \frac{e^{-(y_j - \mu)^2/(2\sigma_2^2)}}{\sqrt{2\pi}\,\sigma_2}.$$ That is to say, the likelihood is proportional to the joint density of the sample. We can ignore any factors in $\mathcal L$ not dependent on $\mu$, as these are fixed with respect to $\mu$: $$\mathcal L(\mu \mid \boldsymbol x, \boldsymbol y, \sigma_1, \sigma_2) \propto \exp\left(-\frac{1}{2\sigma_1^2} \sum_{i=1}^{n_1} (x_i - \mu)^2 \right) \exp\left(-\frac{1}{2\sigma_2^2} \sum_{j=1}^{n_2} (y_j - \mu)^2 \right).$$ But we can write this likelihood in terms of $\bar x$ and $\bar y$ rather than the sample itself, if we partition the sum of squares like so: $$\begin{align*} \sum_{i=1}^{n_1} (x_i - \mu)^2 &= \sum_{i=1}^{n_1} (x_i - \bar x + \bar x - \mu)^2 \\ &= \sum_{i=1}^{n_1} \left((x_i - \bar x)^2 + 2(x_i - \bar x)(\bar x - \mu) + (\bar x - \mu)^2 \right) \\ &= n_1 (\bar x - \mu)^2 + \sum_{i=1}^{n_1} (x_i - \bar x)^2 + 2(\bar x - \mu)\sum_{i=1}^{n_1} (x_i - \bar x) \\ &= n_1 (\bar x - \mu)^2 + \sum_{i=1}^{n_1} (x_i - \bar x)^2. \end{align*}$$ The last equality is true because the last sum in the previous expression is zero (why?). The beauty of this partitioning is that now $$\exp\left(-\frac{1}{2\sigma_1^2} \sum_{i=1}^{n_1} (x_i - \mu)^2\right) = \exp\left(-\frac{n_1(\bar x - \mu)^2}{2\sigma_1^2} \right)\exp\left(-\frac{1}{2\sigma_1^2} \sum_{i=1}^{n_1} (x_i - \bar x)^2\right),$$ and the second $\exp$ factor, having no $\mu$, is constant with respect to $\mu$ and can be eliminated. Handling the sample from $Y$ similarly, we get a greatly simplified likelihood: $$\mathcal L \propto \exp\left( - \frac{n_1(\bar x - \mu)^2}{2\sigma_1^2} - \frac{n_2(\bar y - \mu)^2}{2\sigma_2^2}\right).$$ The log-likelihood is then $$\ell(\mu \mid \boldsymbol x, \boldsymbol y, \sigma_1, \sigma_2) = -\frac{n_1(\bar x - \mu)^2}{2\sigma_1^2} - \frac{n_2(\bar y - \mu)^2}{2\sigma_2^2}.$$ This, being a quadratic function of $\mu$, is easily maximized with respect to $\mu$ using the basic techniques of calculus. I leave the remainder of this computation to you as an exercise.
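If you want to check your calculus afterwards, here is a minimal numerical sketch (assuming `numpy` and `scipy`, with data simulated under hypothetical parameter values). It maximizes $\ell(\mu)$ directly and compares the result to the precision-weighted mean of $\bar x$ and $\bar y$, which is what the exercise yields; skip it if you'd rather derive the closed form unspoiled.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
mu, s1, s2, n1, n2 = 5.0, 3.0, 1.0, 10, 1000  # hypothetical values
x = rng.normal(mu, s1, n1)
y = rng.normal(mu, s2, n2)
xbar, ybar = x.mean(), y.mean()

def neg_loglik(m):
    # negative of ell(mu) above; constants in mu already dropped
    return n1 * (xbar - m)**2 / (2 * s1**2) + n2 * (ybar - m)**2 / (2 * s2**2)

numeric = minimize_scalar(neg_loglik).x

# precision-weighted mean with weights w_i = n_i / sigma_i^2:
# the closed form the calculus exercise leads to
w1, w2 = n1 / s1**2, n2 / s2**2
closed_form = (w1 * xbar + w2 * ybar) / (w1 + w2)
print(numeric, closed_form)  # the two agree to optimizer tolerance
```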
One final remark: it is important to note that the likelihood we computed above presumes the variances are known, because we dropped a number of factors from the likelihood that are functions of $\sigma_1$ and $\sigma_2$. If one were to try to obtain a joint maximum likelihood estimator for $\mu$, $\sigma_1$, and $\sigma_2$, those factors could not be dropped, since the goal would then be to maximize $\mathcal L$ with respect to all three parameters simultaneously. That is a much more complicated computation, and although I have not tried it, I strongly suspect a closed-form solution is not possible in that case.
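For what it's worth, the joint problem is still tractable numerically. A sketch (again assuming `numpy` and `scipy`, with simulated data): for fixed $\mu$ the variance MLEs are just the mean squared deviations about $\mu$, so one can profile them out and optimize over $\mu$ alone; the resulting equation has no obvious closed-form root, consistent with the suspicion above.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def profile_neg_loglik(m, x, y):
    # for fixed mu, the MLEs of the variances are the mean squared
    # deviations about mu; substituting them back leaves (up to constants)
    # (n1/2) log s1^2(mu) + (n2/2) log s2^2(mu) to minimize over mu
    s1sq = np.mean((x - m) ** 2)
    s2sq = np.mean((y - m) ** 2)
    return 0.5 * (len(x) * np.log(s1sq) + len(y) * np.log(s2sq))

rng = np.random.default_rng(2)
x = rng.normal(5.0, 3.0, 10)    # hypothetical data
y = rng.normal(5.0, 1.0, 1000)

# the profile maximizer must lie within the range of the pooled data
lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
mu_hat = minimize_scalar(lambda m: profile_neg_loglik(m, x, y),
                         bounds=(lo, hi), method="bounded").x
print(mu_hat, np.mean((x - mu_hat) ** 2), np.mean((y - mu_hat) ** 2))
```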
answered Jan 19 at 9:47 by heropup (edited Jan 19 at 9:53)
Wow, very clear. Thank you.
– qcc101, Jan 19 at 9:53