Proof of concavity via matrix differentiation
I have found a maximum likelihood estimator which is given by:
$${{\operatorname{argmax}}\atop{\vec{\mu} = [\mu_1 \dots \mu_n] \atop \mu_i \geq 0}} \; -\sum_{j=1}^m \vec{p}_j^T \vec{\mu} + \sum_{j=1}^m y_j \ln(\vec{p}_j^T \vec{\mu})$$
where $\vec{p}_j$ and $y_j$ are constants for all $j$.
Now I am trying to show that the MLE obtained by maximizing this expression is a global maximum, i.e. that the objective is concave. My understanding is that I can do this by obtaining the Hessian of:
$$g(\vec{\mu}) = -\sum_{j=1}^m \vec{p}_j^T \vec{\mu} + \sum_{j=1}^m y_j \ln(\vec{p}_j^T \vec{\mu})$$
My multivariable calculus is admittedly rusty, and I have tried to obtain the Hessian, $\frac{\partial^2 g(\vec{\mu})}{\partial \vec{\mu}^2}$, using matrix differentiation. Using the numerator layout I proceeded as follows.
Using linearity:
$$\frac{\partial g(\vec{\mu})}{\partial \vec{\mu}} = - \sum_{j=1}^m \frac{\partial}{\partial \vec{\mu}}\left[\vec{p}_j^T\vec{\mu}\right] + \sum_{j=1}^m y_j\frac{\partial}{\partial \vec{\mu}}\left[\ln(\vec{p}_j^T\vec{\mu})\right]$$
Using the chain rule: $\frac{\partial g(u(\vec{x}))}{\partial \vec{x}} = \frac{\partial g}{\partial u} \frac{\partial u}{\partial \vec{x}}$
$$\frac{\partial g(\vec{\mu})}{\partial \vec{\mu}} = - \sum_{j=1}^m \frac{\partial}{\partial \vec{\mu}}\left[\vec{p}_j^T\vec{\mu}\right] + \sum_{j=1}^m y_j \frac{1}{\vec{p}_j^T\vec{\mu}}\frac{\partial}{\partial \vec{\mu}}\left[\vec{p}_j^T\vec{\mu}\right]$$
Using the common derivative: $\frac{\partial \vec{a}^T\vec{x}}{\partial \vec{x}} = \vec{a}^T$
$$\frac{\partial g(\vec{\mu})}{\partial \vec{\mu}} = - \sum_{j=1}^m \vec{p}_j^T + \sum_{j=1}^m y_j \frac{1}{\vec{p}_j^T\vec{\mu}}\vec{p}_j^T$$
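As a sanity check on this expression (purely illustrative, with made-up positive values for $\vec{p}_j$, $y_j$, and $\vec{\mu}$), a quick comparison against a central finite-difference gradient agrees with it:

```python
import numpy as np

# Purely illustrative check with made-up data: m terms, n parameters.
rng = np.random.default_rng(0)
m, n = 6, 3
P = rng.uniform(0.1, 1.0, size=(m, n))   # row j of P is p_j^T
y = rng.uniform(0.5, 2.0, size=m)
mu = rng.uniform(0.5, 2.0, size=n)       # interior point, so every p_j^T mu > 0

def g(mu):
    s = P @ mu                            # s_j = p_j^T mu
    return -s.sum() + (y * np.log(s)).sum()

# Gradient from the expression above (stored as a plain 1-D array of partials).
s = P @ mu
grad = -P.sum(axis=0) + P.T @ (y / s)

# Central finite-difference gradient for comparison.
eps = 1e-6
fd = np.array([(g(mu + eps * e) - g(mu - eps * e)) / (2 * eps) for e in np.eye(n)])

print(np.allclose(grad, fd, atol=1e-5))   # prints True for me
```

So I am fairly confident the first derivative is right.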
Taking the second derivative is where I am running into trouble.
Again beginning with linearity:
$$\frac{\partial^2 g(\vec{\mu})}{\partial \vec{\mu}^2} = - \sum_{j=1}^m \frac{\partial}{\partial \vec{\mu}}\left[\vec{p}_j^T\right] + \sum_{j=1}^m y_j \frac{\partial}{\partial \vec{\mu}}\left[\frac{1}{\vec{p}_j^T\vec{\mu}}\vec{p}_j^T\right]$$
The first term is zero, since it is the derivative of a constant vector.
$$\frac{\partial^2 g(\vec{\mu})}{\partial \vec{\mu}^2} = \sum_{j=1}^m y_j \frac{\partial}{\partial \vec{\mu}}\left[\frac{1}{\vec{p}_j^T\vec{\mu}}\vec{p}_j^T\right]$$
It is this final term that I do not know how to differentiate.
My intuition says that, since $\vec{p}_j^T$ is constant, it should be allowed to come out of the derivative:
$$\frac{\partial^2 g(\vec{\mu})}{\partial \vec{\mu}^2} = \sum_{j=1}^m y_j \frac{\partial}{\partial \vec{\mu}}\left[\frac{1}{\vec{p}_j^T\vec{\mu}}\right]\vec{p}_j^T$$
and then from the chain rule:
$$\frac{\partial^2 g(\vec{\mu})}{\partial \vec{\mu}^2} = \sum_{j=1}^m y_j \frac{-1}{(\vec{p}_j^T\vec{\mu})^2}\vec{p}_j^T\vec{p}_j^T$$
but the product $\vec{p}_j^T\vec{p}_j^T$ clearly does not make sense.
I was hoping someone might have some wisdom on where I went wrong. I have been using the following as my multivariable differentiation refresher: https://www.comp.nus.edu.sg/~cs5240/lecture/matrix-differentiation.pdf
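As a numerical sanity check (with made-up positive $\vec{p}_j$, $y_j$, and $\vec{\mu}$; an experiment, not a proof), I estimated the Hessian of $g$ by finite differences at a random interior point, and its eigenvalues all come out negative, which is at least consistent with the concavity I am trying to establish analytically:

```python
import numpy as np

# Made-up positive data again; this is an experiment, not a proof.
rng = np.random.default_rng(1)
m, n = 6, 3
P = rng.uniform(0.1, 1.0, size=(m, n))   # row j of P is p_j^T
y = rng.uniform(0.5, 2.0, size=m)
mu = rng.uniform(0.5, 2.0, size=n)       # random interior point

def g(mu):
    s = P @ mu
    return -s.sum() + (y * np.log(s)).sum()

# Hessian estimated entry-by-entry with central second differences.
eps = 1e-4
E = np.eye(n)
H = np.empty((n, n))
for i in range(n):
    for k in range(n):
        H[i, k] = (g(mu + eps*E[i] + eps*E[k]) - g(mu + eps*E[i] - eps*E[k])
                   - g(mu - eps*E[i] + eps*E[k]) + g(mu - eps*E[i] - eps*E[k])) / (4 * eps**2)

print(np.linalg.eigvalsh(H))   # all eigenvalues come out negative in this run
```

Of course this does not replace the matrix-calculus argument I am after.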
multivariable-calculus matrix-calculus maximum-likelihood
asked Jan 27 at 3:49 by Filip