Newton's method - optimal step size
I am wondering how to prove the following statement:
We have: minimize $f(x) = \frac{1}{2}x^T D x - c^T x$
with $f: \mathbb{R}^n \to \mathbb{R}$ and $D$ a symmetric positive definite matrix of size $n \times n$.
Suppose $d_k = -\nabla f(x_k)$ is a descent direction at $x_k$.
Show that the optimal solution to the problem $\min\limits_{\sigma_k > 0} f(x_k + \sigma_k d_k)$ is:
$\sigma_k = -\frac{\nabla f(x_k)^T d_k}{d_k^T D d_k}$.
Thanks for your help!
Louis
optimization proof-writing nonlinear-optimization
asked Nov 20 '18 at 21:50 – Louis-Philippe Noël
1 Answer
Since this is a smooth convex problem ($D$ is symmetric positive definite), to minimize $\sigma_k \mapsto f(x_k + \sigma_k d_k)$ you can search for $\sigma_k$ such that
$$
\frac{d}{d\sigma_k} f(x_k + \sigma_k d_k) = 0.
$$
Hint:
$$
\frac{d}{d\sigma_k} f(x_k + \sigma_k d_k) = \langle d_k, \nabla f(x_k + \sigma_k d_k)\rangle,
$$
where $\langle\cdot,\cdot\rangle$ is the usual scalar product and $\nabla f(x_k + \sigma_k d_k) = D(x_k + \sigma_k d_k) - c = \nabla f(x_k) + \sigma_k D d_k$.
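Spelling the hint out (this is just the algebra the hint leaves to the reader): setting this derivative to zero and solving for $\sigma_k$ gives
$$
\langle d_k, \nabla f(x_k) + \sigma_k D d_k\rangle
= \nabla f(x_k)^T d_k + \sigma_k\, d_k^T D d_k = 0
\quad\Longrightarrow\quad
\sigma_k = -\frac{\nabla f(x_k)^T d_k}{d_k^T D d_k},
$$
which is exactly the claimed step size. It is positive because $d_k$ is a descent direction ($\nabla f(x_k)^T d_k < 0$) and $d_k^T D d_k > 0$ since $D$ is positive definite.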
Note:
This method is gradient descent with the Cauchy step (exact line search), not the Newton method. The Newton method uses a second-order correction,
$$
x_{k+1} = x_k - H_f^{-1}(x_k)\,\nabla f(x_k),
$$
where $H_f$ is the Hessian. With your $f$, $H_f(x_k) = D$ (constant) and $\nabla f(x_k) = D x_k - c$, thus:
$$
x_{k+1} = x_k - H_f^{-1}(x_k)\,\nabla f(x_k) = x_k - D^{-1}(D x_k - c) = D^{-1} c,
$$
and the method converges in one iteration to the solution $x^* = D^{-1} c$.
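A minimal numerical sanity check of both facts (my own sketch, not part of the original answer; the variable names are hypothetical and it only uses NumPy on a random positive definite $D$):

```python
# Sanity check: build a random symmetric positive definite D, take one
# steepest-descent step with the Cauchy step size, and verify that a single
# Newton step lands on x* = D^{-1} c.
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
D = A @ A.T + n * np.eye(n)            # symmetric positive definite
c = rng.standard_normal(n)
x = rng.standard_normal(n)             # arbitrary starting point

f = lambda v: 0.5 * v @ D @ v - c @ v
grad = lambda v: D @ v - c

# Steepest descent with the exact (Cauchy) step size
d = -grad(x)
sigma = -(grad(x) @ d) / (d @ D @ d)   # = (d @ d) / (d @ D @ d) > 0
x_cauchy = x + sigma * d
assert abs(d @ grad(x_cauchy)) < 1e-9  # derivative along d vanishes: optimal step
assert f(x_cauchy) < f(x)              # and the step decreases f

# Newton step: one iteration reaches the minimizer of the quadratic
x_newton = x - np.linalg.solve(D, grad(x))
x_star = np.linalg.solve(D, c)
assert np.allclose(x_newton, x_star)

print(f(x), f(x_cauchy), f(x_star))
```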
In practice, if we do not want to solve $D x = c$ directly, we still use first-order methods, but not steepest descent: its convergence is very slow when $D$ is badly conditioned. The conjugate gradient method is generally a better choice.
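To illustrate that last remark, here is a plain conjugate gradient loop for $D x = c$ (again a sketch of my own, with hypothetical names; it is the textbook algorithm, needing only matrix-vector products with $D$ and terminating in at most $n$ steps in exact arithmetic):

```python
import numpy as np

def conjugate_gradient(D, c, x0, tol=1e-10):
    """Solve D x = c, i.e. minimize 0.5 x^T D x - c^T x, for symmetric positive definite D."""
    x = np.array(x0, dtype=float)
    r = c - D @ x                    # residual, equal to -grad f(x)
    p = r.copy()                     # first search direction
    rs = r @ r
    for _ in range(len(c)):          # at most n iterations in exact arithmetic
        Dp = D @ p
        alpha = rs / (p @ Dp)        # exact line search along p
        x += alpha * p
        r -= alpha * Dp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p    # next direction, D-conjugate to the previous ones
        rs = rs_new
    return x
```

Calling `conjugate_gradient(D, c, np.zeros_like(c))` should agree with `np.linalg.solve(D, c)` up to the tolerance, and it typically copes much better with badly conditioned $D$ than steepest descent does.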
answered Nov 20 '18 at 22:31, edited Nov 20 '18 at 22:46 – Picaud Vincent
Thanks @Picaud Vincent! Exactly what I needed.
– Louis-Philippe Noël
Nov 21 '18 at 19:08
@Louis-PhilippeNoël so maybe you can upvote? :)
– Picaud Vincent
Nov 21 '18 at 19:15