How to show convergence of the Frank-Wolfe algorithm (or conditional gradient method)?
Let $g(w)$ be a differentiable convex function. The Frank-Wolfe algorithm over a convex set $C \subseteq \mathbb{R}^n$ (typically assumed compact, so that the linear subproblem below has a minimizer) seeks a minimizer of $g$ over $C$ via the iteration
$$
s_{t+1}=\operatorname*{argmin}_{s \in C} \langle s, \nabla g(w_t) \rangle \tag{1}
$$
$$
w_{t+1} = (1-\eta_t)w_{t}+\eta_t s_{t+1} \tag{2}
$$
where $\eta_t \in [0,1]$ is the step size and $t=1,2,\dots,T$.
The intuition behind the first step is that we find the point of $C$ most aligned with the negative gradient: minimizing $\langle s, \nabla g(w_t)\rangle$ over $C$ is the same as maximizing $\langle s, -\nabla g(w_t)\rangle$, i.e. maximizing the component of $s$ along $-\nabla g(w_t)$. The intuition behind the second step is that if $w_1 \in C$, then every iterate stays in $C$: $w_{t+1}$ is a convex combination of $w_t$ and $s_{t+1}$, both of which lie in $C$.
I have drawn the following picture for better understanding.
Note: if $C$ is a polytope, then $s_{t+1}$ is attained at a vertex of the polytope, because $(1)$ is a linear program.
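For concreteness, here is a minimal sketch of the iteration in Python. The objective $g(w)=\tfrac12\|Aw-b\|^2$ and the probability-simplex constraint are my own illustrative choices, not part of the question; over the simplex the linear subproblem $(1)$ is solved by picking the vertex $e_i$ with the smallest gradient coordinate, and the classical step size $\eta_t = 2/(t+2)$ is used.

```python
import numpy as np

def frank_wolfe_simplex(A, b, T=200):
    """Minimize g(w) = 0.5*||A w - b||^2 over the probability simplex
    using the Frank-Wolfe iteration (1)-(2) with step size eta_t = 2/(t+2)."""
    n = A.shape[1]
    w = np.ones(n) / n                      # w_1: any feasible starting point
    for t in range(1, T + 1):
        grad = A.T @ (A @ w - b)            # gradient of g at w_t
        # Step (1): linear minimization oracle. Over the simplex the minimizer
        # of <s, grad> is the vertex e_i with the smallest gradient coordinate.
        i = np.argmin(grad)
        s = np.zeros(n)
        s[i] = 1.0
        # Step (2): convex combination keeps w_{t+1} inside the simplex.
        eta = 2.0 / (t + 2)
        w = (1 - eta) * w + eta * s
    return w

# Small usage example (random data, just to illustrate the call):
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))
b = rng.standard_normal(30)
w_star = frank_wolfe_simplex(A, b)
print(w_star, w_star.sum())                 # feasible: nonnegative, sums to 1
```

Note that the oracle in step (1) just picks a coordinate: since the subproblem is a linear program over the simplex, its minimizer can always be taken at a vertex.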
My questions are:
1. Why does this work?
2. What is the proof of convergence?
3. When $-\nabla g(w_t)$ is not in $C$, why is $s_{t+1}$ on the boundary of $C$?
optimization convex-analysis convex-optimization numerical-optimization
asked Feb 1 at 17:56 by Saeed
This blog post has a section on convergence theory of the FW algorithm that may help answer your question. – David M., Feb 3 at 2:40