Is There an Induction-Free Proof of the 'Be The Leader' Lemma?














This lemma is used in the context of online convex optimisation. It is sometimes called the 'Be the Leader Lemma'.




Lemma:
Suppose $f_1,f_2,\ldots,f_N$ are real valued functions with the same domain. Define each $F_n = f_1 + \ldots + f_n$ and let $y_n$ be a minimiser of $F_n$. Then we have $\sum_{i=1}^N f_i(y_i) \le \sum_{i=1}^N f_i(x)$ for all $x$ in the domain.




At first glance I couldn't see any reason this should be true. Obviously the more control we have over the various inputs, the smaller we can make the sum. For example, if each $y_n$ were a minimiser of $f_n$ the result would be obvious. But that's not how the $y_n$ are defined. In fact each $y_n$ seems to depend only very weakly on the function we're plugging it into, and more weakly still as $n$ gets larger.



In the online optimisation context we receive enemy convex functions $f_1,f_2,\ldots$ with the same (compact convex) domain and we must select inputs $x_1,x_2,\ldots$ for $f_1,f_2,\ldots$ respectively. The catch is that we must select each $x_{n+1}$ knowing only $x_1,x_2,\ldots, x_n$ and $f_1,f_2,\ldots, f_n$, without knowing the function $f_{n+1}$ we're going to plug $x_{n+1}$ into.
Our goal is to make the average regret $\frac{1}{n}\sum_{i=1}^n \big(f_i(x_i) - f_i(x)\big)$ tend to zero for each $x$ in the domain as $n \to \infty$.
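To make the protocol concrete, here is a minimal sketch in Python. The random quadratic losses, the interval $[-1,1]$, the grid search, and the particular rule "play the minimiser of everything seen so far" (often called Follow the Leader) are all just illustrative choices of mine; whether that rule achieves the goal is a separate question. The point is only to show which information is available when each $x_{n+1}$ is chosen, and what the regret quantity is.

```
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(-1.0, 1.0, 2001)   # the compact domain, discretised for simplicity

# Illustrative enemy losses: random convex quadratics f_i(x) = a_i * (x - b_i)**2.
N = 200
a = rng.uniform(0.5, 2.0, size=N)
b = rng.uniform(-1.0, 1.0, size=N)

def f(i, x):
    return a[i] * (x - b[i]) ** 2

x_play = np.zeros(N)                  # x_play[n] is chosen before f_n is revealed
F = np.zeros_like(grid)               # running sum f_1 + ... + f_n evaluated on the grid
for n in range(N):
    if n > 0:
        x_play[n] = grid[np.argmin(F)]   # uses only f_1, ..., f_{n-1} ("follow the leader")
    F += f(n, grid)                      # now f_n is revealed and added to the running sum

player_loss = sum(f(i, x_play[i]) for i in range(N))
best_fixed = F.min()                  # min over the grid of sum_i f_i(x)
print("average regret:", (player_loss - best_fixed) / N)
```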



Since each $y_{n+1}$ in the lemma requires us to look ahead in time to see $f_{n+1}$, we cannot use the lemma directly to get a valid strategy. But there are some strategies which you can prove are valid by comparing them to the 'Be the Leader strategy'.
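As I understand it, such a comparison rests on splitting the regret of the actual strategy into a part that compares it to the Be-the-Leader points and a part that the lemma controls:

$$\sum_{i=1}^n \big(f_i(x_i) - f_i(x)\big) \;=\; \sum_{i=1}^n \big(f_i(x_i) - f_i(y_i)\big) \;+\; \sum_{i=1}^n \big(f_i(y_i) - f_i(x)\big),$$

and the second sum is $\le 0$ for every $x$ by the lemma, so it is enough to show the played points $x_i$ do not do much worse than the leaders $y_i$.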



The proof of the lemma is surprisingly short. It uses induction and I find it completely unenlightening.



Suppose the lemma holds for some value of $N$ (the base case $N=1$ just says $f_1(y_1) \le f_1(x)$, which holds because $y_1$ minimises $F_1 = f_1$). That is to say



$\sum_{i=1}^N f_i(y_i) \le \sum_{i=1}^N f_i(x)$ for all $x$ in the domain.



Next we add $f_{N+1}(x)$ to both sides:



$\sum_{i=1}^N f_i(y_i) + f_{N+1}(x) \le \sum_{i=1}^{N+1} f_i(x)$ for all $x$ in the domain.



In particular we have



$\sum_{i=1}^N f_i(y_i) + f_{N+1}(y_{N+1}) \le \sum_{i=1}^{N+1} f_i(y_{N+1})$.



But since $y_{N+1}$ minimises the right-hand side, we can replace it with any $x$ in the domain to get:



$\sum_{i=1}^{N+1} f_i(y_i) \le \sum_{i=1}^{N+1} f_i(x)$ for all $x$ in the domain.



This doesn't clear up the problems with the intuition about how each $y_n$ depends only very weakly on $f_n$. Can anyone see a way to prove the lemma without using induction? Perhaps by bundling together the $n$ many leapfrogging optimisations into a single optimisation of a function with some sort of complex domain?



The Be The Leader Lemma makes no assumptions about the domain or about the functions. In particular it does not require the $f_i$ to be continuous. If it allows for a more transparent proof, feel free to assume, for example, that the functions are continuous and the domain is a compact convex subset of some $\mathbb{R}^M$.
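For what it's worth, here is a quick numerical sanity check of the lemma, again an illustrative sketch of mine: random quadratics, with a finite grid standing in for the domain (the lemma applies to the grid itself, so the inequality should hold exactly up to rounding).

```
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(-1.0, 1.0, 2001)           # take the grid itself as the (finite) domain

N = 100
a = rng.uniform(0.5, 2.0, size=N)
b = rng.uniform(-1.0, 1.0, size=N)
losses = a[:, None] * (grid[None, :] - b[:, None]) ** 2   # losses[i, j] = f_i(grid[j])

F = np.cumsum(losses, axis=0)                 # F[n-1] is F_n = f_1 + ... + f_n on the grid
y_idx = F.argmin(axis=1)                      # index of a minimiser y_n of F_n
btl_sum = losses[np.arange(N), y_idx].sum()   # sum_i f_i(y_i)
best_fixed = F[-1].min()                      # min_x sum_i f_i(x)
assert btl_sum <= best_fixed + 1e-9           # the claimed inequality
print(btl_sum, "<=", best_fixed)
```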










ca.classical-analysis-and-odes convex-optimization machine-learning induction






edited Jan 9 at 17:03 by Yemon Choi
asked Jan 9 at 14:36 by Daron






If $y$ is a minimizer of $\sum_{i=1}^N f_i$ and we set $y_i= y$ for all $i$ then $y_i$ depends even less on $f_i$ and the lemma is trivial to prove, so I don't see where the intuition is coming from.
– Will Sawin, Jan 9 at 15:55
























1 Answer



















Is the following any clearer? (Read the subscripts carefully!)



$$\begin{array}{ll}
f_1(y_1) + f_2(y_2) + f_3(y_3) + \cdots + f_n(y_n) &\leq \\
f_1(y_2) + f_2(y_2) + f_3(y_3) + \cdots + f_n(y_n) &\leq \\
f_1(y_3) + f_2(y_3) + f_3(y_3) + \cdots + f_n(y_n) &\leq \\
\qquad \qquad \qquad \vdots &\leq \\
f_1(y_n) + f_2(y_n) + f_3(y_n) + \cdots + f_n(y_n) &\leq \\
f_1(x) + f_2(x) + f_3(x) + \cdots + f_n(x). &\\
\end{array}$$
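Each inequality only changes the arguments of the first few terms: passing from the $k$th line to the $(k+1)$st replaces $y_k$ by $y_{k+1}$ in the first $k$ terms, which is exactly the instance

$$F_k(y_k) = f_1(y_k) + \cdots + f_k(y_k) \;\le\; f_1(y_{k+1}) + \cdots + f_k(y_{k+1}) = F_k(y_{k+1})$$

of the hypothesis that $y_k$ minimises $F_k$ (with the convention $y_{n+1} = x$ for the last step).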



I don't think this is different from the induction proof in any serious way, but sometimes "$\cdots$" is more intuitive than a formal induction.





Here is a remark that might be useful: There are only $n(n+1)$ relevant variables: $f_i(y_j)$ and $f_i(x)$. The hypotheses are linear inequalities between them, and so is the conclusion.



One way to state Farkas' Lemma (aka Linear Programming Duality) is that, if we have a collection of inequalities $\sum_i A_{ij} x_i \geq 0$ in variables $x_i$, and they imply that $\sum_i b_i x_i \geq 0$, then there must be some nonnegative $c_j$ such that $\sum_i b_i x_i = \sum_j c_j \left( \sum_i A_{ij} x_i \right)$. In words, if some collection of linear inequalities implies another linear inequality, then there must be a linear proof.



So there had to be a way to write the conclusion as a linear consequence of the hypotheses, and I just had to find it.
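Concretely, with the convention $y_{n+1} = x$, the certificate here is the simplest possible one: taking each of the $n$ instances $F_k(y_{k+1}) - F_k(y_k) \ge 0$ with coefficient $c_k = 1$ and summing telescopes to the conclusion,

$$\sum_{k=1}^{n} \big( F_k(y_{k+1}) - F_k(y_k) \big) = \sum_{i=1}^{n} \big( f_i(y_{n+1}) - f_i(y_i) \big) = \sum_{i=1}^{n} f_i(x) - \sum_{i=1}^{n} f_i(y_i) \;\ge\; 0,$$

which is just the chain of inequalities above added up.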






answered Jan 9 at 15:56 (edited Jan 9 at 16:22) – David E Speyer




Note that this proof uses each hypothesis "$y_i$ is the minimizer of $F_i$" exactly once, on the $i$th line. This suggests the proof can't be simplified, as we can't avoid using any of the hypotheses as then the lemma is false.
– Will Sawin, Jan 9 at 15:58

My professor was not pleased when, asked to "prove X must exist", I instead produced a "proof that a proof of X must exist must exist" and then said that since the proof must exist, so must X.
– Joshua, Jan 9 at 19:30

@Joshua Did you produce a proof that if a proof that a proof of X must exist must exist exists, then X must exist?
– Acccumulation, Jan 9 at 23:04

@Acccumulation: Sounds like infinite regress.
– Joshua, Jan 9 at 23:58

@Joshua This discussion has been had before: en.wikipedia.org/wiki/What_the_Tortoise_Said_to_Achilles
– Ketil Tveiten, Jan 10 at 12:23










