Solve $\min_{x} \frac{1}{N}\sum_{i=1}^{N} f_i(x_i) + g(x)$ s.t. $Ax = b$, where $x = [x_1,\ldots,x_N]^T$ and $A \in \mathbb{R}^{M \times N}$


























An optimization problem:
\begin{equation*}
\begin{aligned}
& \underset{ x \in \mathbb{R}^N }{\text{minimize}}
& & \frac{1}{N} \sum_{i=1}^{N} f_i\left(x_i\right) + g(x) \\
& \text{subject to}
& & A x = b ,
\end{aligned}
\end{equation*}

where $x = [x_1,\ldots,x_N]^T$, $A \in \mathbb{R}^{M \times N}$, each $f_i$ is $m$-strongly convex and $L$-smooth, and $g$ is twice differentiable.



Question(s):




  • How can one solve such a problem when, say, $N \geq 10^{10}$? Can a second-order method, e.g., Newton's method, be used at that scale?

  • (But I think a Newton-like method is feasible for $N \leq 5000$, right?)


I am sure there are many methods for solving such problems, but I would be happy to hear a few from you experts, for learning purposes.



ADD:
Additionally, assume $M \leq N$, so $A$ is a fat (wide) matrix.
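Since the smooth sum is separable across coordinates, one family of methods that scales to very large $N$ is dual decomposition: ascend on the multipliers of $Ax = b$, and for each multiplier value the $x$-update splits into $N$ independent scalar problems. A minimal sketch, assuming $g \equiv 0$ and quadratic $f_i(x_i) = (x_i - c_i)^2$ (both illustrative choices, not from the question):

```python
import numpy as np

# Dual decomposition sketch: g is dropped and f_i(x_i) = (x_i - c_i)^2 is an
# illustrative strongly convex, smooth choice; each x_i-update is then an
# independent scalar problem, which is what makes the method scale in N.
rng = np.random.default_rng(0)
M, N = 5, 200
A = rng.standard_normal((M, N))
c = rng.standard_normal(N)
b = A @ (c + 0.1 * rng.standard_normal(N))

nu = np.zeros(M)                          # multipliers for Ax = b
t = 2.0 / (N * np.linalg.norm(A, 2) ** 2) # 1 / Lipschitz const. of dual gradient
for _ in range(5000):
    # argmin_x (1/N) sum_i (x_i - c_i)^2 + nu^T (A x - b): closed form per coordinate
    x = c - (N / 2.0) * (A.T @ nu)
    nu += t * (A @ x - b)                 # dual gradient ascent on the residual

print(np.linalg.norm(A @ x - b))          # constraint residual, should be tiny
```

Only matrix-vector products with $A$ and coordinate-wise updates appear, so the per-iteration cost is linear in the number of nonzeros of $A$.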

































  • Maybe solving this problem in two steps? – Bertrand, Jan 26 at 9:45










  • Please suggest how to do it in two steps. – learning, Jan 26 at 10:00










  • If $\operatorname{rk}(A) = N \leq M$, the unique solution to your system is $x = (A^TA)^{-1}A^Tb$, so you have no degrees of freedom left to minimize your objective function. If $\operatorname{rk}(A) < N \leq M$, you may be able to use generalized inverses of $A$ to narrow down the set of solutions and at the same time eliminate the linear constraints. – Bertrand, Jan 26 at 10:28










  • Sorry, I should have said that $\operatorname{rk}(A) = M \leq N$. – learning, Jan 26 at 10:38
















optimization convex-optimization nonlinear-optimization numerical-optimization






edited Jan 26 at 10:33







learning

















asked Jan 26 at 8:40









learning












1 Answer
Maybe solve this problem in two steps?

1) Add the "fictive" constraint $g(x) = y$ to your original problem and obtain the conditional solution $x^*(y)$.

EDIT: This gives
\begin{equation*}
\begin{aligned}
& \underset{ x \in \mathbb{R}^N }{\text{minimize}}
& & \frac{1}{N} \sum_{i=1}^{N} f_i\left(x_i\right) + y \\
& \text{subject to}
& & A x = b \text{ and } g(x) = y,
\end{aligned}
\end{equation*}
which yields $x^*(y)$. Note that $A x^*(y) = b$ and $g(x^*(y)) = y$.

2) Then relax the "fictive" constraint and minimize over $y$:
\begin{equation*}
\begin{aligned}
& \underset{ y \in \mathbb{R} }{\text{minimize}}
& & \frac{1}{N} \sum_{i=1}^{N} f_i\left(x_i^*(y)\right) + y .
\end{aligned}
\end{equation*}

This yields the overall solution(s) $x^{**}$, which also satisfy the requirement $y = g(x^{**})$.

For existence and uniqueness of the solution, however, it would help to know a bit more about $f$ and $g$ (are they monotone, convex, single-peaked?).

Of course, the equivalence between the one-step and two-step optimization procedures does not hold in all situations, but under suitable assumptions on $f$ and $g$ they are equivalent.
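As a numerical sanity check of this two-step scheme, here is a small sketch in which everything stays linear-quadratic, so each step is an equality-constrained QP solved through its KKT system. The choices $f_i(x_i) = (x_i - c_i)^2$ and $g(x) = \mathbf{1}^T x$ are purely illustrative; with this linear $g$ the fictive constraint is itself linear, and the one-step/two-step equivalence holds exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 2, 6
A = rng.standard_normal((M, N))
c = rng.standard_normal(N)
b = A @ c * 0.5
ones = np.ones(N)  # illustrative g(x) = 1^T x

def qp(C, d, q):
    """Minimize (1/N)||x - c||^2 + q^T x subject to Cx = d, via its KKT system."""
    k = C.shape[0]
    K = np.block([[(2.0 / N) * np.eye(N), C.T],
                  [C, np.zeros((k, k))]])
    rhs = np.concatenate([(2.0 / N) * c - q, d])
    return np.linalg.solve(K, rhs)[:N]

# One-step reference: minimize (1/N) sum_i f_i(x_i) + g(x) subject to Ax = b.
x_direct = qp(A, b, ones)
val_direct = np.mean((x_direct - c) ** 2) + ones @ x_direct

# Step 1: for fixed y, add the fictive constraint g(x) = y and solve for x*(y).
def V(y):
    C = np.vstack([A, ones])
    x = qp(C, np.concatenate([b, [y]]), np.zeros(N))
    return np.mean((x - c) ** 2) + y

# Step 2: here V is quadratic in y, so three samples pin down its exact minimizer.
a2, a1, _ = np.polyfit([-1.0, 0.0, 1.0], [V(t) for t in (-1.0, 0.0, 1.0)], 2)
y_star = -a1 / (2.0 * a2)
print(abs(V(y_star) - val_direct))  # difference between the two procedures
```

For a nonlinear $g$, step 1 becomes a nonconvex equality-constrained problem and step 2 a general scalar search, which is where the "suitable assumptions" in the answer matter.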






  • What is $y$? And how is the constraint $Ax = b$ incorporated? – learning, Jan 26 at 10:34












  • Say $g(x) = \|z-x\|_2^2$; then how does $g(x) = y$ relate to $Ax - b = 0$? – learning, Jan 26 at 10:37










  • See the EDIT lines, which try to answer this point. – Bertrand, Jan 26 at 10:46












  • I am still not able to figure it out. Say I form the Lagrangian in step 1, $L(x(y),\nu) = \frac{1}{N} \sum_{i=1}^{N} f_i\left(x_i\right) + y + \nu^T \left(A x - b \right)$, then solve $\nabla_{x(y), \nu} L = 0$ to obtain $x^*(y)$. After computing $x^*(y)$, I solve the problem in step 2 for $y$. Can this be done at large scale, $N \geq 10^{10}$? – learning, Jan 26 at 12:29












  • I think numerical methods cannot solve this for, say, $N > 10000$. You should find an explicit solution, at least to step 1, and ideally to the whole problem. What are the expressions for $f_i$ and $g$? Perhaps you can also say something about your "empirical mean" function $N^{-1}\sum_{i=1}^{N} f_i\left(x_i\right)$; can you apply the law of large numbers here? – Bertrand, Jan 26 at 12:42











edited Jan 27 at 15:00

























answered Jan 26 at 10:02









Bertrand











