Linear programming algorithm that minimizes number of non-zero variables?












I have real world problems I'm trying to programmatically solve in the form of



$$Z = c_1 x_1 + c_2 x_2 + \cdots + c_n x_n$$



Subject to



\begin{align}
& a_{11} x_1 + a_{21} x_2 + \cdots + a_{n1} x_n = b_1 \\[6pt]
& 0 \le a_{12} x_1 \le b_2 \\[6pt]
& 0 \le a_{23} x_2 \le b_3 \\
& \qquad\vdots \\
& 0 \le a_{nm} x_n \le b_m
\end{align}



Right now I'm using the simplex method. Because of the first constraint there are many feasible solutions, and I don't really care about minimizing or maximizing with respect to any particular coefficients. What I really care about is having as few non-zero variables in the solution as possible.



A good way to look at it: I have packages to deliver, and the $x$s are trucks. I don't care how long they take; I just want to use as few trucks as possible.



One strategy I came up with was to minimize $Z$ and set the objective coefficients to something like:



$$c_n = 10n$$



so that:



$$Z = 10 x_1 + 20 x_2 + \cdots + 10 n x_n$$



This mostly works, but it is a little hacky and, depending on the values of $a_{nm}$, at times inconsistent.



I'm not married to the simplex method, but it's fast and gives me a solution so I'm using it.
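To make the setup above concrete, here is a minimal sketch of the weighted-objective hack using SciPy's `linprog` (HiGHS backend). All numbers — the three "trucks", their capacities, and the demand — are hypothetical, chosen only to illustrate the formulation; the escalating weights $c_n = 10n$ steer the solver toward low-index trucks.

```python
# Sketch of the weighted-objective hack on a made-up 3-truck instance.
# All data below is hypothetical, for illustration only.
import numpy as np
from scipy.optimize import linprog

# Equality constraint: a11*x1 + a21*x2 + a31*x3 = b1 (total packages delivered)
A_eq = np.array([[2.0, 3.0, 1.0]])
b_eq = np.array([10.0])

# Capacity constraints 0 <= a_i * x_i <= b_i reduce to simple variable bounds:
# here b = [6, 9, 5], so x1 <= 3, x2 <= 3, x3 <= 5.
bounds = [(0, 6.0 / 2.0), (0, 9.0 / 3.0), (0, 5.0 / 1.0)]

# Escalating weights c_n = 10 n push the solver toward low-index trucks.
c = np.array([10.0, 20.0, 30.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print(res.x)
```

On this instance the solver fills truck 1 to capacity, covers the remainder with truck 2, and leaves truck 3 unused — two non-zero variables, which is what the weighting is hoping for, but nothing in the formulation guarantees it in general.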










  • I am a little confused by the description. Are the $c_{i}$'s given to you, or did you introduce them to promote sparsity? By sparsity I mean a small number of nonzero entries in $x$. Also, the simplex method is just a method for solving general linear programs; the real question is what the right linear program to formulate is. In general, the standard approach is to solve something like $\min \|x\|_{1}$ subject to your equality constraints. Vaguely speaking, minimizing $\|x\|_{1}$ pushes the number of nonzero entries lower.
    – megas, Dec 25 '14 at 4:53

  • Yes, the $c_i$s are to promote sparsity. At one point I did need the simplex method because I thought I wanted to minimize the objective function, but I found that, given the constraints, I cared less about the minimum and more about just finding a solution. It probably isn't the right linear program to use anymore, hence the question.
    – Nathanial, Dec 25 '14 at 5:10

  • I drafted an answer to your question. I hope it helps.
    – megas, Dec 25 '14 at 5:23

  • Penalizing the $L_1$ norm of $x$ in order to promote sparsity is a very popular technique in optimization.
    – littleO, Dec 25 '14 at 5:49


















linear-algebra optimization linear-programming simplex






edited Dec 25 '14 at 5:36 by Michael Hardy · asked Dec 25 '14 at 4:34 by Nathanial











2 Answers
You want to find a solution $x$ of your linear system (the set of equality and inequality constraints) that has the smallest number of nonzero entries.
That corresponds to minimizing the $l_{0}$ "norm" $\|x\|_{0}$, which is exactly the number of nonzero entries of $x$.
Unfortunately, this is no longer a linear program. In fact, it is a very hard problem to solve.

Instead, you can "relax" the problem a little bit and solve the following problem:
\begin{align}
\min \quad & \|x\|_{1}\\
\text{subject to:} \quad & \text{your linear constraints.}
\end{align}
Note that
$$
\|x\|_{1} = \sum_{i=1}^{n} |x_{i}|,
$$
and as you can see this intuitively promotes sparsity, because every nonzero entry adds to the penalty. More formally, $\|x\|_{1}$ is the convex envelope of $\|x\|_{0}$ (on the unit ball), but don't worry about that for now.
The bottom line is that you can solve the above problem and hope that you will get a sparse solution.

Note that the above minimization problem is actually a linear program, although you might not be able to tell because of the absolute values in $\|x\|_{1}$. If you are using a generic solver like CVX, then you can state the objective directly as the minimization of $\|x\|_{1}$ (something like $\text{norm}(x, 1)$). If not, you can look up how $l_{1}$-norm minimization is reformulated as a linear program.

– answered by megas, Dec 25 '14 at 5:22
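The reformulation mentioned at the end can be sketched explicitly: introduce auxiliary variables $t_i \ge |x_i|$ via the pair of linear constraints $x_i - t_i \le 0$ and $-x_i - t_i \le 0$, then minimize $\sum_i t_i$. Below is a minimal illustration with SciPy's `linprog`; the single equality constraint is hypothetical and stands in for "your linear constraints".

```python
# How min ||x||_1 becomes a linear program: decision vector z = [x, t],
# constraints x - t <= 0 and -x - t <= 0 enforce t_i >= |x_i|,
# and the objective minimizes sum(t). Data is hypothetical.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0, -1.0]])   # hypothetical equality constraint A x = b
b = np.array([4.0])
n = A.shape[1]

c = np.concatenate([np.zeros(n), np.ones(n)])   # minimize 0*x + 1*t

I = np.eye(n)
A_ub = np.block([[I, -I], [-I, -I]])            #  x - t <= 0,  -x - t <= 0
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((1, n))])         # original constraints touch x only

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * n + [(0, None)] * n, method="highs")
x = res.x[:n]
print(x)
```

On this toy instance the $l_1$-optimal solution concentrates all the weight on the variable with the largest coefficient, leaving the other entries at zero — the sparsity-promoting behavior described above.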






  • It's for software I'm writing. It looks like I may be able to use GLPK for $l_1$-norm minimization. Thank you.
    – Nathanial, Dec 25 '14 at 6:34

  • Good luck! If you are using Python, keep in mind that CVXOPT is also available as a Python package.
    – megas, Dec 25 '14 at 6:44

  • So I want the CVXOPT $l_1$ norm approximation, right? cvxopt.org/examples/mlbook/l1.html
    – Nathanial, Jan 14 '15 at 0:53

  • I guess... with $P$ set to the identity matrix and $q$ set to the $0$ vector.
    – megas, Jan 14 '15 at 3:36

  • That's what I'm doing; I just wanted to make sure the $l_1$ norm approximation is the same as what we are talking about. Thanks!
    – Nathanial, Jan 14 '15 at 18:22



















This problem is studied here:

On the approximability of minimizing nonzero variables or unsatisfied relations in linear systems, by Edoardo Amaldi and Viggo Kann, Theoretical Computer Science, December 1998.

In short: a solution with the smallest number of nonzero variables cannot be found in polynomial time, and cannot even be approximated to within any constant factor, unless P=NP.

– answered by Erel Segal-Halevi, Jan 1 at 9:39 (edited Jan 1 at 11:30)
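The hardness result rules out efficient algorithms, but for modest problem sizes the nonzero count can still be minimized exactly as a mixed-integer program: a binary $y_i$ indicates whether $x_i$ is allowed to be nonzero, linked by a big-$M$ constraint $x_i \le M y_i$. A sketch using SciPy's MILP interface (available in SciPy 1.9+); the constraint data and the bound $M$ are hypothetical.

```python
# Exact sparsest solution via MILP: decision vector z = [x, y],
# minimize sum(y) subject to A x = b and x <= M * y with y binary.
# Assumes x >= 0 (as in the question) and a valid upper bound M on each x_i.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

A = np.array([[2.0, 3.0, 1.0]])    # hypothetical equality constraint A x = b
b = 10.0
n = A.shape[1]
M = 10.0                           # big-M: upper bound on each x_i

c = np.concatenate([np.zeros(n), np.ones(n)])   # objective: count of used x_i

cons = [
    LinearConstraint(np.hstack([A, np.zeros((1, n))]), b, b),              # A x = b
    LinearConstraint(np.hstack([np.eye(n), -M * np.eye(n)]), -np.inf, 0), # x <= M y
]
integrality = np.concatenate([np.zeros(n), np.ones(n)])  # y_i integer (binary)
bounds = Bounds(np.zeros(2 * n), np.concatenate([np.full(n, M), np.ones(n)]))

res = milp(c=c, constraints=cons, integrality=integrality, bounds=bounds)
x, y = res.x[:n], res.x[n:]
print(x, int(round(res.fun)))
```

This scales exponentially in the worst case (consistent with the hardness result), but branch-and-bound solvers handle many practical instances quickly, and it gives a certificate of the true minimum that the $l_1$ relaxation cannot.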





