How to solve the following combinatorial optimization problem?
Is there some efficient method to solve the following optimization problem? And if $x_i$ instead lies in a continuous set, is there an efficient method? Thanks.

$\min\; x_1+x_2+\dots+x_n$

subject to:

$a_1\log x_1 + a_2\log x_2 + \dots + a_n\log x_n \geq c$;

$x_i \in \{b_1, b_2, \dots, b_m\}$, where $b_j \in \mathbb{R}^+$.
Tags: optimization, discrete-optimization
asked Jan 28 at 4:53 by Timothy; edited Feb 6 at 20:58 by Rodrigo de Azevedo
Comments:

– Rodrigo de Azevedo (Jan 31 at 20:58): Is $m^n$ large? Have you considered brute force?

– Timothy (Feb 1 at 10:42): Brute force is not an efficient method.

– Rodrigo de Azevedo (Feb 1 at 22:12): Better an inefficient method than no method. Of course, you can always transform the constraints into polynomial equalities $(x_i - b_1)(x_i - b_2)\cdots(x_i - b_m) = 0$ and then use Lagrange multipliers.

– Timothy (Feb 2 at 0:33): The values of $x_i$ are discrete. Can numerical optimization methods be applied to the problem?

– Erwin Kalvelagen (Feb 6 at 3:26): If these polynomials were really a great idea, we wouldn't need MIP solvers anymore. We could just use $x(x-1)=0$ to implement a binary variable.
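For context, "brute force" here means enumerating all $m^n$ assignments of values to $(x_1,\dots,x_n)$. A minimal sketch in Python, with made-up data purely for illustration (not from the original question):

```python
import math
from itertools import product

# Made-up tiny instance (illustration only).
a = [1.0, 2.0]        # weights a_i
b = [1.0, 2.0, 3.0]   # allowed values b_j
c = 1.5               # right-hand side of the log constraint

best = None
for x in product(b, repeat=len(a)):                            # all m^n candidate assignments
    if sum(ai * math.log(xi) for ai, xi in zip(a, x)) >= c:    # feasibility check
        if best is None or sum(x) < sum(best):                 # keep the cheapest feasible one
            best = x
print(best, sum(best))
```

This is exponential in $n$, which is exactly why the question asks for something more efficient.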
1 Answer
I think this can be formulated as a linear MIP model. Not sure if that counts as efficient.

First we introduce binary variables
$$ y_{i,j} = \begin{cases} 1 & \text{if } x_i=b_j\\
0 & \text{otherwise}\end{cases}$$

Then we can formulate:
$$\begin{align} \min\ & \sum_i x_i \\
& x_i = \sum_j y_{i,j}\, b_j\\
& \mathit{logx}_i = \sum_j y_{i,j} \log(b_j)\\
& \sum_j y_{i,j} = 1 && \forall i\\
& \sum_i a_i\, \mathit{logx}_i \ge c \\
& y_{i,j} \in \{0,1\} \\
& x_i,\ \mathit{logx}_i \in \mathbb{R}
\end{align}$$

If you want to save a few variables and constraints, you can substitute out the variable $\mathit{logx}$. (I am usually not so stingy in that respect.) The more compact model would look like:
$$\begin{align} \min\ & \sum_i x_i \\
& x_i = \sum_j y_{i,j}\, b_j\\
& \sum_j y_{i,j} = 1 && \forall i\\
& \sum_{i,j} a_i \log(b_j)\, y_{i,j} \ge c \\
& y_{i,j} \in \{0,1\} \\
& x_i \in \mathbb{R}
\end{align}$$

We can even substitute out $x_i$, but then you would need to recover the $x_i$ afterwards from the optimal values $y_{i,j}^*$.

I am quite sure this will do much better than complete enumeration. Throw this at a high-performance MIP solver on a parallel machine and you can hopefully solve large models quickly. On my laptop with random data, a problem with $n=m=100$ took just a few seconds (of course, different data may give different timings).
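As a concrete illustration, here is a minimal sketch of the compact formulation in Python using PuLP with its bundled CBC solver; the package choice and the small data set are assumptions made for the example, not part of the original model. The sketch also substitutes out the $x_i$ (as mentioned above), so they are recovered from the optimal $y_{i,j}^*$ at the end.

```python
import math
import pulp  # assumption: the PuLP package with its bundled CBC solver is installed

# Made-up example data (not from the original post).
a = [1.0, 2.0, 1.5]          # weights a_i
b = [1.0, 2.0, 3.0, 4.0]     # allowed values b_j
c = 2.0                      # right-hand side
n, m = len(a), len(b)

prob = pulp.LpProblem("discrete_log_constraint", pulp.LpMinimize)

# y[i][j] = 1 if x_i takes the value b_j
y = pulp.LpVariable.dicts("y", (range(n), range(m)), cat="Binary")

# x_i written directly as sum_j y_{i,j} b_j (so x is substituted out as well)
x = [pulp.lpSum(b[j] * y[i][j] for j in range(m)) for i in range(n)]

prob += pulp.lpSum(x)                                          # objective: min sum_i x_i
for i in range(n):
    prob += pulp.lpSum(y[i][j] for j in range(m)) == 1         # exactly one b_j per x_i
prob += pulp.lpSum(a[i] * math.log(b[j]) * y[i][j]
                   for i in range(n) for j in range(m)) >= c   # the log constraint

prob.solve(pulp.PULP_CBC_CMD(msg=False))

# Recover x_i from the optimal y_{i,j}
x_opt = [sum(b[j] * round(y[i][j].value()) for j in range(m)) for i in range(n)]
print(pulp.LpStatus[prob.status], x_opt, sum(x_opt))
```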
answered Feb 5 at 19:37 by Erwin Kalvelagen; edited Feb 6 at 14:37
Comments:

– LinAlg (Feb 6 at 20:51): Well done! Any benefit from formulating $\sum_j y_{ij} = 1$ as SOS1 (since the $b_j$ are ordered)? A MINLP solver can solve the initial formulation, by the way (since its relaxation is convex).

– Erwin Kalvelagen (Feb 6 at 21:10): (1) SOS1 does not do bounding. In modern solvers, binary variables are often much better (e.g. cut generation). (2) In an MINLP you still need the $y_{i,j}$ to model the allowed values. I basically get the logs for free, as I already have the $y$'s anyway.

– LinAlg (Feb 6 at 21:53): What do you mean by "SOS1 does not do bounding"? On MINLP: you are right; now that I think about it, it is weird that MINLP solvers do not accept a discrete set as input. The branch-and-bound process is just as complex as for a set $[a,b] \cap \mathbb{Z}$.

– Erwin Kalvelagen (Feb 6 at 22:36): SOS1/SOS2 does very little to tighten the lower bound. Binary variables often do a very good job: cut generation is best when using binary variables. SOS2 has some value for modeling, but even for interpolation binary variables often work better.