Fast algorithm for solving system of linear equations
I have a system of $N$ linear equations, $Ax=b$, in $N$ unknowns (where $N$ is large).
If I am interested in the solution for only one (or a few) of the unknowns, what are the best approaches?
For example, assume $N=50,000$ and we want the solution for $x_1$ through $x_{100}$ only. Is there any trick that avoids the $O(N^{3})$ cost of a full factorization or matrix inversion?
linear-algebra systems-of-equations numerical-linear-algebra
asked Apr 1 '11 at 14:47
mghandi
edited Jun 16 at 21:07
Rodrigo de Azevedo
Is $A$ full or sparse? Are you doing this once or many times with the same $A$?
– joriki
Apr 1 '11 at 16:56
$A$ is not sparse and has many nonzero elements; however, the coefficients themselves are derived from a smaller set of variables.
– mghandi
Apr 2 '11 at 3:37
About your second question: yes, I'm doing this many times. I'm familiar with the LU-decomposition trick for speeding things up when the coefficient matrix is unchanged. Are there any other tricks that make it even more efficient?
– mghandi
Apr 2 '11 at 3:51
You may apply iterative methods (CG if your matrix is SPD, GMRES or something similar otherwise). You may also want to ask at scicomp.stackexchange.com.
– Dirk
Dec 15 '12 at 17:43
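To illustrate the two suggestions in these comments, here is a minimal sketch (the matrix, sizes, and tolerances are synthetic and purely illustrative, not taken from the question): factor $A$ once with LU and reuse the factors for every new right-hand side, or, for a single solve, use an iterative method such as GMRES that touches $A$ only through matrix-vector products.

```python
# Hedged sketch of the two comment suggestions: reuse an LU factorization
# across many right-hand sides, and use an iterative solver for a single solve.
# The matrix below is synthetic and made diagonally dominant so it is well
# conditioned; sizes and tolerances are illustrative only.
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import gmres

n = 2000
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n)) + n * np.eye(n)   # illustrative well-conditioned matrix
b = rng.standard_normal(n)

# Same A, many b: pay the O(n^3) factorization once, then O(n^2) per solve.
lu, piv = lu_factor(A)
x = lu_solve((lu, piv), b)          # repeat this line for each new b
print(x[:5])                        # only the first few unknowns are of interest

# Single solve without factoring: GMRES uses A only through products A @ v.
x_it, info = gmres(A, b)            # info == 0 means it converged
print(info, np.allclose(x_it[:5], x[:5], atol=1e-4))
```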
3 Answers
There is a way to reduce the complexity and make the system solvable in parallel.
It is called diakoptics, a method invented by Gabriel Kron. Its primary use is for large electrical networks that have few interconnections, such as power grids, but you should be able to adapt it.
For the case below, the complexity is reduced from $O(n^3)$ to $O(2(\frac{n}{2})^3) = O(\frac{1}{4}n^3)$; the impact can be much greater if the system is divided into more subsystems. In that case the complexity is $O(c^3)+O(\frac{n^3}{s^2})$, where $s$ is the number of subsystems and $c$ the number of interconnection points, assuming the system is divided into equally sized subsystems. I'm not sure about the notation for multiple variables, but you should get the point.
In short:
Assume you have an $N \times N$ system and that you can divide it into two systems with one connection point (plus a reference, when you look at electrical systems). The connection points are $m$ and $n$. Assume, for simplicity's sake, that these systems are of size $N_1=N/2$ and $N_2=N/2$. You now solve them separately:
$\mathbf A_1^{-1}=\mathbf B_1$
$\mathbf A_2^{-1}=\mathbf B_2$
The next step is to put them back together. That is done with the help of the so-called "Thevenin matrix" (in our case it is $1 \times 1$). You can look up the exact principle for higher orders (more connection points), but for this example it looks like
\begin{align}
\mathbf{B_{TH}}=B_{1mm}+B_{2nn}-2B_{mn}.
\end{align}
In our case $B_{mn}=0$. Now we need the solutions $x_1$ and $x_2$ to form the coefficients $b_{th}$:
$\mathbf x_{th}=x_{1m}-x_{2n}$
$\mathbf b_{p}=\mathbf{B_{TH}}^{-1} \mathbf x_{th}$
\begin{align}
\mathbf b_{th}=\begin{bmatrix}0&\cdots&b_{p}&\cdots&-b_{p}&\cdots&0\end{bmatrix}^T
\end{align}
The vector $\mathbf b_{th}$ has nonzero elements only at positions $m$ and $N/2+n$. Now we can finally find the solution $\mathbf x_n$ for the whole system:
\begin{align}
\mathbf x_n=\begin{bmatrix}x_1\\x_2\end{bmatrix}-\begin{bmatrix}B_1&0\\0&B_2\end{bmatrix}\mathbf b_{th}
\end{align}
I'm more used to the engineering notation with $Z$, $I$, $U$ and so on, so excuse the non-standard symbol usage.
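For concreteness, here is a minimal numerical sketch of the same tearing idea expressed in plain linear-algebra terms (a Woodbury/Schur-complement recombination rather than Kron's original notation). The block sizes, the single coupling entry, and all variable names are invented for the illustration; with $c$ interconnection points the correction system is only $c \times c$, which is where the savings and the parallelism come from.

```python
# Hypothetical sketch: "tear" A x = b into two independent blocks plus a
# low-rank coupling, solve the blocks separately, then recombine through a
# small c-by-c interconnection system via the Woodbury identity.
import numpy as np

rng = np.random.default_rng(0)
n, c = 10, 1                       # two blocks of size n, c coupling "ports" (illustrative)

D1 = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned subsystem blocks
D2 = rng.standard_normal((n, n)) + n * np.eye(n)
U = np.zeros((2 * n, c))
V = np.zeros((2 * n, c))
U[n - 1, 0], V[n, 0] = 1.0, 1.0    # couple the last node of block 1 to the first of block 2
A = np.block([[D1, np.zeros((n, n))], [np.zeros((n, n)), D2]]) + U @ V.T
b = rng.standard_normal(2 * n)

# Step 1: solve the two torn subsystems independently (parallelizable).
y = np.concatenate([np.linalg.solve(D1, b[:n]), np.linalg.solve(D2, b[n:])])
Z = np.vstack([np.linalg.solve(D1, U[:n]), np.linalg.solve(D2, U[n:])])

# Step 2: solve the small c-by-c interconnection system and correct the torn solution.
S = np.eye(c) + V.T @ Z            # c x c, cheap when c << n
x = y - Z @ np.linalg.solve(S, V.T @ y)

print(np.allclose(x, np.linalg.solve(A, b)))   # True: matches the direct solve
```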
answered Apr 7 '14 at 14:55
WalyKu
It is an important open mathematical question.
The Wikipedia page https://en.wikipedia.org/wiki/Matrix_multiplication_algorithm#Sub-cubic_algorithms shows how the fastest known asymptotic complexity of matrix multiplication (to which solving a linear system is asymptotically equivalent) has evolved over time.
This raises the question: how close can we get to $O(n^2)$, which is of course a lower bound, since we have to read the $2n^2$ inputs at least once?
Proving a tight lower bound, even non-constructively, would make you instantly famous. Wikipedia lists it as an "Unsolved problem in computer science".
The constant factor of the asymptotically better algorithms is so large, though, that it makes them impractical for most matrix sizes found in practice today, and I doubt this will change even if new algorithms are found. Therefore any practical real-world answer will come down to optimizing against a given machine model.
Personal guess about solving for single variables: I don't think you can reduce the complexity like that in general, since the entire system is coupled. How would you know that your solution for some of the variables is also consistent with the full global solution?
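To make the sub-cubic part concrete, here is a minimal Strassen-style sketch (an illustration under simplifying assumptions: $n$ a power of two, no attention to numerical stability or memory traffic, and a cutoff chosen arbitrarily). It forms 7 half-size products instead of 8 per recursion level, which is what yields the $O(n^{\log_2 7}) \approx O(n^{2.807})$ bound; the algorithms on the Wikipedia page push the exponent lower still.

```python
# Minimal, illustrative Strassen multiplication for n-by-n matrices, n a power of two.
import numpy as np

def strassen(A, B, cutoff=64):
    n = A.shape[0]
    if n <= cutoff:                      # fall back to ordinary multiplication for small blocks
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    # 7 recursive half-size products instead of the naive 8
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:k, :k] = M1 + M4 - M5 + M7
    C[:k, k:] = M3 + M5
    C[k:, :k] = M2 + M4
    C[k:, k:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
print(np.allclose(strassen(A, B), A @ B))   # True
```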
Here is the wiki page for the family of algorithms you are referring to: en.wikipedia.org/wiki/Coppersmith%E2%80%93Winograd_algorithm. Also worth looking at: en.wikipedia.org/wiki/Strassen_algorithm
– Novice C
Jul 4 '17 at 20:37
answered Dec 15 '12 at 16:41
Ciro Santilli 新疆改造中心 六四事件 法轮功
Unless your matrix is sparse or structured (e.g., Vandermonde, Hankel, or one of the other named matrix families that admit a fast solution method), there is not much hope of doing better than $O(n^3)$ effort. Even if one were to restrict oneself to solving for just one of the 50,000 variables, Cramer's rule would demand computing two determinants, and the effort of computing a determinant is at least as much as that of decomposing or inverting the matrix in the first place.
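As a small illustration of the structured escape hatch, here is a hedged sketch (with synthetic, well-conditioned data invented for the example) using SciPy's Levinson-recursion Toeplitz solver, which runs in $O(n^2)$ instead of the generic $O(n^3)$ and never forms the full matrix.

```python
# Illustrative only: a structured (Toeplitz) system solved in O(n^2) via
# Levinson recursion, checked against the generic dense solve.
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

n = 1000
c = 0.3 ** np.arange(n)          # first column of the Toeplitz matrix (synthetic, diagonally dominant)
r = 0.2 ** np.arange(n)          # first row (r[0] is ignored; the diagonal comes from c[0])
b = np.ones(n)

x_fast = solve_toeplitz((c, r), b)            # O(n^2), matrix never formed explicitly
x_ref = np.linalg.solve(toeplitz(c, r), b)    # generic O(n^3) solve for comparison
print(np.allclose(x_fast, x_ref))             # True
```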
answered Apr 4 '11 at 3:12
Juan Joder