Which is more computationally expensive for solving a linear system: LU decomposition or the inverse?


























Which is the more expensive way to solve a linear system: LU decomposition
$$A = LU$$

or finding the inverse via the adjugate
$$A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A)?$$



If I have to choose, I would say that this formula
$$A^{-1} = \frac{1}{\det(A)} \operatorname{adj}(A)$$



is much faster than computing an LU factorization. Why? Well, I wrote a linear algebra library in C, which I'm going to upload to GitHub very soon.
First I find the determinant using Gaussian elimination, a very smooth and easy method. If I would otherwise divide by zero, I just change that zero to a very small number so I don't get an error.



Then I find the matrix of minors, which is also an easy method to implement numerically.
Example:



$$M_{2,3} = \det \begin{bmatrix}
\phantom{-}1 & 4 & \Box \\
\Box & \Box & \Box \\
-1 & 9 & \Box \\
\end{bmatrix} = \det \begin{bmatrix}
\phantom{-}1 & 4 \\
-1 & 9 \\
\end{bmatrix} = 9 - (-4) = 13$$



Then I turn the matrix of minors into the cofactor matrix by multiplying each minor $M_{i,j}$ by $(-1)^{i+j}$. Easy! I divide $1$ by the determinant and multiply it with the transpose of the cofactor matrix (the adjugate). Then I have the inverse.



But with LU factorization, in code, I first need to find $L$ and $U$ from $A$, and then I need to solve the linear system
$$Ax = b$$



which can be rewritten as



$$LUx = b$$



and then solve the two triangular systems



$$Ly = b$$
$$Ux = y$$



That requires more for-loops in C code than finding the determinant with Gaussian elimination and building the matrix of minors.



Or am I wrong?































  • To fairly compare, I think you'd need to include the matrix multiplication $A^{-1}b$, which for large $A$ could be significant. Solving the forward and backward substitutions using the $LU$ factorization is generally faster than the matrix multiplication (especially for large matrices), and there are a number of other reasons to prefer $LU$ factorizations numerically.
    – nathan.j.mcdougall, Jan 17 at 22:35












  • @nathan.j.mcdougall Okay, thank you for the answer. If you have the answer, you can post it here and I will accept it.
    – Daniel Mårtensson, Jan 17 at 22:36










  • Can I compute the determinant with LU too? And even the inverse?
    – Daniel Mårtensson, Jan 17 at 22:37










  • Well, I wouldn't say it's a complete answer, which is why I posted it as a comment. There is potentially much more to discuss. $\DeclareMathOperator{\tr}{tr}$ The determinant is easy: $\det(A)=\det(L)\det(U)=\tr(L)\tr(U)$. Finding the inverse is a matter of solving the system $LUx=b$ for each of the unit vectors $e_i$, which is rather inexpensive.
    – nathan.j.mcdougall, Jan 17 at 22:39












  • @nathan.j.mcdougall $\det(L)\det(U) = \operatorname{tr}(L)\operatorname{tr}(U)$ sounds wrong... What if $L$ and $U$ are both the unit matrix?
    – 0x539, Jan 17 at 22:50



















linear-algebra inverse gaussian-elimination lu-decomposition






asked Jan 17 at 22:29









– Daniel Mårtensson

1 Answer





































Performing Gaussian elimination on a matrix of size $m \times m$ takes $\mathcal{O}(m^3)$ operations. To find the cofactor matrix of an $n \times n$ matrix you need to calculate $n^2$ determinants of $(n-1) \times (n-1)$ matrices. If you don't find any way to take advantage of previous work, that takes $n^2 \cdot \mathcal{O}((n-1)^3) = \mathcal{O}(n^5)$ operations.



Computing an LU decomposition of that $n \times n$ matrix takes $\frac{2}{3} n^3$ operations to leading order (so in particular $\mathcal{O}(n^3)$ operations). Therefore calculating the LU decomposition is clearly faster than the method you are currently using for calculating the inverse.

(My argument only shows this for large enough $n$, but I'm fairly sure a detailed analysis would show that the LU decomposition is faster for essentially any size.)



Note however that you can use Gaussian elimination to directly calculate the inverse of a matrix, without the costly computation of the adjugate (see here). If my calculation is correct, this requires $\frac{5}{6} n^3$ operations to leading order, so it is still a bit slower than LU decomposition.



In theory the Strassen algorithm, or even faster matrix multiplication algorithms, gives rise to matrix inversion algorithms that are even faster than $\mathcal{O}(n^3)$, but only for very large matrices.



Summary: LU decomposition is the fastest way to solve a reasonably sized system of linear equations and is superior to calculating inverses in almost all respects. In particular, performing the multiplication $A^{-1} b$ is as expensive as solving the two triangular systems $L y = b$ and $U x = y$, since these can be solved very efficiently using forward/backward substitution.
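To see why the triangular solves are so cheap, count the multiplications in forward substitution for $Ly = b$ with $L$ unit lower triangular:

$$y_i = b_i - \sum_{j=1}^{i-1} L_{ij}\, y_j, \qquad \text{cost} = \sum_{i=1}^{n} (i-1) = \frac{n(n-1)}{2} \approx \frac{n^2}{2}.$$

Backward substitution for $Ux = y$ costs the same order, so the pair of solves takes on the order of $n^2$ operations, the same order as the matrix-vector product $A^{-1}b$. Reusing the factorization is therefore just as cheap as applying a precomputed inverse.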






        edited Jan 17 at 23:19

























        answered Jan 17 at 22:58









– 0x539
