What is the intuition behind why a rank deficient matrix does not have an inverse?














Suppose that we have a $p \times p$ matrix $A$ whose rank is less than $p$. We know that such a matrix cannot have an inverse, and there are several different ways to prove that $A$ does not have an inverse.



However, I am struggling to obtain an intuition for why the inverse does not exist. I considered the following ideas to generate an intuition, but failed to do so.




  1. The matrix $A$ can be viewed as a composition of transformations such as scaling, rotation, shearing, etc. Thus, when we apply $A$ to a vector in $p$ dimensions, it always maps that vector into the subspace spanned by the columns of $A$, which is a proper subspace when $A$ is less than full rank. Lack of an inverse implies that we cannot reverse these transformations. Why not?


  2. The columns of $A$ span only a proper subspace of $\mathbb{R}^p$ if $A$ is less than full rank. Thus, the transformation $y \mapsto Ay$ takes a vector in $\mathbb{R}^p$ to a vector that always belongs to that subspace. This viewpoint did not help in obtaining an intuition either.



Is there a way to obtain an intuition as to why a rank deficient matrix does not have an inverse?
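To make point 2 concrete, here is a minimal NumPy sketch; the specific $3 \times 3$ matrix is an arbitrary rank-deficient example of my own, not a special choice:

```python
import numpy as np

# Rank 2, not 3: the third column is the sum of the first two.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])
print(np.linalg.matrix_rank(A))  # 2

# Every output A @ y has third coordinate 0, so the image of A is the
# plane z = 0, a proper subspace of R^3.
y = np.array([3.0, -1.0, 2.0])
print(A @ y)  # [5. 1. 0.]
```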




































linear-algebra inverse intuition matrix-rank






asked Jan 17 at 14:50 by jaggu








  • I think point 2 will give you the intuition if you consider the action on a basis. (The image of a basis is a basis of the image.) – saulspatz, Jan 17 at 14:57










  • To elaborate on the above comment, if $A$ does not have full rank, then a nontrivial subspace gets sent to the zero vector. What happens if we try to invert that? In other words, how can we find the inverse image of the zero vector if lots of vectors are mapped to it? (The answer: we can't!) – OldGodzilla, Jan 17 at 15:00










  • It has no inverse, but it does have a pseudo-inverse, a very useful notion. – Jean Marie, Jan 25 at 15:44
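Following up on that last comment: NumPy exposes the Moore–Penrose pseudo-inverse as `np.linalg.pinv`. A minimal sketch, with a rank-deficient matrix of my own choosing:

```python
import numpy as np

# Rank deficient: the second row is twice the first.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

A_pinv = np.linalg.pinv(A)  # Moore-Penrose pseudo-inverse, computed via SVD

# It is not a true inverse (A_pinv @ A is not the identity), but it gives
# the minimum-norm least-squares solution of A x = b.
b = np.array([1.0, 2.0])
x = A_pinv @ b
print(A @ x)  # [1. 2.]: b is recovered here because it lies in the column space
```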
















4 Answers



















"Invertible" when talking about linear transformations means "reversible". In other words, a linear transformation (and the corresponding matrix in a given basis) is invertible iff it is possible, given an output, to figure out exactly what the input was.



A rank deficient linear transformation collapses at least one dimension, meaning each output could be the result of any of a number of different inputs. Specifically, it has a non-trivial kernel, so multiple different inputs all produce the output $\vec 0$.
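A minimal NumPy sketch of that collapse, using an arbitrary rank-1 matrix of my own (not part of the answer):

```python
import numpy as np

# Rank 1: this matrix collapses the whole plane onto the line spanned by (1, 2).
A = np.array([[1.0, 1.0],
              [2.0, 2.0]])

x1 = np.array([1.0, 0.0])
x2 = np.array([0.0, 1.0])
print(A @ x1, A @ x2)  # both are [1. 2.]: two different inputs, one output

# A nonzero kernel vector: A maps it to 0, exactly as it maps 0 to 0.
k = np.array([1.0, -1.0])
print(A @ k)  # [0. 0.]

# Given only the output [1. 2.], there is no way to recover the input,
# and NumPy refuses to invert the matrix:
try:
    np.linalg.inv(A)
except np.linalg.LinAlgError as err:
    print(err)  # Singular matrix
```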






answered Jan 17 at 15:07 by Arthur






















The row reduced echelon form of your matrix will have one or more rows of zeros at the bottom, and such a matrix is not invertible.



An invertible matrix, when reduced to its row reduced echelon form, becomes the identity matrix.
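This is easy to check mechanically; a small SymPy sketch with matrices of my own choosing:

```python
from sympy import Matrix, eye

# Rank deficient: the third row is the sum of the first two.
A = Matrix([[1, 0, 1],
            [0, 1, 1],
            [1, 1, 2]])
R, pivots = A.rref()
print(R)       # the last row is all zeros
print(pivots)  # (0, 1): only two pivot columns, so rank 2

# An invertible matrix reduces all the way to the identity.
B = Matrix([[2, 1],
            [1, 1]])
print(B.rref()[0] == eye(2))  # True
```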






answered Jan 17 at 15:00 by Mohammad Riazi-Kermani






















This stems from the fact that a mapping $f:V\to V$ cannot possess any left inverse if it is not injective, and it cannot possess any right inverse if it is not surjective:


  • if $f$ is not injective, i.e. if $f(u)=f(v)$ for some $u\ne v$, then $f$ cannot have any left inverse, otherwise we would have $u=(f^{-1}\circ f)(u)=f^{-1}(f(u))=f^{-1}(f(v))=(f^{-1}\circ f)(v)=v$, which is a contradiction;

  • if $f$ is not surjective, i.e. if there is some member $w$ of $V$ that lies outside $f(V)$, then $f$ cannot possess any right inverse, otherwise we would have $w=f(f^{-1}(w))\in f(V)$, which is a contradiction.


Now, if a square matrix $A$ is rank deficient, its columns are linearly dependent. Therefore $Au=0$ for some nonzero vector $u$. In other words, $A$ maps both $u$ and $0$ to $0$. Hence $A$ does not have any left inverse, because the mapping $x\mapsto Ax$ is not injective. (If you invert back, what should $A^{-1}0$ be? $u$ or $0$?)



Also, as $A$ is rank deficient, its column space is a proper subspace of the ambient space. Hence $A$ does not have any right inverse, because the mapping $x\mapsto Ax$ is not surjective. (If $w$ lies outside the column space of $A$ and it has an inverse image, then $w$ itself is the image of its own inverse image, hence $w$ also lies inside the column space of $A$. How paradoxical!)
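Both failures can be exhibited numerically; a short sketch (the rank-1 matrix and test vectors are my own illustrative choices):

```python
import numpy as np

# Rank 1: the second column is zero, so the columns are linearly dependent.
A = np.array([[1.0, 0.0],
              [2.0, 0.0]])

# Not injective: a nonzero u with A u = 0, so A sends both u and 0 to 0.
u = np.array([0.0, 1.0])
print(A @ u)  # [0. 0.]

# Not surjective: w is orthogonal to the column space span{(1, 2)}, so no x
# satisfies A x = w; least squares can only reach the projection of w.
w = np.array([2.0, -1.0])
x, _, rank, _ = np.linalg.lstsq(A, w, rcond=None)
print(rank)   # 1
print(A @ x)  # [0. 0.], nowhere near w: w is unreachable
```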






answered Jan 17 at 15:58 by user1551






















The way I learned it was like this: you should see the Det function as sending a matrix (of dimension $n$) to the oriented $n$-dimensional volume its column vectors span. If this is zero, the image will be a hyper-surface of dimension at most $n-1$, thus losing surjectivity into its image; and we know injectivity and surjectivity are equivalent for finite-dimensional square matrices. So it can't be invertible.
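A quick numeric check of the volume picture, with an arbitrary singular matrix of mine:

```python
import numpy as np

# The columns (1, 2) and (2, 4) are parallel, so the parallelogram they
# span is squashed flat onto a line.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

print(np.linalg.det(A))          # 0.0: the image parallelogram has zero area
print(np.linalg.matrix_rank(A))  # 1: the image is a line, not the whole plane
```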






answered Jan 17 at 15:13 by Aylon Pinto













  • All functions are surjective onto their image. Onto their codomains, on the other hand... – Arthur, Jan 17 at 15:17












  • Yes, of course; I meant onto its $n$-dimensional vector space. – Aylon Pinto, Jan 17 at 15:21










