necessary and sufficient condition for trivial kernel of a matrix over a commutative ring














In answering "Do these matrix rings have non-zero elements that are neither units nor zero divisors?" I was surprised how hard it was to find anything on the Web about the generalization of the following fact to commutative rings:




A square matrix over a field has trivial kernel if and only if its determinant is non-zero.




As Bill demonstrated in the above question, a related fact about fields generalizes directly to commutative rings:




A square matrix over a commutative ring is invertible if and only if its
determinant is invertible.




However, the kernel being trivial and the matrix being invertible are not equivalent for general rings, so the question arises as to what the proper generalization of the first fact is. Since it took me quite a lot of searching to find the answer to this rather basic question, and it's explicitly encouraged to write a question and answer it yourself to document something that might be useful to others, I thought I'd write this up here in an accessible form.



So my questions are: What is the relationship between the determinant of a square matrix over a commutative ring and the triviality of its kernel? Can the simple relationship that holds for fields be generalized? And (generalizing with a view to the answer) what is a necessary and sufficient condition for a (not necessarily square) matrix over a commutative ring to have trivial kernel?

























      linear-algebra abstract-algebra matrices commutative-algebra






      edited Aug 11 '18 at 0:04 by Pierre-Yves Gaillard










      asked Oct 11 '11 at 16:07 by joriki






















          2 Answers



















          I found the answer in this book (in Section $6.4.14$, “Determinants, Ranks and Linear Equations”). I'd tried using a similar Laplace expansion myself but was missing the idea of using the largest dimension at which the minors are not all annihilated by the same non-zero element. I'll try to summarize the argument in somewhat less formal terms, omitting the tangential material included in the book.



          Let $A$ be an $m\times n$ matrix over a commutative ring $R$. We want to find a condition for the system of equations $Ax=0$ with $x\in R^n$ to have a non-trivial solution. If $R$ is a field, various definitions of the rank of $A$ coincide, including the column rank (the dimension of the column space), the row rank (the dimension of the row space) and the determinantal rank (the largest dimension of a non-vanishing minor). This is not the case for a general commutative ring. It turns out that for our present purposes a useful generalization of rank is the largest integer $k$ such that there is no non-zero element of $R$ that annihilates all minors of dimension $k$, with $k=0$ if there is no such integer.
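As a concrete illustration (a minimal sketch; the ring $R=\mathbb{Z}/6\mathbb{Z}$ and the example matrix are my own choices, not from the answer), this generalized rank can be computed by brute force for matrices over $\mathbb{Z}/m\mathbb{Z}$:

```python
# Hedged sketch: mccoy_rank computes the largest k such that NO non-zero
# r in Z/mZ annihilates every k-by-k minor of A (k = 0 if some non-zero
# r already kills all entries).  Ring and matrix are illustrative choices.
from itertools import combinations

def det_mod(B, m):
    """Determinant of a square matrix over Z/mZ, by Laplace expansion."""
    if len(B) == 1:
        return B[0][0] % m
    return sum((-1) ** j * B[0][j]
               * det_mod([row[:j] + row[j + 1:] for row in B[1:]], m)
               for j in range(len(B))) % m

def minors(A, k, m):
    """All k-by-k minors of A over Z/mZ."""
    return [det_mod([[A[i][j] for j in js] for i in iset], m)
            for iset in combinations(range(len(A)), k)
            for js in combinations(range(len(A[0])), k)]

def mccoy_rank(A, m):
    k = 0
    for size in range(1, min(len(A), len(A[0])) + 1):
        ms = minors(A, size, m)
        # stop as soon as some non-zero r annihilates all minors of this size
        if any(all(r * d % m == 0 for d in ms) for r in range(1, m)):
            break
        k = size
    return k

A = [[2, 1], [4, 2]]          # det = 0 in Z/6Z
print(mccoy_rank(A, 6))       # 1: the 2x2 minor is killed by r = 1, no 1x1 minor is
```

Since here $k=1\lt 2=n$, the criterion discussed in this answer predicts a non-trivial solution of $Ax=0$; indeed $x=(5,2)$ gives $Ax=(12,24)\equiv(0,0)\pmod 6$.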




          We want to show that $Ax=0$ has a non-trivial solution if and only if $k\lt n$.




          If $k=0$, there is a non-zero element $r\in R$ which annihilates all matrix elements (the minors of dimension $1$), so there is a non-trivial solution



          $$A\pmatrix{r\\ \vdots\\ r}=0\;.$$



          Now assume $0\lt k\lt n$. If $m\lt n$, we can add rows of zeros to $A$ without changing $k$ or the solution set, so we can assume $k\lt n\le m$. There is some non-zero element $r\in R$ that annihilates all minors of dimension $k+1$, and there is a minor of dimension $k$ that isn't annihilated by $r$. Without loss of generality, assume that this is the minor of the first $k$ rows and columns. Now consider the matrix formed of the first $k+1$ rows and columns of $A$, and form a solution $x$ from the $(k+1)$-th column of its adjugate by multiplying it by $r$ and padding it with zeros. By construction, the first $k$ entries of $Ax$ are $r$ times determinants of matrices with two equal rows, and thus vanish; the remaining entries are each $r$ times a minor of dimension $k+1$, and thus also vanish. But the $(k+1)$-th entry of this solution is non-zero, being $r$ times the minor of the first $k$ rows and columns, which isn't annihilated by $r$. Thus we have constructed a non-trivial solution.
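The adjugate construction above can be traced on a small example (a sketch; the matrix over $\mathbb{Z}/6\mathbb{Z}$ is a hypothetical choice with $k=1\lt n=2$, where $r=1$ annihilates the only $2\times2$ minor, the determinant $0$, but not the leading entry $2$):

```python
# Sketch of the construction in the answer, for the illustrative matrix
# A = [[2, 1], [4, 2]] over Z/6Z: take r times the last column of the
# adjugate of the leading (k+1)x(k+1) block as the solution x.
M = 6

def adjugate_mod(B, m):
    """Adjugate (classical adjoint) of a square matrix over Z/mZ."""
    n = len(B)
    def det(C):
        if len(C) == 1:
            return C[0][0] % m
        return sum((-1) ** j * C[0][j]
                   * det([row[:j] + row[j + 1:] for row in C[1:]])
                   for j in range(len(C))) % m
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            minor = [[B[p][q] for q in range(n) if q != j]
                     for p in range(n) if p != i]
            adj[j][i] = ((-1) ** (i + j) * det(minor)) % m  # cofactor, transposed
    return adj

A = [[2, 1], [4, 2]]
r = 1
adj = adjugate_mod(A, M)
x = [r * adj[i][1] % M for i in range(2)]   # last column of adjugate, times r
print(x)                                     # [5, 2]: non-trivial
print([sum(A[i][j] * x[j] for j in range(2)) % M for i in range(2)])  # [0, 0]
```

Here $k+1=n$, so no zero-padding is needed; in general the $(k+1)$-entry vector is padded with $n-(k+1)$ zeros.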



          In summary, if $k\lt n$, there is a non-trivial solution to $Ax=0$.



          Now assume conversely that there is such a solution $x$. If $n\gt m$, there are no minors of dimension $n$, so $k\lt n$. Thus we can assume $n\le m$. The minors of dimension $n$ are the determinants of matrices $B$ formed by choosing any $n$ rows of $A$. Since each row of $A$ times $x$ is $0$, we have $Bx=0$, and then multiplying by the adjugate of $B$ yields $(\det B)x=0$. Since there is at least one non-zero entry in the non-trivial solution $x$, there is at least one non-zero element of $R$ that annihilates all minors of dimension $n$, and thus $k\lt n$.



          Specializing to the case $m=n$ of square matrices, we can conclude:




          A system of linear equations $Ax=0$ with a square $n\times n$ matrix
          $A$ over a commutative ring $R$ has a non-trivial solution if and only
          if its determinant (its only minor of dimension $n$) is annihilated by
          some non-zero element of $R$, that is, if and only if its determinant is a zero divisor or zero.
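Assuming the small ring $\mathbb{Z}/6\mathbb{Z}$ (my own choice for illustration, not part of the original argument), this boxed criterion can be checked exhaustively for all $2\times 2$ matrices:

```python
# Brute-force sketch over Z/6Z: Ax = 0 has a non-trivial solution exactly
# when det(A) is annihilated by some non-zero ring element (i.e. det(A)
# is zero or a zero divisor).
from itertools import product

M = 6

def det2(A):
    return (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % M

def has_nontrivial_kernel(A):
    for x in product(range(M), repeat=2):
        if x != (0, 0) and all(
                sum(A[i][j] * x[j] for j in range(2)) % M == 0
                for i in range(2)):
            return True
    return False

def annihilated_by_nonzero(d):
    # covers d == 0 as well, since 0 is killed by any non-zero r
    return any(d * r % M == 0 for r in range(1, M))

for entries in product(range(M), repeat=4):
    A = [list(entries[:2]), list(entries[2:])]
    assert has_nontrivial_kernel(A) == annihilated_by_nonzero(det2(A))
print("criterion verified for all 2x2 matrices over Z/6Z")
```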
















          •   The book joriki links to is A Second Semester of Linear Algebra by S. E. Payne. Here are two other links to the same book, via the author's site: PDF file. HTML page: Class Notes.
              – Pierre-Yves Gaillard, Oct 11 '11 at 17:45






















          See Section III.8.7, entitled Application to Linear Equations, of Algebra, by Nicolas Bourbaki.



          EDIT 1. Let $R$ be a commutative ring, let $m$ and $n$ be positive integers, let $M$ be an $R$-module, and let $A:R^n\to M$ be $R$-linear.



          Identify the $n$-th exterior power $\Lambda^n(R^n)$ of $R^n$ with $R$ in the obvious way, so that $\Lambda^n(A)$ is a map from $R$ to $\Lambda^n(M)$.



          Put $v_i:=Ae_i$, where $e_i$ is the $i$-th vector of the canonical basis of $R^n$. In particular we have
          $$
          Ax=\sum_{i=1}^n x_i v_i,\quad\Lambda^n(A)\,r=r\,v_1\wedge\cdots\wedge v_n
          $$

          (where $x_i$ is the $i$-th coordinate of $x$, and $r$ denotes any element of $\Lambda^n\left(R^n\right)\cong R$).




          If $\Lambda^n(A)$ is injective, so is $A$.




          In other words:




          If the $v_i$ are linearly dependent, then $r\,v_1\wedge\cdots\wedge v_n=0$ for some nonzero $r$ in $R$.




          Indeed, for $x$ in $\ker A$ we have
          $$
          \Lambda^n(A)\,x_1=x_1 v_1\wedge v_2\wedge\cdots\wedge v_n=
          -\sum_{i=2}^n x_i v_i\wedge v_2\wedge\cdots\wedge v_n=0,
          $$

          and, similarly, $\Lambda^n(A)\,x_i=0$ for all $i$.
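A concrete instance of this wedge computation (my own illustrative choice of $R=\mathbb{Z}/6\mathbb{Z}$, $n=2$, $m=3$, not from the answer): identifying $\Lambda^2(R^3)$ with $R^3$ via the $2\times2$ minors of the column matrix $[v_1\ v_2]$, one can check that every coordinate of each $x$ in $\ker A$ annihilates $v_1\wedge v_2$:

```python
# Sketch for R = Z/6Z with columns v1 = (2,0,0), v2 = (0,1,0): the
# coordinates of v1 ^ v2 in Lambda^2(R^3) are the 2x2 minors of [v1 v2];
# for x in ker A, each coordinate x_i must kill all of them.
# (wedge_coords below is specialized to n = 2 for brevity.)
from itertools import combinations, product

M = 6
cols = [(2, 0, 0), (0, 1, 0)]          # v1, v2: illustrative choices

def wedge_coords(vs, m):
    """Coordinates of v1 ^ v2: the 2x2 minors of the column matrix."""
    out = []
    for (i, j) in combinations(range(len(vs[0])), 2):
        out.append((vs[0][i] * vs[1][j] - vs[0][j] * vs[1][i]) % m)
    return out

# find ker A over Z/6Z by brute force
kernel = [x for x in product(range(M), repeat=2)
          if all(sum(c[i] * x[k] for k, c in enumerate(cols)) % M == 0
                 for i in range(3))]

w = wedge_coords(cols, M)
print(w)                               # [2, 0, 0]: v1 ^ v2 is non-zero
print(kernel)                          # [(0, 0), (3, 0)]: ker A is non-trivial
for x in kernel:
    for xi in x:
        assert all(xi * c % M == 0 for c in w)   # each x_i kills v1 ^ v2
```

So here $A$ is not injective, and correspondingly $\Lambda^2(A)$ is not injective either: the nonzero scalar $3$ kills $v_1\wedge v_2$.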



          [Edit: Old version (before Georges's comment): Assume now that $M$ embeds into $R^m$.]



          Assume now that there is an $R$-linear injection $B:M\to R^m$ such that
          $$
          \Lambda^n(B):\Lambda^n(M)\to\Lambda^n(R^m)
          $$

          is injective. This is always the case (for a suitable $m$) if $M$ is projective and finitely generated.




          If $A$ is injective, so is $\Lambda^n(A)$.




          In other words:




          If $r\,v_1\wedge\cdots\wedge v_n=0$ for some nonzero $r$ in $R$, then the $v_i$ are linearly dependent.




          The proof is given in joriki's nice answer.



          This is also proved as Proposition 12 in Bourbaki's Algebra III.7.9 p. 519. Unfortunately, I don't understand Bourbaki's argument. I'd be most grateful to whoever would be kind and patient enough to explain it to me.



          EDIT 2. According to the indications given by Tsit-Yuen Lam on page 150 of his book Exercises in modules and rings, the theorem is due to N. H. McCoy, and appeared first, as Theorem 1 page 288, in




          • N. H. McCoy, Remarks on Divisors of Zero, The American Mathematical Monthly Vol. 49, No. 5 (May, 1942), pp. 286-295, JSTOR.


          Lam also says that




          • N. H. McCoy, Rings and ideals, The Carus Mathematical Monographs, no. 8, The Mathematical Association of America, 1948,


          is an "excellent exposition" of the subject. See Theorem 51 page 159.



          McCoy's Theorem is also stated and proved in the following texts:




          • Ex. 5.23.A(3) on page 149 of Lam's Exercises in modules and rings.


          • Theorem 2.2 page 3 in Anton Gerashenko's notes from Lam's Course: Math 274, Commutative Rings, Fall 2006: PDF file.


          • Theorem 1.6 in Chapter 13, entitled "Various topics", of The CRing Project. --- PDF file for Chapter 13. --- PDF file for the whole book.


          • Blocki, Zbigniew, An elementary proof of the McCoy theorem, J. Univ. Iagel. Acta Math.; N 30; 1993; 215-218.


          • Theorem 6.4.16. page 101, A Second Semester of Linear Algebra, Math 5718, by Stan Payne. PDF file.




















          •   Thanks for this link! Note, however, that the result is proved only for square matrices there. This case was the original motivation for the question, but the question and my answer apply to the general case.
              – joriki, Nov 10 '11 at 9:13












          •   Dear @Pierre-Yves, I'm not quite sure that your claim (displayed in grey) "If $A$ is injective, so is $\Lambda^n A$" holds if you assume only that $M$ embeds into $R^m$. There is the subtle point that exterior products don't mean the same in both spaces; in other words, $\Lambda^n M\to\Lambda^n R^m$ needn't be injective. Everything is fine if $M$ is projective, though, and this is Bourbaki's assumption. But this is nitpicking: +1, needless to say.
              – Georges Elencwajg, Nov 12 '11 at 14:17












          •   Dear @Georges: No, this is definitely not nitpicking! Thanks a lot! I hope it's correct now. It was my secret hope that you would read this answer. Did you see the last paragraph? I'm sure you understand Bourbaki's argument...
              – Pierre-Yves Gaillard, Nov 12 '11 at 15:11










          •   Dear @Pierre-Yves: No, I don't understand Bourbaki's argument either. Specifically, when he writes "it follows from no. 8, Corollary 3 to Theorem 1 that $\mu x_1$ is a linear combination....", I don't see how it follows. (By the way, this is Proposition 12 in my edition.)
              – Georges Elencwajg, Nov 12 '11 at 18:19












          •   Dear @Georges: Once more you're right: it's Proposition 12 (in the link to the English edition I give, and in my French edition - it was just a typo). Thank you very much for your time and effort. I'm having the same problem as you with Bourbaki's proof. It's weird. I find joriki's formulation of the argument very nice.
              – Pierre-Yves Gaillard, Nov 12 '11 at 18:42











          Your Answer





          StackExchange.ifUsing("editor", function () {
          return StackExchange.using("mathjaxEditing", function () {
          StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
          StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
          });
          });
          }, "mathjax-editing");

          StackExchange.ready(function() {
          var channelOptions = {
          tags: "".split(" "),
          id: "69"
          };
          initTagRenderer("".split(" "), "".split(" "), channelOptions);

          StackExchange.using("externalEditor", function() {
          // Have to fire editor after snippets, if snippets enabled
          if (StackExchange.settings.snippets.snippetsEnabled) {
          StackExchange.using("snippets", function() {
          createEditor();
          });
          }
          else {
          createEditor();
          }
          });

          function createEditor() {
          StackExchange.prepareEditor({
          heartbeatType: 'answer',
          autoActivateHeartbeat: false,
          convertImagesToLinks: true,
          noModals: true,
          showLowRepImageUploadWarning: true,
          reputationToPostImages: 10,
          bindNavPrevention: true,
          postfix: "",
          imageUploader: {
          brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
          contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
          allowUrls: true
          },
          noCode: true, onDemand: true,
          discardSelector: ".discard-answer"
          ,immediatelyShowMarkdownHelp:true
          });


          }
          });














          draft saved

          draft discarded


















          StackExchange.ready(
          function () {
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f71740%2fnecessary-and-sufficient-condition-for-trivial-kernel-of-a-matrix-over-a-commuta%23new-answer', 'question_page');
          }
          );

          Post as a guest















          Required, but never shown

























          2 Answers
          2






          active

          oldest

          votes








          2 Answers
          2






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          24












          $begingroup$

          I found the answer in this book (in Section $6.4.14$, “Determinants, Ranks and Linear Equations”). I'd tried using a similar Laplace expansion myself but was missing the idea of using the largest dimension at which the minors are not all annihilated by the same non-zero element. I'll try to summarize the argument in somewhat less formal terms, omitting the tangential material included in the book.



          Let $A$ be an $mtimes n$ matrix over a commutative ring $R$. We want to find a condition for the system of equations $Ax=0$ with $xin R^n$ to have a non-trivial solution. If $R$ is a field, various definitions of the rank of $A$ coincide, including the column rank (the dimension of the column space), the row rank (the dimension of the row space) and the determinantal rank (the dimension of the lowest non-zero minor). This is not the case for a general commutative ring. It turns out that for our present purposes a useful generalization of rank is the largest integer $k$ such that there is no non-zero element of $R$ that annihilates all minors of dimension $k$, with $k=0$ if there is no such integer.




          We want to show that $Ax=0$ has a non-trivial solution if and only if $klt n$.




          If $k=0$, there is a non-zero element $rin R$ which annihilates all matrix elements (the minors of dimension $1$), so there is a non-trivial solution



          $$Apmatrix{r\vdots\r}=0;.$$



          Now assume $0lt klt n$. If $mlt n$, we can add rows of zeros to $A$ without changing $k$ or the solution set, so we can assume $klt nle m$. There is some non-zero element $rin R$ that annihilates all minors of dimension $k+1$, and there is a minor of dimension $k$ that isn't annihilated by $r$. Without loss of generality, assume that this is the minor of the first $k$ rows and columns. Now consider the matrix formed of the first $k+1$ rows and columns of $A$, and form a solution $x$ from the $(k+1)$-th column of its adjugate by multiplying it by $r$ and padding it with zeros. By construction, the first $k$ entries of $Ax$ are determinants of a matrix with two equal rows, and thus vanish; the remaining entries are each $r$ times a minor of dimension $k+1$, and thus also vanish. But the $(k+1)$-th entry of this solution is non-zero, being $r$ times the minor of the first $k$ rows and columns, which isn't annihilated by $r$. Thus we have constructed a non-trivial solution.



          In summary, if $klt n$, there is a non-trivial solution to $Ax=0$.



          Now assume conversely that there is such a solution $x$. If $ngt m$, there are no minors of dimension $n$, so $klt n$. Thus we can assume $nle m$. The minors of dimension $n$ are the determinants of matrices $B$ formed by choosing any $n$ rows of $A$. Since each row of $A$ times $x$ is $0$, we have $Bx=0$, and then multiplying by the adjugate of $B$ yields $det B x=0$. Since there is at least one non-zero entry in the non-trivial solution $x$, there is at least one non-zero element of $R$ that annihilates all minors of size $n$, and thus $klt n$.



          Specializing to the case $m=n$ of square matrices, we can conclude:




          A system of linear equations $Ax=0$ with a square $ntimes n$ matrix
          $A$ over a commutative ring $R$ has a non-trivial solution if and only
          if its determinant (its only minor of dimension $n$) is annihilated by
          some non-zero element of $R$, that is, if its determinant is a zero divisor or zero.







          share|cite|improve this answer











          $endgroup$









          • 3




            $begingroup$
            The book joriki links to is A Second Semester of Linear Algebra by S. E. Payne. Here are two other links to the same book, via the author's site: PDF file. HTML page: Class Notes.
            $endgroup$
            – Pierre-Yves Gaillard
            Oct 11 '11 at 17:45


















          24












          $begingroup$

          I found the answer in this book (in Section $6.4.14$, “Determinants, Ranks and Linear Equations”). I'd tried using a similar Laplace expansion myself but was missing the idea of using the largest dimension at which the minors are not all annihilated by the same non-zero element. I'll try to summarize the argument in somewhat less formal terms, omitting the tangential material included in the book.



          Let $A$ be an $mtimes n$ matrix over a commutative ring $R$. We want to find a condition for the system of equations $Ax=0$ with $xin R^n$ to have a non-trivial solution. If $R$ is a field, various definitions of the rank of $A$ coincide, including the column rank (the dimension of the column space), the row rank (the dimension of the row space) and the determinantal rank (the dimension of the lowest non-zero minor). This is not the case for a general commutative ring. It turns out that for our present purposes a useful generalization of rank is the largest integer $k$ such that there is no non-zero element of $R$ that annihilates all minors of dimension $k$, with $k=0$ if there is no such integer.




          We want to show that $Ax=0$ has a non-trivial solution if and only if $klt n$.




          If $k=0$, there is a non-zero element $rin R$ which annihilates all matrix elements (the minors of dimension $1$), so there is a non-trivial solution



          $$Apmatrix{r\vdots\r}=0;.$$



          Now assume $0lt klt n$. If $mlt n$, we can add rows of zeros to $A$ without changing $k$ or the solution set, so we can assume $klt nle m$. There is some non-zero element $rin R$ that annihilates all minors of dimension $k+1$, and there is a minor of dimension $k$ that isn't annihilated by $r$. Without loss of generality, assume that this is the minor of the first $k$ rows and columns. Now consider the matrix formed of the first $k+1$ rows and columns of $A$, and form a solution $x$ from the $(k+1)$-th column of its adjugate by multiplying it by $r$ and padding it with zeros. By construction, the first $k$ entries of $Ax$ are determinants of a matrix with two equal rows, and thus vanish; the remaining entries are each $r$ times a minor of dimension $k+1$, and thus also vanish. But the $(k+1)$-th entry of this solution is non-zero, being $r$ times the minor of the first $k$ rows and columns, which isn't annihilated by $r$. Thus we have constructed a non-trivial solution.



          In summary, if $klt n$, there is a non-trivial solution to $Ax=0$.



          Now assume conversely that there is such a solution $x$. If $ngt m$, there are no minors of dimension $n$, so $klt n$. Thus we can assume $nle m$. The minors of dimension $n$ are the determinants of matrices $B$ formed by choosing any $n$ rows of $A$. Since each row of $A$ times $x$ is $0$, we have $Bx=0$, and then multiplying by the adjugate of $B$ yields $det B x=0$. Since there is at least one non-zero entry in the non-trivial solution $x$, there is at least one non-zero element of $R$ that annihilates all minors of size $n$, and thus $klt n$.



          Specializing to the case $m=n$ of square matrices, we can conclude:




          A system of linear equations $Ax=0$ with a square $ntimes n$ matrix
          $A$ over a commutative ring $R$ has a non-trivial solution if and only
          if its determinant (its only minor of dimension $n$) is annihilated by
          some non-zero element of $R$, that is, if its determinant is a zero divisor or zero.







          share|cite|improve this answer











          $endgroup$









          • 3




            $begingroup$
            The book joriki links to is A Second Semester of Linear Algebra by S. E. Payne. Here are two other links to the same book, via the author's site: PDF file. HTML page: Class Notes.
            $endgroup$
            – Pierre-Yves Gaillard
            Oct 11 '11 at 17:45
















          24












          24








          24





          $begingroup$

          I found the answer in this book (in Section $6.4.14$, “Determinants, Ranks and Linear Equations”). I'd tried using a similar Laplace expansion myself but was missing the idea of using the largest dimension at which the minors are not all annihilated by the same non-zero element. I'll try to summarize the argument in somewhat less formal terms, omitting the tangential material included in the book.



          Let $A$ be an $mtimes n$ matrix over a commutative ring $R$. We want to find a condition for the system of equations $Ax=0$ with $xin R^n$ to have a non-trivial solution. If $R$ is a field, various definitions of the rank of $A$ coincide, including the column rank (the dimension of the column space), the row rank (the dimension of the row space) and the determinantal rank (the dimension of the lowest non-zero minor). This is not the case for a general commutative ring. It turns out that for our present purposes a useful generalization of rank is the largest integer $k$ such that there is no non-zero element of $R$ that annihilates all minors of dimension $k$, with $k=0$ if there is no such integer.




          We want to show that $Ax=0$ has a non-trivial solution if and only if $klt n$.




          If $k=0$, there is a non-zero element $rin R$ which annihilates all matrix elements (the minors of dimension $1$), so there is a non-trivial solution



          $$Apmatrix{r\vdots\r}=0;.$$



          Now assume $0lt klt n$. If $mlt n$, we can add rows of zeros to $A$ without changing $k$ or the solution set, so we can assume $klt nle m$. There is some non-zero element $rin R$ that annihilates all minors of dimension $k+1$, and there is a minor of dimension $k$ that isn't annihilated by $r$. Without loss of generality, assume that this is the minor of the first $k$ rows and columns. Now consider the matrix formed of the first $k+1$ rows and columns of $A$, and form a solution $x$ from the $(k+1)$-th column of its adjugate by multiplying it by $r$ and padding it with zeros. By construction, the first $k$ entries of $Ax$ are determinants of a matrix with two equal rows, and thus vanish; the remaining entries are each $r$ times a minor of dimension $k+1$, and thus also vanish. But the $(k+1)$-th entry of this solution is non-zero, being $r$ times the minor of the first $k$ rows and columns, which isn't annihilated by $r$. Thus we have constructed a non-trivial solution.



          In summary, if $klt n$, there is a non-trivial solution to $Ax=0$.



          Now assume conversely that there is such a solution $x$. If $ngt m$, there are no minors of dimension $n$, so $klt n$. Thus we can assume $nle m$. The minors of dimension $n$ are the determinants of matrices $B$ formed by choosing any $n$ rows of $A$. Since each row of $A$ times $x$ is $0$, we have $Bx=0$, and then multiplying by the adjugate of $B$ yields $det B x=0$. Since there is at least one non-zero entry in the non-trivial solution $x$, there is at least one non-zero element of $R$ that annihilates all minors of size $n$, and thus $klt n$.



          Specializing to the case $m=n$ of square matrices, we can conclude:




          A system of linear equations $Ax=0$ with a square $ntimes n$ matrix
          $A$ over a commutative ring $R$ has a non-trivial solution if and only
          if its determinant (its only minor of dimension $n$) is annihilated by
          some non-zero element of $R$, that is, if its determinant is a zero divisor or zero.







          share|cite|improve this answer











          $endgroup$



          I found the answer in this book (in Section $6.4.14$, “Determinants, Ranks and Linear Equations”). I'd tried using a similar Laplace expansion myself but was missing the idea of using the largest dimension at which the minors are not all annihilated by the same non-zero element. I'll try to summarize the argument in somewhat less formal terms, omitting the tangential material included in the book.



          Let $A$ be an $mtimes n$ matrix over a commutative ring $R$. We want to find a condition for the system of equations $Ax=0$ with $xin R^n$ to have a non-trivial solution. If $R$ is a field, various definitions of the rank of $A$ coincide, including the column rank (the dimension of the column space), the row rank (the dimension of the row space) and the determinantal rank (the dimension of the lowest non-zero minor). This is not the case for a general commutative ring. It turns out that for our present purposes a useful generalization of rank is the largest integer $k$ such that there is no non-zero element of $R$ that annihilates all minors of dimension $k$, with $k=0$ if there is no such integer.




          We want to show that $Ax=0$ has a non-trivial solution if and only if $klt n$.




          If $k=0$, there is a non-zero element $r\in R$ which annihilates all matrix elements (the minors of dimension $1$), so there is a non-trivial solution



          $$A\pmatrix{r\\\vdots\\r}=0\;.$$



          Now assume $0\lt k\lt n$. If $m\lt n$, we can add rows of zeros to $A$ without changing $k$ or the solution set, so we can assume $k\lt n\le m$. There is some non-zero element $r\in R$ that annihilates all minors of dimension $k+1$, and there is a minor of dimension $k$ that isn't annihilated by $r$. Without loss of generality, assume that this is the minor of the first $k$ rows and columns. Now consider the matrix formed of the first $k+1$ rows and columns of $A$, and form a solution $x$ from the $(k+1)$-th column of its adjugate by multiplying it by $r$ and padding it with zeros. By construction, the first $k$ entries of $Ax$ are $r$ times determinants of matrices with two equal rows, and thus vanish; the remaining entries are each $r$ times a minor of dimension $k+1$, and thus also vanish. But the $(k+1)$-th entry of this solution is non-zero, being $r$ times the minor of the first $k$ rows and columns, which isn't annihilated by $r$. Thus we have constructed a non-trivial solution.
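The construction in the paragraph above can be carried out concretely for a small finite ring. The following sketch (plain Python; the choice of $R=\mathbb Z/6$ and of the particular matrix are assumptions made purely for illustration) runs it for a $2\times2$ matrix with $k=1$:

```python
# Worked instance of the construction above over R = Z/6 (the ring and
# the matrix are assumptions chosen purely for illustration).  Take
#     A = [[1, 0],
#          [0, 2]]
# Here k = 1: r = 3 annihilates the only minor of dimension 2
# (det A = 2, and 3 * 2 = 0 in Z/6), while the dimension-1 minor in the
# top-left corner, namely 1, is not annihilated by 3.
m = 6
A = [[1, 0],
     [0, 2]]
r = 3

# Adjugate of the (k+1) x (k+1) = 2 x 2 top-left block of A
# (which is A itself here).
adj = [[ A[1][1], -A[0][1]],
       [-A[1][0],  A[0][0]]]

# The solution: r times the (k+1)-th column of the adjugate; no zero
# padding is needed here since k + 1 = n.
x = [(r * adj[0][1]) % m, (r * adj[1][1]) % m]

# x solves Ax = 0 over Z/6 ...
assert all((A[i][0] * x[0] + A[i][1] * x[1]) % m == 0 for i in range(2))
# ... and is non-trivial: its last entry is r times the non-annihilated
# dimension-1 minor: 3 * 1 = 3 != 0 in Z/6.
assert x[1] == 3
```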



          In summary, if $klt n$, there is a non-trivial solution to $Ax=0$.



          Now assume conversely that there is such a solution $x$. If $n\gt m$, there are no minors of dimension $n$, so $k\lt n$. Thus we can assume $n\le m$. The minors of dimension $n$ are the determinants of matrices $B$ formed by choosing any $n$ rows of $A$. Since each row of $A$ times $x$ is $0$, we have $Bx=0$, and then multiplying by the adjugate of $B$ yields $(\det B)\,x=0$. Since there is at least one non-zero entry in the non-trivial solution $x$, there is at least one non-zero element of $R$ that annihilates all minors of dimension $n$, and thus $k\lt n$.
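The identity behind this step, $\operatorname{adj}(B)\,B=(\det B)\,I$, holds over any commutative ring and can be checked numerically. A minimal sketch in plain Python (the helpers `det`, `adjugate` and `matmul` are written here for this illustration, not taken from any library):

```python
# Sketch of the adjugate step: for a square matrix B over a commutative
# ring, adj(B) * B = det(B) * I, so Bx = 0 implies det(B) * x = 0.
# Exact-integer implementation; helper functions are for illustration.

def det(M):
    # Laplace expansion along the first row.
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def adjugate(M):
    # Transposed cofactor matrix.
    n = len(M)
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            minor = [row[:j] + row[j+1:] for k, row in enumerate(M) if k != i]
            adj[j][i] = (-1) ** (i + j) * det(minor)
    return adj

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

B = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
d = det(B)  # 1*(50-48) - 2*(40-42) + 3*(32-35) = -3
prod = matmul(adjugate(B), B)
# adj(B) * B equals det(B) times the identity matrix:
assert prod == [[d if i == j else 0 for j in range(3)] for i in range(3)]
```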



          Specializing to the case $m=n$ of square matrices, we can conclude:




          A system of linear equations $Ax=0$ with a square $n\times n$ matrix
          $A$ over a commutative ring $R$ has a non-trivial solution if and only
          if its determinant (its only minor of dimension $n$) is annihilated by
          some non-zero element of $R$, that is, if its determinant is a zero divisor or zero.
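Since $\mathbb Z/6$ is a finite ring, this criterion can be verified exhaustively. A brute-force sketch (plain Python; the choice of $R=\mathbb Z/6$ and of $2\times2$ matrices are assumptions made for illustration):

```python
# Brute-force check of the criterion over R = Z/6: a square matrix A
# has a non-trivial solution of Ax = 0 iff det(A) is zero or a zero
# divisor, i.e. iff some non-zero r in R satisfies r * det(A) = 0.
from itertools import product

m = 6  # work in Z/6

def det2(A):
    return (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % m

def has_nontrivial_kernel(A):
    for x in product(range(m), repeat=2):
        if x != (0, 0) and all(
            (A[i][0] * x[0] + A[i][1] * x[1]) % m == 0 for i in range(2)
        ):
            return True
    return False

def det_is_annihilated(A):
    d = det2(A)
    return any(r != 0 and (r * d) % m == 0 for r in range(m))

# Check the equivalence for every 2x2 matrix over Z/6.
for entries in product(range(m), repeat=4):
    A = [list(entries[:2]), list(entries[2:])]
    assert has_nontrivial_kernel(A) == det_is_annihilated(A)

# Example: det [[2,0],[0,1]] = 2 is a zero divisor (3*2 = 0 in Z/6),
# and indeed x = (3, 0) is a non-trivial solution.
```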








          edited Aug 11 '18 at 15:57

          answered Oct 11 '11 at 16:08
          joriki

          • $begingroup$
            The book joriki links to is A Second Semester of Linear Algebra by S. E. Payne. Here are two other links to the same book, via the author's site: PDF file. HTML page: Class Notes.
            $endgroup$
            – Pierre-Yves Gaillard
            Oct 11 '11 at 17:45












          8












          $begingroup$

          See Section III.8.7, entitled Application to Linear Equations, of Algebra, by Nicolas Bourbaki.



          EDIT 1. Let $R$ be a commutative ring, let $m$ and $n$ be positive integers, let $M$ be an $R$-module, and let $A:R^n\to M$ be $R$-linear.



          Identify the $n$-th exterior power $\Lambda^n(R^n)$ of $R^n$ with $R$ in the obvious way, so that $\Lambda^n(A)$ is a map from $R$ to $\Lambda^n(M)$.



          Put $v_i:=Ae_i$, where $e_i$ is the $i$-th vector of the canonical basis of $R^n$. In particular we have
          $$
          Ax=\sum_{i=1}^n x_i v_i,\quad\Lambda^n(A)\,r=r\,v_1\wedge\cdots\wedge v_n.
          $$

          (where $x_i$ is the $i$-th coordinate of $x$, and $r$ denotes any element of $\Lambda^n\left(R^n\right)\cong R$).




          If $\Lambda^n(A)$ is injective, so is $A$.




          In other words:




          If the $v_i$ are linearly dependent, then $r\,v_1\wedge\cdots\wedge v_n=0$ for some nonzero $r$ in $R$.




          Indeed, for $x$ in $\ker A$ we have
          $$
          \Lambda^n(A)\,x_1=x_1\,v_1\wedge v_2\wedge\cdots\wedge v_n=
          -\sum_{i=2}^n x_i\,v_i\wedge v_2\wedge\cdots\wedge v_n=0,
          $$

          and, similarly, $\Lambda^n(A)\,x_i=0$ for all $i$.
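For $n=2$ the identification $\Lambda^2(R^2)\cong R$ turns $v_1\wedge v_2$ into $\det A$, so the computation above says that each coordinate of a kernel element annihilates $\det A$. A quick numeric sketch over $R=\mathbb Z/6$ (the ring and the matrix are assumptions chosen for illustration):

```python
# For n = 2, identifying Lambda^2(R^2) with R turns v1 ^ v2 into det(A),
# so the computation above says: x in ker(A) forces x_i * det(A) = 0.
# Quick check over R = Z/6 with an illustrative example matrix.
m = 6
A = [[2, 0],
     [0, 1]]          # columns v1 = (2, 0), v2 = (0, 1)
detA = (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % m  # v1 ^ v2 = 2

x = (3, 0)            # A x = (6, 0) = (0, 0) in Z/6, so x is in ker A
assert all((A[i][0] * x[0] + A[i][1] * x[1]) % m == 0 for i in range(2))

# Each coordinate of x annihilates the wedge v1 ^ v2:
assert all((xi * detA) % m == 0 for xi in x)  # 3*2 = 6 = 0, and 0*2 = 0
```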



          [Edit: Old version (before Georges's comment): Assume now that $M$ embeds into $R^m$.]



          Assume now that there is an $R$-linear injection $B:M\to R^m$ such that
          $$
          \Lambda^n(B):\Lambda^n(M)\to\Lambda^n(R^m)
          $$

          is injective. This is always the case (for a suitable $m$) if $M$ is projective and finitely generated.




          If $A$ is injective, so is $\Lambda^n(A)$.




          In other words:




          If $r\,v_1\wedge\cdots\wedge v_n=0$ for some nonzero $r$ in $R$, then the $v_i$ are linearly dependent.




          The proof is given in joriki's nice answer.



          This is also proved as Proposition 12 in Bourbaki's Algebra III.7.9 p. 519. Unfortunately, I don't understand Bourbaki's argument. I'd be most grateful to whoever would be kind and patient enough to explain it to me.



          EDIT 2. According to the indications given by Tsit-Yuen Lam on page 150 of his book Exercises in modules and rings, the theorem is due to N. H. McCoy, and appeared first, as Theorem 1 page 288, in




          • N. H. McCoy, Remarks on Divisors of Zero, The American Mathematical Monthly Vol. 49, No. 5 (May, 1942), pp. 286-295, JSTOR.


          Lam also says that




          • N. H. McCoy, Rings and ideals, The Carus Mathematical Monographs, no. 8, The Mathematical Association of America, 1948,


          is an "excellent exposition" of the subject. See Theorem 51 page 159.



          McCoy's Theorem is also stated and proved in the following texts:




          • Ex. 5.23.A(3) on page 149 of Lam's Exercises in modules and rings.


          • Theorem 2.2 page 3 in Anton Gerashenko's notes from Lam's Course: Math 274, Commutative Rings, Fall 2006: PDF file.


          • Theorem 1.6 in Chapter 13, entitled "Various topics", of The CRing Project. --- PDF file for Chapter 13. --- PDF file for the whole book.


          • Z. Błocki, An elementary proof of the McCoy theorem, Univ. Iagel. Acta Math. 30 (1993), 215–218.


          • Theorem 6.4.16, page 101, of A Second Semester of Linear Algebra, Math 5718, by Stan Payne. PDF file.


















          $endgroup$













          • $begingroup$
            Thanks for this link! Note, however, that the result is proved only for square matrices there. This case was the original motivation for the question, but the question and my answer apply to the general case.
            $endgroup$
            – joriki
            Nov 10 '11 at 9:13

          • $begingroup$
            Dear @Pierre-Yves, I'm not quite sure that your claim (displayed in grey) "If $A$ is injective, so is $\Lambda^n A$" holds if you assume only that $M$ embeds into $R^m$. There is the subtle point that exterior products don't mean the same in both spaces; in other words, $\Lambda^n M\to\Lambda^n R^m$ needn't be injective. Everything is fine if $M$ is projective, though, and this is Bourbaki's assumption. But this is nitpicking: +1, needless to say.
            $endgroup$
            – Georges Elencwajg
            Nov 12 '11 at 14:17

          • $begingroup$
            Dear @Georges: No, this is definitely not nitpicking! Thanks a lot! I hope it's correct now. - It was my secret hope that you would read this answer. Did you see the last paragraph? I'm sure you understand Bourbaki's argument...
            $endgroup$
            – Pierre-Yves Gaillard
            Nov 12 '11 at 15:11

          • $begingroup$
            Dear @Pierre-Yves: No, I don't understand Bourbaki's argument either. Specifically, when he writes "it follows from no. 8, Corollary 3 to Theorem 1 that $\mu x_1$ is a linear combination...", I don't see how it follows. (By the way, this is Proposition 12 in my edition.)
            $endgroup$
            – Georges Elencwajg
            Nov 12 '11 at 18:19

          • $begingroup$
            Dear @Georges: Once more you're right: it's Proposition 12 (in the link to the English edition I give, and in my French edition - it was just a typo). Thank you very much for your time and effort. I'm having the same problem as you with Bourbaki's proof. It's weird. I find joriki's formulation of the argument very nice.
            $endgroup$
            – Pierre-Yves Gaillard
            Nov 12 '11 at 18:42

















          edited Jan 20 at 12:34
          darij grinberg

          answered Oct 11 '11 at 16:57
          Pierre-Yves Gaillard











