How to prove that eigenvectors from different eigenvalues are linearly independent












54














How can I prove that if I have $n$ eigenvectors from different eigenvalues, they are all linearly independent?






















      linear-algebra eigenvalues-eigenvectors














edited Jan 12 '16 at 16:38 – Martin Sleziak










asked Mar 27 '11 at 22:00 – Corey L.






















4 Answers
























          69












I'll do it with two vectors; I'll leave the general case to you.

Suppose $\mathbf{v}_1$ and $\mathbf{v}_2$ correspond to distinct eigenvalues $\lambda_1$ and $\lambda_2$, respectively.

Take a linear combination that is equal to $\mathbf{0}$: $\alpha_1\mathbf{v}_1+\alpha_2\mathbf{v}_2 = \mathbf{0}$. We need to show that $\alpha_1=\alpha_2=0$.

Applying $T$ to both sides, we get
$$\mathbf{0} = T(\mathbf{0}) = T(\alpha_1\mathbf{v}_1+\alpha_2\mathbf{v}_2) = \alpha_1\lambda_1\mathbf{v}_1 + \alpha_2\lambda_2\mathbf{v}_2.$$
Now, instead, multiply the original equation by $\lambda_1$:
$$\mathbf{0} = \lambda_1\alpha_1\mathbf{v}_1 + \lambda_1\alpha_2\mathbf{v}_2.$$
Now take the two equations,
$$\begin{align*}
\mathbf{0} &= \alpha_1\lambda_1\mathbf{v}_1 + \alpha_2\lambda_2\mathbf{v}_2\\
\mathbf{0} &= \alpha_1\lambda_1\mathbf{v}_1 + \alpha_2\lambda_1\mathbf{v}_2
\end{align*}$$
and take the difference:
$$\mathbf{0} = 0\mathbf{v}_1 + \alpha_2(\lambda_2-\lambda_1)\mathbf{v}_2 = \alpha_2(\lambda_2-\lambda_1)\mathbf{v}_2.$$

Since $\lambda_2-\lambda_1\neq 0$, and since $\mathbf{v}_2\neq\mathbf{0}$ (because $\mathbf{v}_2$ is an eigenvector), it follows that $\alpha_2=0$. Using this in the original linear combination $\mathbf{0} = \alpha_1\mathbf{v}_1 + \alpha_2\mathbf{v}_2$, we conclude that $\alpha_1=0$ as well (since $\mathbf{v}_1\neq\mathbf{0}$).

So $\mathbf{v}_1$ and $\mathbf{v}_2$ are linearly independent.

Now try using induction on $n$ for the general case.
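As a purely illustrative numerical sketch (my own addition, not part of this answer), the elimination step above can be mimicked with NumPy; the matrix below is an arbitrary choice with distinct eigenvalues:

```python
import numpy as np

# A concrete operator with distinct eigenvalues 2 and 5 (chosen for
# illustration; any matrix with distinct eigenvalues behaves the same way).
T = np.array([[2.0, 1.0],
              [0.0, 5.0]])

evals, evecs = np.linalg.eig(T)
v1, v2 = evecs[:, 0], evecs[:, 1]

# Independence of v1, v2 is equivalent to [v1 v2] having full rank.
M = np.column_stack([v1, v2])
print(np.linalg.matrix_rank(M))        # 2: the eigenvectors are independent

# The proof's elimination step: subtract lambda_1 * (a1 v1 + a2 v2)
# from T(a1 v1 + a2 v2); the v1 component cancels exactly, leaving
# a2 * (lambda_2 - lambda_1) * v2.
a1, a2 = 3.0, -4.0
combo = a1 * v1 + a2 * v2
diff = T @ combo - evals[0] * combo
print(np.allclose(diff, a2 * (evals[1] - evals[0]) * v2))  # True
```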






answered Mar 27 '11 at 22:06, edited Dec 2 '15 at 1:00 – Arturo Magidin




















• 3
  I believe that you wrote $\lambda_2$ instead of $\lambda_1$ in the row before "Now take".
  – jacob, Mar 14 '14 at 8:08

• 4
  Is there any intuition behind this? Any pictorial way of thinking?
  – IgNite, May 7 '16 at 8:42

• Maybe this will be of use: from $0 = \alpha_1 v_1 + \dots + \alpha_n v_n$, if you take $$\left\|\lim_{j\to\infty} \frac{1}{\lambda_1^j} A^j (\alpha_1 v_1 + \dots + \alpha_n v_n)\right\|,$$ then from the definition of an eigenvalue (that $Av = \lambda v$), we can see the $v_1$ component will grow much faster than the others, so that limit equals $\alpha_1$, which must equal $0$.
  – user203509, Apr 29 '17 at 15:54

• @Arturo Very nice.
  – user412674, May 27 '17 at 20:24



















          31












Alternative:

Suppose, for contradiction, that the vectors are dependent, and let $j$ be maximal such that $v_1,\dots,v_j$ are independent. Then there exist $c_i$, $1\leq i\leq j$, so that $\sum_{i=1}^j c_iv_i=v_{j+1}$. But by applying $T$ we also have
$$\sum_{i=1}^j c_i\lambda_iv_i=\lambda_{j+1}v_{j+1}=\lambda_{j+1}\sum_{i=1}^j c_i v_i.$$
Hence
$$\sum_{i=1}^j \left(\lambda_i-\lambda_{j+1}\right) c_iv_i=0.$$
Since $v_1,\dots,v_j$ are independent, each coefficient $(\lambda_i-\lambda_{j+1})c_i$ must vanish; and since $\lambda_i\neq \lambda_{j+1}$ for $1\leq i\leq j$, every $c_i=0$, forcing $v_{j+1}=0$ — a contradiction, since eigenvectors are nonzero.

Hope that helps,






answered Mar 28 '11 at 0:34, edited Aug 22 '17 at 14:27 – Eric Naslund




















• 4
  P.S. The argument uses the well-ordering principle on the naturals (by looking at the least $j$ such that $v_1,\ldots,v_{j+1}$ is dependent). Well-ordering for the naturals is equivalent to induction.
  – Arturo Magidin, Mar 28 '11 at 1:20

• @Arturo: Thanks! Should be fixed now. (Didn't realize I was using well-ordering in that way.)
  – Eric Naslund, Mar 28 '11 at 1:30

• 1
  No problem; I'll delete the other two comments since it's been fixed. In any case, many people prefer arguments along these lines to an explicit induction, even if they are logically equivalent (you can think of the argument you give as a proof by contradiction of the inductive step, with the base taken for granted [or as trivial, since $v_1$ is nonzero]; cast that way, it may be clearer why the two arguments are closely connected).
  – Arturo Magidin, Mar 28 '11 at 1:34

• 1
  @Eric Naslund This proof seems very similar to the one given in Axler...
  – user38268, Aug 31 '11 at 7:29

• @D B Lim: What is Axler?
  – Eric Naslund, Aug 31 '11 at 14:42



















          11












Hey, I think there's a slick way to do this without induction. Suppose that $T$ is a linear transformation of a vector space $V$ and that $v_1,\ldots,v_n \in V$ are eigenvectors of $T$ with corresponding distinct eigenvalues $\lambda_1,\ldots,\lambda_n \in F$ ($F$ the field of scalars). We want to show that, if $\sum_{i=1}^n c_i v_i = 0$, where the coefficients $c_i$ are in $F$, then necessarily each $c_i$ is zero.

For simplicity, I will just explain why $c_1 = 0$. Consider the polynomial $p_1(x) \in F[x]$ given by $p_1(x) = (x-\lambda_2) \cdots (x-\lambda_n)$. Note that the $x-\lambda_1$ factor is "missing" here. Now, since each $v_i$ is an eigenvector of $T$, we have
\begin{align*}
p_1(T) v_i = p_1(\lambda_i) v_i &&
\text{where} && p_1(\lambda_i) = \begin{cases}
0 & \text{if } i \neq 1 \\
p_1(\lambda_1) \neq 0 & \text{if } i = 1
\end{cases}.
\end{align*}

Thus, applying $p_1(T)$ to the sum $\sum_{i=1}^n c_i v_i = 0$, we get
$$ p_1(\lambda_1) c_1 v_1 = 0, $$
which implies $c_1 = 0$, since $p_1(\lambda_1) \neq 0$ and $v_1 \neq 0$.
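A small numerical illustration of this polynomial trick (my own sketch, with an arbitrary diagonal matrix as the example — not part of the answer): $p_1(T)$ annihilates every eigenvector except $v_1$, which it merely rescales.

```python
import numpy as np

# Hypothetical 3x3 example with distinct eigenvalues 1, 2, 3.
lams = np.array([1.0, 2.0, 3.0])
A = np.diag(lams)
# Standard basis vectors are eigenvectors of a diagonal matrix.
v = [np.eye(3)[:, i] for i in range(3)]

# p1(T) = (T - lambda_2 I)(T - lambda_3 I): the (x - lambda_1) factor is omitted.
I = np.eye(3)
p1_T = (A - lams[1] * I) @ (A - lams[2] * I)

# p1(T) kills v_2 and v_3 but scales v_1 by p1(lambda_1) != 0.
c = np.array([0.7, -1.3, 2.4])           # arbitrary coefficients
s = sum(ci * vi for ci, vi in zip(c, v))
out = p1_T @ s                            # = p1(lambda_1) * c_1 * v_1
p1_lam1 = (lams[0] - lams[1]) * (lams[0] - lams[2])
print(np.allclose(out, p1_lam1 * c[0] * v[0]))  # True
```

So if the sum $s$ were the zero vector, applying $p_1(T)$ would isolate $c_1 v_1$ and force $c_1 = 0$.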






































            1












For eigenvectors $\vec{v^1},\vec{v^2},\dots,\vec{v^n}$ with distinct eigenvalues $\lambda_1\neq\lambda_2\neq \dots \neq\lambda_n$ of an $n\times n$ matrix $A$:

Let $P$ be the $n\times n$ matrix whose columns are the eigenvectors:
$$P=\Big[\vec{v^1},\vec{v^2},\dots,\vec{v^n}\Big]$$

Let $\Lambda$ be the $n\times n$ matrix with the eigenvalues on the diagonal (zeros elsewhere):
$$\Lambda = \begin{bmatrix}
\lambda_1 & 0 & \dots & 0 \\
0 & \lambda_2 & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \dots & \lambda_n
\end{bmatrix}$$
Let $\vec{c}=(c_1,c_2,\dots,c_n)^T$.

We need to show that only $c_1=c_2=\dots=c_n=0$ can satisfy
$$c_1\vec{v^1}+c_2\vec{v^2}+\dots+c_n\vec{v^n}= \vec{0}.$$
Applying the matrix $A$ to this equation gives
$$c_1\lambda_1\vec{v^1}+c_2\lambda_2\vec{v^2}+\dots+c_n\lambda_n\vec{v^n}= \vec{0}.$$
We can write this equation in matrix form:
$$P\Lambda \vec{c}=\vec{0}$$

But since $A$ can be diagonalised with $\Lambda$, we know $P\Lambda=AP$,
$$\implies AP\vec{c}=\vec{0};$$
since $AP\neq 0$, we have $\vec{c}=0$.
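A quick numerical check of the identities this answer relies on (my own sketch with an arbitrary example matrix; as the comment below the answer notes, the real requirement is that $AP$, and hence $P$, be invertible, not merely nonzero):

```python
import numpy as np

# Example matrix with distinct eigenvalues 5 and 2 (chosen for illustration).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
evals, P = np.linalg.eig(A)

# Diagonalization identity used in the answer: A P = P Lambda.
Lam = np.diag(evals)
print(np.allclose(A @ P, P @ Lam))      # True

# P is invertible, so P c = 0 (equivalently Lambda^{-1} P^{-1} A P c = 0
# when the eigenvalues are nonzero) forces c = 0.
assert abs(np.linalg.det(P)) > 1e-12
c = np.linalg.solve(P, np.zeros(2))
print(np.allclose(c, 0.0))              # True
```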




























• Let me add a comment to the last step. In order for $c=0$ to be the only solution, $AP$ must be invertible; but since $A$ is already invertible because $\lambda_i \neq \lambda_j\ \forall\, i\neq j$, the only demand is that $P$ be invertible $\Rightarrow$ independent eigenvectors.
  – Thoth, Sep 13 '16 at 16:36













            Your Answer





            StackExchange.ifUsing("editor", function () {
            return StackExchange.using("mathjaxEditing", function () {
            StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
            StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
            });
            });
            }, "mathjax-editing");

            StackExchange.ready(function() {
            var channelOptions = {
            tags: "".split(" "),
            id: "69"
            };
            initTagRenderer("".split(" "), "".split(" "), channelOptions);

            StackExchange.using("externalEditor", function() {
            // Have to fire editor after snippets, if snippets enabled
            if (StackExchange.settings.snippets.snippetsEnabled) {
            StackExchange.using("snippets", function() {
            createEditor();
            });
            }
            else {
            createEditor();
            }
            });

            function createEditor() {
            StackExchange.prepareEditor({
            heartbeatType: 'answer',
            autoActivateHeartbeat: false,
            convertImagesToLinks: true,
            noModals: true,
            showLowRepImageUploadWarning: true,
            reputationToPostImages: 10,
            bindNavPrevention: true,
            postfix: "",
            imageUploader: {
            brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
            contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
            allowUrls: true
            },
            noCode: true, onDemand: true,
            discardSelector: ".discard-answer"
            ,immediatelyShowMarkdownHelp:true
            });


            }
            });














            draft saved

            draft discarded


















            StackExchange.ready(
            function () {
            StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f29371%2fhow-to-prove-that-eigenvectors-from-different-eigenvalues-are-linearly-independe%23new-answer', 'question_page');
            }
            );

            Post as a guest















            Required, but never shown

























            4 Answers
            4






            active

            oldest

            votes








            4 Answers
            4






            active

            oldest

            votes









            active

            oldest

            votes






            active

            oldest

            votes









            69












            $begingroup$

            I'll do it with two vectors. I'll leave it to you do it in general.



            Suppose $mathbf{v}_1$ and $mathbf{v}_2$ correspond to distinct eigenvalues $lambda_1$ and $lambda_2$, respectively.



            Take a linear combination that is equal to $0$, $alpha_1mathbf{v}_1+alpha_2mathbf{v}_2 = mathbf{0}$. We need to show that $alpha_1=alpha_2=0$.



            Applying $T$ to both sides, we get
            $$mathbf{0} = T(mathbf{0}) = T(alpha_1mathbf{v}_1+alpha_2mathbf{v}_2) = alpha_1lambda_1mathbf{v}_1 + alpha_2lambda_2mathbf{v}_2.$$
            Now, instead, multiply the original equation by $lambda_1$:
            $$mathbf{0} = lambda_1alpha_1mathbf{v}_1 + lambda_1alpha_2mathbf{v}_2.$$
            Now take the two equations,
            $$begin{align*}
            mathbf{0} &= alpha_1lambda_1mathbf{v}_1 + alpha_2lambda_2mathbf{v}_2\
            mathbf{0} &= alpha_1lambda_1mathbf{v}_1 + alpha_2lambda_1mathbf{v}_2
            end{align*}$$
            and taking the difference, we get:
            $$mathbf{0} = 0mathbf{v}_1 + alpha_2(lambda_2-lambda_1)mathbf{v}_2 = alpha_2(lambda_2-lambda_1)mathbf{v}_2.$$



            Since $lambda_2-lambda_1neq 0$, and since $mathbf{v}_2neqmathbf{0}$ (because $mathbf{v}_2$ is an eigenvector), then $alpha_2=0$. Using this on the original linear combination $mathbf{0} = alpha_1mathbf{v}_1 + alpha_2mathbf{v}_2$, we conclude that $alpha_1=0$ as well (since $mathbf{v}_1neqmathbf{0}$).



            So $mathbf{v}_1$ and $mathbf{v}_2$ are linearly independent.



            Now try using induction on $n$ for the general case.






            share|cite|improve this answer











            $endgroup$









            • 3




              $begingroup$
              I believe that you wrote $lambda_2$ instead of $lambda_1$ in the row before "Now take"
              $endgroup$
              – jacob
              Mar 14 '14 at 8:08






            • 4




              $begingroup$
              Is there any intuition behind this? Any pictorial way of thinking?
              $endgroup$
              – IgNite
              May 7 '16 at 8:42










            • $begingroup$
              Maybe this will be of use: from the $0 = alpha_1 v_1 + ... + alpha_n v_n$, if you do $$left|left|lim_{jtoinfty} (1/ lambda_1^j) A^j (alpha_1 v_1 + ... + alpha_n v_n)right|right|$$, from the definition of the eigenvalue (that $Av = lambda v$), we can see the $v_1$ component will grow much faster than the others, so that limit equals $alpha_1$, which equals $0$ if linearly independent.
              $endgroup$
              – user203509
              Apr 29 '17 at 15:54










            • $begingroup$
              @Arturo Very nice.
              $endgroup$
              – user412674
              May 27 '17 at 20:24
















            69












            $begingroup$

            I'll do it with two vectors. I'll leave it to you do it in general.



            Suppose $mathbf{v}_1$ and $mathbf{v}_2$ correspond to distinct eigenvalues $lambda_1$ and $lambda_2$, respectively.



            Take a linear combination that is equal to $0$, $alpha_1mathbf{v}_1+alpha_2mathbf{v}_2 = mathbf{0}$. We need to show that $alpha_1=alpha_2=0$.



            Applying $T$ to both sides, we get
            $$mathbf{0} = T(mathbf{0}) = T(alpha_1mathbf{v}_1+alpha_2mathbf{v}_2) = alpha_1lambda_1mathbf{v}_1 + alpha_2lambda_2mathbf{v}_2.$$
            Now, instead, multiply the original equation by $lambda_1$:
            $$mathbf{0} = lambda_1alpha_1mathbf{v}_1 + lambda_1alpha_2mathbf{v}_2.$$
            Now take the two equations,
            $$begin{align*}
            mathbf{0} &= alpha_1lambda_1mathbf{v}_1 + alpha_2lambda_2mathbf{v}_2\
            mathbf{0} &= alpha_1lambda_1mathbf{v}_1 + alpha_2lambda_1mathbf{v}_2
            end{align*}$$
            and taking the difference, we get:
            $$mathbf{0} = 0mathbf{v}_1 + alpha_2(lambda_2-lambda_1)mathbf{v}_2 = alpha_2(lambda_2-lambda_1)mathbf{v}_2.$$



            Since $lambda_2-lambda_1neq 0$, and since $mathbf{v}_2neqmathbf{0}$ (because $mathbf{v}_2$ is an eigenvector), then $alpha_2=0$. Using this on the original linear combination $mathbf{0} = alpha_1mathbf{v}_1 + alpha_2mathbf{v}_2$, we conclude that $alpha_1=0$ as well (since $mathbf{v}_1neqmathbf{0}$).



            So $mathbf{v}_1$ and $mathbf{v}_2$ are linearly independent.



            Now try using induction on $n$ for the general case.






            share|cite|improve this answer











            $endgroup$









            • 3




              $begingroup$
              I believe that you wrote $lambda_2$ instead of $lambda_1$ in the row before "Now take"
              $endgroup$
              – jacob
              Mar 14 '14 at 8:08






            • 4




              $begingroup$
              Is there any intuition behind this? Any pictorial way of thinking?
              $endgroup$
              – IgNite
              May 7 '16 at 8:42










            • $begingroup$
              Maybe this will be of use: from the $0 = alpha_1 v_1 + ... + alpha_n v_n$, if you do $$left|left|lim_{jtoinfty} (1/ lambda_1^j) A^j (alpha_1 v_1 + ... + alpha_n v_n)right|right|$$, from the definition of the eigenvalue (that $Av = lambda v$), we can see the $v_1$ component will grow much faster than the others, so that limit equals $alpha_1$, which equals $0$ if linearly independent.
              $endgroup$
              – user203509
              Apr 29 '17 at 15:54










            • $begingroup$
              @Arturo Very nice.
              $endgroup$
              – user412674
              May 27 '17 at 20:24














            69












            69








            69





            $begingroup$

            I'll do it with two vectors. I'll leave it to you do it in general.



            Suppose $mathbf{v}_1$ and $mathbf{v}_2$ correspond to distinct eigenvalues $lambda_1$ and $lambda_2$, respectively.



            Take a linear combination that is equal to $0$, $alpha_1mathbf{v}_1+alpha_2mathbf{v}_2 = mathbf{0}$. We need to show that $alpha_1=alpha_2=0$.



            Applying $T$ to both sides, we get
            $$mathbf{0} = T(mathbf{0}) = T(alpha_1mathbf{v}_1+alpha_2mathbf{v}_2) = alpha_1lambda_1mathbf{v}_1 + alpha_2lambda_2mathbf{v}_2.$$
            Now, instead, multiply the original equation by $lambda_1$:
            $$mathbf{0} = lambda_1alpha_1mathbf{v}_1 + lambda_1alpha_2mathbf{v}_2.$$
            Now take the two equations,
            $$begin{align*}
            mathbf{0} &= alpha_1lambda_1mathbf{v}_1 + alpha_2lambda_2mathbf{v}_2\
            mathbf{0} &= alpha_1lambda_1mathbf{v}_1 + alpha_2lambda_1mathbf{v}_2
            end{align*}$$
            and taking the difference, we get:
            $$mathbf{0} = 0mathbf{v}_1 + alpha_2(lambda_2-lambda_1)mathbf{v}_2 = alpha_2(lambda_2-lambda_1)mathbf{v}_2.$$



            Since $lambda_2-lambda_1neq 0$, and since $mathbf{v}_2neqmathbf{0}$ (because $mathbf{v}_2$ is an eigenvector), then $alpha_2=0$. Using this on the original linear combination $mathbf{0} = alpha_1mathbf{v}_1 + alpha_2mathbf{v}_2$, we conclude that $alpha_1=0$ as well (since $mathbf{v}_1neqmathbf{0}$).



            So $mathbf{v}_1$ and $mathbf{v}_2$ are linearly independent.



            Now try using induction on $n$ for the general case.






            share|cite|improve this answer











            $endgroup$



            I'll do it with two vectors. I'll leave it to you do it in general.



            Suppose $mathbf{v}_1$ and $mathbf{v}_2$ correspond to distinct eigenvalues $lambda_1$ and $lambda_2$, respectively.



            Take a linear combination that is equal to $0$, $alpha_1mathbf{v}_1+alpha_2mathbf{v}_2 = mathbf{0}$. We need to show that $alpha_1=alpha_2=0$.



            Applying $T$ to both sides, we get
            $$mathbf{0} = T(mathbf{0}) = T(alpha_1mathbf{v}_1+alpha_2mathbf{v}_2) = alpha_1lambda_1mathbf{v}_1 + alpha_2lambda_2mathbf{v}_2.$$
            Now, instead, multiply the original equation by $lambda_1$:
            $$mathbf{0} = lambda_1alpha_1mathbf{v}_1 + lambda_1alpha_2mathbf{v}_2.$$
            Now take the two equations,
            $$begin{align*}
            mathbf{0} &= alpha_1lambda_1mathbf{v}_1 + alpha_2lambda_2mathbf{v}_2\
            mathbf{0} &= alpha_1lambda_1mathbf{v}_1 + alpha_2lambda_1mathbf{v}_2
            end{align*}$$
            and taking the difference, we get:
            $$mathbf{0} = 0mathbf{v}_1 + alpha_2(lambda_2-lambda_1)mathbf{v}_2 = alpha_2(lambda_2-lambda_1)mathbf{v}_2.$$



            Since $lambda_2-lambda_1neq 0$, and since $mathbf{v}_2neqmathbf{0}$ (because $mathbf{v}_2$ is an eigenvector), then $alpha_2=0$. Using this on the original linear combination $mathbf{0} = alpha_1mathbf{v}_1 + alpha_2mathbf{v}_2$, we conclude that $alpha_1=0$ as well (since $mathbf{v}_1neqmathbf{0}$).



            So $mathbf{v}_1$ and $mathbf{v}_2$ are linearly independent.



            Now try using induction on $n$ for the general case.







            share|cite|improve this answer














            share|cite|improve this answer



            share|cite|improve this answer








            edited Dec 2 '15 at 1:00









            Community

            1




            1










            answered Mar 27 '11 at 22:06









            Arturo MagidinArturo Magidin

            261k33585906




            261k33585906








            • 3




              $begingroup$
              I believe that you wrote $lambda_2$ instead of $lambda_1$ in the row before "Now take"
              $endgroup$
              – jacob
              Mar 14 '14 at 8:08






            • 4




              $begingroup$
              Is there any intuition behind this? Any pictorial way of thinking?
              $endgroup$
              – IgNite
              May 7 '16 at 8:42










            • $begingroup$
              Maybe this will be of use: from the $0 = alpha_1 v_1 + ... + alpha_n v_n$, if you do $$left|left|lim_{jtoinfty} (1/ lambda_1^j) A^j (alpha_1 v_1 + ... + alpha_n v_n)right|right|$$, from the definition of the eigenvalue (that $Av = lambda v$), we can see the $v_1$ component will grow much faster than the others, so that limit equals $alpha_1$, which equals $0$ if linearly independent.
              $endgroup$
              – user203509
              Apr 29 '17 at 15:54










            • $begingroup$
              @Arturo Very nice.
              $endgroup$
              – user412674
              May 27 '17 at 20:24














            • 3




              $begingroup$
              I believe that you wrote $lambda_2$ instead of $lambda_1$ in the row before "Now take"
              $endgroup$
              – jacob
              Mar 14 '14 at 8:08






            • 4




              $begingroup$
              Is there any intuition behind this? Any pictorial way of thinking?
              $endgroup$
              – IgNite
              May 7 '16 at 8:42










            • $begingroup$
              Maybe this will be of use: from the $0 = alpha_1 v_1 + ... + alpha_n v_n$, if you do $$left|left|lim_{jtoinfty} (1/ lambda_1^j) A^j (alpha_1 v_1 + ... + alpha_n v_n)right|right|$$, from the definition of the eigenvalue (that $Av = lambda v$), we can see the $v_1$ component will grow much faster than the others, so that limit equals $alpha_1$, which equals $0$ if linearly independent.
              $endgroup$
              – user203509
              Apr 29 '17 at 15:54










            • $begingroup$
              @Arturo Very nice.
              $endgroup$
              – user412674
              May 27 '17 at 20:24








            3




            3




            $begingroup$
            I believe that you wrote $lambda_2$ instead of $lambda_1$ in the row before "Now take"
            $endgroup$
            – jacob
            Mar 14 '14 at 8:08




            $begingroup$
            I believe that you wrote $lambda_2$ instead of $lambda_1$ in the row before "Now take"
            $endgroup$
            – jacob
            Mar 14 '14 at 8:08




            4




            4




            $begingroup$
            Is there any intuition behind this? Any pictorial way of thinking?
            $endgroup$
            – IgNite
            May 7 '16 at 8:42




            $begingroup$
            Is there any intuition behind this? Any pictorial way of thinking?
            $endgroup$
            – IgNite
            May 7 '16 at 8:42












            $begingroup$
            Maybe this will be of use: from the $0 = alpha_1 v_1 + ... + alpha_n v_n$, if you do $$left|left|lim_{jtoinfty} (1/ lambda_1^j) A^j (alpha_1 v_1 + ... + alpha_n v_n)right|right|$$, from the definition of the eigenvalue (that $Av = lambda v$), we can see the $v_1$ component will grow much faster than the others, so that limit equals $alpha_1$, which equals $0$ if linearly independent.
            $endgroup$
            – user203509
            Apr 29 '17 at 15:54




            $begingroup$
            Maybe this will be of use: from the $0 = alpha_1 v_1 + ... + alpha_n v_n$, if you do $$left|left|lim_{jtoinfty} (1/ lambda_1^j) A^j (alpha_1 v_1 + ... + alpha_n v_n)right|right|$$, from the definition of the eigenvalue (that $Av = lambda v$), we can see the $v_1$ component will grow much faster than the others, so that limit equals $alpha_1$, which equals $0$ if linearly independent.
            $endgroup$
            – user203509
            Apr 29 '17 at 15:54












            $begingroup$
            @Arturo Very nice.
            $endgroup$
            – user412674
            May 27 '17 at 20:24




            $begingroup$
            @Arturo Very nice.
            $endgroup$
            – user412674
            May 27 '17 at 20:24











            31












            $begingroup$

            Alternative:



            Let $j$ be the maximal $j$ such that $v_1,dots,v_j$ are independent. Then there exists $c_i$, $1leq ileq j$ so that $sum_{i=1}^j c_iv_i=v_{j+1}$. But by applying $T$ we also have that



            $$sum_{i=1}^j c_ilambda_iv_i=lambda_{j+1}v_{j+1}=lambda_{j+1}sum_{i=1}^j c_i v_i$$ Hence $$sum_{i=1}^j left(lambda_i-lambda_{j+1}right) c_iv_i=0$$ which is a contradiction since $lambda_ineq lambda_{j+1}$ for $1leq ileq j$.



            Hope that helps,






            share|cite|improve this answer











            $endgroup$









            • 4




              $begingroup$
              P.S. The argument uses the well-ordering principle on the naturals (by looking at the least $j$ such that $v_1,ldots,v_{j+1}$ is dependent). Well-ordering for the naturals is equivalent to induction.
              $endgroup$
              – Arturo Magidin
              Mar 28 '11 at 1:20










            • $begingroup$
              @Arturo: Thanks! Should be fixed now. (Didn't realize I was using well ordering in that way)
              $endgroup$
              – Eric Naslund
              Mar 28 '11 at 1:30






            • 1




              $begingroup$
              No problem; I'll delete the other two comments since it's been fixed. In any case, many people prefer arguments along these lines to an explicit induction, even if they are logically equivalent (you can think of the argument you give as a proof by contradiction of the inductive step, with the base being taken for granted [or as trivial, since $v_1$ is nonzero]; cast that way, it may be clearer why the two arguments are closely connected).
              $endgroup$
              – Arturo Magidin
              Mar 28 '11 at 1:34






            • 1




              $begingroup$
              @Eric Naslund This proofs seems very similar to the one given in Axler...
              $endgroup$
              – user38268
              Aug 31 '11 at 7:29










            • $begingroup$
              @D B Lim: What is Axler?
              $endgroup$
              – Eric Naslund
              Aug 31 '11 at 14:42
















            31












            $begingroup$

            Alternative:



            Let $j$ be the maximal $j$ such that $v_1,dots,v_j$ are independent. Then there exists $c_i$, $1leq ileq j$ so that $sum_{i=1}^j c_iv_i=v_{j+1}$. But by applying $T$ we also have that



            $$sum_{i=1}^j c_ilambda_iv_i=lambda_{j+1}v_{j+1}=lambda_{j+1}sum_{i=1}^j c_i v_i$$ Hence $$sum_{i=1}^j left(lambda_i-lambda_{j+1}right) c_iv_i=0$$ which is a contradiction since $lambda_ineq lambda_{j+1}$ for $1leq ileq j$.



            Hope that helps,






            share|cite|improve this answer











            $endgroup$









            • 4




              $begingroup$
              P.S. The argument uses the well-ordering principle on the naturals (by looking at the least $j$ such that $v_1,ldots,v_{j+1}$ is dependent). Well-ordering for the naturals is equivalent to induction.
              $endgroup$
              – Arturo Magidin
              Mar 28 '11 at 1:20










            • $begingroup$
              @Arturo: Thanks! Should be fixed now. (Didn't realize I was using well ordering in that way)
              $endgroup$
              – Eric Naslund
              Mar 28 '11 at 1:30






            • 1




              $begingroup$
              No problem; I'll delete the other two comments since it's been fixed. In any case, many people prefer arguments along these lines to an explicit induction, even if they are logically equivalent (you can think of the argument you give as a proof by contradiction of the inductive step, with the base being taken for granted [or as trivial, since $v_1$ is nonzero]; cast that way, it may be clearer why the two arguments are closely connected).
              $endgroup$
              – Arturo Magidin
              Mar 28 '11 at 1:34






            • 1




              $begingroup$
              @Eric Naslund This proofs seems very similar to the one given in Axler...
              $endgroup$
              – user38268
              Aug 31 '11 at 7:29










            • $begingroup$
              @D B Lim: What is Axler?
              $endgroup$
              – Eric Naslund
              Aug 31 '11 at 14:42














            31












            31








            31





            $begingroup$

            Alternative:



            Let $j$ be the maximal $j$ such that $v_1,dots,v_j$ are independent. Then there exists $c_i$, $1leq ileq j$ so that $sum_{i=1}^j c_iv_i=v_{j+1}$. But by applying $T$ we also have that



            $$sum_{i=1}^j c_ilambda_iv_i=lambda_{j+1}v_{j+1}=lambda_{j+1}sum_{i=1}^j c_i v_i$$ Hence $$sum_{i=1}^j left(lambda_i-lambda_{j+1}right) c_iv_i=0$$ which is a contradiction since $lambda_ineq lambda_{j+1}$ for $1leq ileq j$.



            Hope that helps,






            share|cite|improve this answer











            $endgroup$



edited Aug 22 '17 at 14:27

answered Mar 28 '11 at 0:34

Eric Naslund
60.2k10138240
• 4
  $begingroup$
  P.S. The argument uses the well-ordering principle on the naturals (by looking at the least $j$ such that $v_1,\ldots,v_{j+1}$ is dependent). Well-ordering for the naturals is equivalent to induction.
  $endgroup$
  – Arturo Magidin
  Mar 28 '11 at 1:20

• $begingroup$
  @Arturo: Thanks! Should be fixed now. (Didn't realize I was using well ordering in that way.)
  $endgroup$
  – Eric Naslund
  Mar 28 '11 at 1:30

• 1
  $begingroup$
  No problem; I'll delete the other two comments since it's been fixed. In any case, many people prefer arguments along these lines to an explicit induction, even if they are logically equivalent (you can think of the argument you give as a proof by contradiction of the inductive step, with the base being taken for granted [or as trivial, since $v_1$ is nonzero]; cast that way, it may be clearer why the two arguments are closely connected).
  $endgroup$
  – Arturo Magidin
  Mar 28 '11 at 1:34

• 1
  $begingroup$
  @Eric Naslund: This proof seems very similar to the one given in Axler...
  $endgroup$
  – user38268
  Aug 31 '11 at 7:29

• $begingroup$
  @D B Lim: What is Axler?
  $endgroup$
  – Eric Naslund
  Aug 31 '11 at 14:42
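Neither proof needs computation, but the statement itself is easy to sanity-check numerically. Below is a minimal NumPy sketch (the test matrix is my own choice, purely for illustration): eigenvectors belonging to three distinct eigenvalues assemble into a matrix of full rank, i.e. they are linearly independent.

```python
import numpy as np

# Test matrix chosen (for illustration only) to have the distinct eigenvalues 3, 2, 1.
A = np.array([[3.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])

# Columns of `eigenvectors` are eigenvectors of A, ordered to match `eigenvalues`.
eigenvalues, eigenvectors = np.linalg.eig(A)

# The eigenvalues really are pairwise distinct ...
assert len(set(np.round(eigenvalues, 8))) == 3

# ... so the eigenvector matrix should have full rank, i.e. independent columns.
rank = np.linalg.matrix_rank(eigenvectors)
print(rank)  # 3
```

Any matrix with pairwise distinct eigenvalues works here; the rank assertion is exactly the statement being proved.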














            11












$begingroup$

Hey, I think there's a slick way to do this without induction. Suppose that $T$ is a linear transformation of a vector space $V$ and that $v_1,\ldots,v_n \in V$ are eigenvectors of $T$ with corresponding eigenvalues $\lambda_1,\ldots,\lambda_n \in F$ ($F$ the field of scalars). We want to show that, if $\sum_{i=1}^n c_i v_i = 0$, where the coefficients $c_i$ are in $F$, then necessarily each $c_i$ is zero.

For simplicity, I will just explain why $c_1 = 0$. Consider the polynomial $p_1(x) \in F[x]$ given as $p_1(x) = (x-\lambda_2) \cdots (x-\lambda_n)$. Note that the $x-\lambda_1$ term is "missing" here. Now, since each $v_i$ is an eigenvector of $T$, we have
\begin{align*}
p_1(T) v_i = p_1(\lambda_i) v_i &&
\text{ where} && p_1(\lambda_i) = \begin{cases}
0 & \text{ if } i \neq 1 \\
p_1(\lambda_1) \neq 0 & \text{ if } i = 1
\end{cases}.
\end{align*}

Thus, applying $p_1(T)$ to the sum $\sum_{i=1}^n c_i v_i = 0$, we get
$$ p_1(\lambda_1) c_1 v_1 = 0, $$
which implies $c_1 = 0$, since $p_1(\lambda_1) \neq 0$ and $v_1 \neq 0$.

share|cite|improve this answer

$endgroup$


















edited Nov 2 '15 at 2:34

answered Nov 2 '15 at 1:27

Mike F
12.4k23481
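The polynomial trick above can also be watched numerically. The sketch below is my own setup (not code from the answer): it forms $p_1(A)=(A-\lambda_2 I)(A-\lambda_3 I)$ and applies it to a combination $\sum_i c_i v_i$, annihilating every term except the one carrying $c_1$.

```python
import numpy as np

# Diagonal matrix with distinct eigenvalues lambda_1=5, lambda_2=2, lambda_3=-1.
A = np.diag([5.0, 2.0, -1.0])
v1, v2, v3 = np.eye(3)          # corresponding eigenvectors (standard basis here)
c = np.array([4.0, -3.0, 7.0])  # arbitrary coefficients
w = c[0]*v1 + c[1]*v2 + c[2]*v3 # the combination sum_i c_i v_i

# p_1(A) = (A - lambda_2 I)(A - lambda_3 I): kills v2 and v3, scales v1 by p_1(lambda_1).
I = np.eye(3)
p1_A = (A - 2.0*I) @ (A - (-1.0)*I)

p1_lambda1 = (5.0 - 2.0) * (5.0 - (-1.0))  # = 18, nonzero
result = p1_A @ w
print(result)  # p_1(lambda_1) * c_1 * v_1 = [72, 0, 0]
```

Since $p_1(\lambda_1)=18\neq 0$ and $v_1\neq 0$, a zero combination would force $c_1=0$, exactly as in the proof.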























                    1












$begingroup$

For eigenvectors $\vec{v^1},\vec{v^2},\dots,\vec{v^n}$ with distinct eigenvalues $\lambda_1\neq\lambda_2\neq \dots \neq\lambda_n$ of an $n\times n$ matrix $A$:

Let $P$ be the $n\times n$ matrix with the eigenvectors as its columns:
$$P=\Big[\vec{v^1},\vec{v^2},\dots,\vec{v^n}\Big]$$

Let $\Lambda$ be the $n\times n$ matrix with the eigenvalues on the diagonal (zeros elsewhere):
$$\Lambda = \begin{bmatrix}
\lambda_1 & 0 & \dots & 0 \\
0 & \lambda_2 & \dots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \dots & \lambda_n
\end{bmatrix}$$

Let $\vec{c}=(c_1,c_2,\dots,c_n)^T$.

We need to show that only $c_1=c_2=\dots=c_n=0$ can satisfy
$$c_1\vec{v^1}+c_2\vec{v^2}+\dots+c_n\vec{v^n}=\vec{0}.$$
Applying the matrix $A$ to this equation gives
$$c_1\lambda_1\vec{v^1}+c_2\lambda_2\vec{v^2}+\dots+c_n\lambda_n\vec{v^n}=\vec{0}.$$
We can write this equation in matrix form:
$$P\Lambda \vec{c}=\vec{0}$$

But since $A$ can be diagonalised to $\Lambda$, we know $P\Lambda=AP$,
$$\implies AP\vec{c}=\vec{0},$$
and since $AP\neq 0$, we have $\vec{c}=0$.

share|cite|improve this answer

$endgroup$













• $begingroup$
  Let me add a comment to the last step. In order to have $c=0$ as the only solution, $AP$ must be invertible; but since $A$ is already invertible because $\lambda_i \neq \lambda_j \ \forall\, i\neq j$, the only demand is that $P$ be invertible $\Rightarrow$ independent eigenvectors.
  $endgroup$
  – Thoth
  Sep 13 '16 at 16:36


















answered Mar 13 '16 at 1:58

Zeeshan Ahmad
1239
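The relation $P\Lambda = AP$ that this answer leans on is easy to confirm numerically; here is a small sketch (the example matrix is my own choice):

```python
import numpy as np

# Symmetric example matrix with distinct eigenvalues 3 and 1.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, P = np.linalg.eig(A)  # columns of P are eigenvectors of A
Lam = np.diag(eigvals)         # Lambda: eigenvalues on the diagonal

print(np.allclose(P @ Lam, A @ P))  # True: A P = P Lambda
print(np.linalg.matrix_rank(P))     # 2: the columns of P are independent
```

For this symmetric example $P$ is orthogonal, so its columns are visibly independent; the rank check makes that explicit.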











