Proof that any linearly independent set has at most $n$ elements (when the vector space has a basis with $n$ elements)














My teacher gave us this proof today, but I don't know if I understood it entirely:

Theorem:

Suppose $V$ is a finitely generated vector space over the reals, and let
$$B = \{v_1,\dots, v_n\}$$
be a basis for $V$.

If $S\subset V$ is linearly independent (L.I.), then $S$ is finite and has at most $n$ elements.




Proof:

Suppose, for contradiction, that $S = \{y_1,\dots,y_m\}$ is L.I. with $m>n$, and consider a linear combination
$$\alpha_1y_1+\cdots+\alpha_my_m = 0\tag{1}$$
Since $B$ is a basis, every vector of $V$ (in particular every $y_i\in S$) lies in the span of $B$, so each $y_i$ can be written as a linear combination of the elements of $B$:
$$y_1 = \beta_{11}v_1+\cdots+\beta_{n1}v_n\\
y_2 = \beta_{12}v_1+\cdots+\beta_{n2}v_n\\
\vdots\\
y_m = \beta_{1m}v_1+\cdots+\beta_{nm}v_n$$
Substituting this into $(1)$, we have:
$$\alpha_1(\beta_{11}v_1+\cdots+\beta_{n1}v_n)+\cdots+\alpha_m(\beta_{1m}v_1+\cdots+\beta_{nm}v_n) = 0\implies\\
(\beta_{11}\alpha_1+\beta_{12}\alpha_2+\cdots+\beta_{1m}\alpha_m)v_1+\cdots+(\beta_{n1}\alpha_1+\beta_{n2}\alpha_2+\cdots+\beta_{nm}\alpha_m)v_n = 0$$
But since $B$ is L.I., this linear combination forces all of its coefficients to be $0$. Therefore we end up with the homogeneous system
$$\begin{cases}\beta_{11}\alpha_1+\beta_{12}\alpha_2+\cdots+\beta_{1m}\alpha_m = 0\\
\vdots\\
\beta_{n1}\alpha_1+\beta_{n2}\alpha_2+\cdots+\beta_{nm}\alpha_m = 0\end{cases}$$
which has $n$ equations in $m>n$ unknowns, so it is consistent and underdetermined, which implies it has infinitely many solutions. In particular one of them is nontrivial, so $S$ is L.D., a contradiction.
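The last step rests on the fact that a homogeneous linear system with more unknowns than equations always has a nontrivial solution. As a quick numerical sanity check (my own addition, not part of the teacher's proof), one can build a random $n \times m$ coefficient matrix with $m > n$ and extract a nontrivial null-space vector from its SVD:

```python
import numpy as np

# Sanity check: a homogeneous system B @ alpha = 0 with n equations
# and m > n unknowns always has a nontrivial solution alpha.
rng = np.random.default_rng(0)
n, m = 3, 5                       # n basis vectors, m > n candidate vectors
B = rng.standard_normal((n, m))   # the beta_{ij} coefficient matrix

# Null-space basis from the SVD: the rows of Vt beyond the rank of B.
_, s, Vt = np.linalg.svd(B)
rank = int(np.sum(s > 1e-10))     # rank <= n < m, so some rows are left over
null_space = Vt[rank:]

alpha = null_space[0]             # one nontrivial solution (a unit vector)
assert not np.allclose(alpha, 0)  # not the trivial solution
assert np.allclose(B @ alpha, 0)  # satisfies all n equations
```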




I think Wikipedia has a similar proof, but there it proves directly that any two bases have the same number of elements (could you tell me why a vector $v_i$ suddenly appears in the middle of that proof?). In this proof, the conclusion is that $S$ has at most $n$ elements.
Is my proof correct? Did I understand it correctly? (I didn't copy it; this is my understanding of the proof.)










  • Your understanding seems to be correct. In (1) you wanted to say that, for $S$ linearly independent, $\alpha_1y_1+\cdots+\alpha_my_m = 0$ if and only if $\alpha_{i}=0$ for all $i \in \{1, \dots, m\}$. Then your proof essentially shows that there exists a set of $\alpha_{i}$'s that are not all zero and still satisfy (1). Hence, $S$ is not L.I. Note that it is important for the argument that the $\Rightarrow$ in the middle of your proof is actually an equivalence $\Leftrightarrow$.
    – megas, Dec 9 '14 at 0:12


















linear-algebra vector-spaces proof-verification






edited Dec 9 '14 at 0:20







Guerlando OCs

















asked Dec 8 '14 at 23:51









Guerlando OCs

1 Answer


















It seems nothing is wrong in your understanding. Probably what confuses you is that the proof on Wikipedia is a generalization of what you learned in class: your dimension theorem is a version stated only for finite-dimensional vector spaces, while the one on Wikipedia is the full version, covering both finite- and infinite-dimensional vector spaces.

In general, in a vector space $V$, the span of a set $S \subseteq V$ is defined as the set of all finite linear combinations of elements of $S$. A set $S$ is linearly independent if, whenever a finite linear combination of elements of $S$ equals the zero vector, the coefficients are all zero. The standard way of writing such a finite combination is $\sum_{i \in I}$ with $|I| < \infty$, as in the proof from Wikipedia.
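To make this concrete (an illustration of my own, not from the original answer): in $\mathbb{R}^2$ with the standard basis $\{e_1, e_2\}$, so $n = 2$, any set of $m = 3$ vectors must be linearly dependent, and a witnessing relation is easy to exhibit:

```latex
% S = \{e_1,\, e_2,\, e_1 + e_2\} has m = 3 > n = 2 elements, and the
% nontrivial relation below shows S is linearly dependent:
1 \cdot e_1 + 1 \cdot e_2 + (-1)\,(e_1 + e_2) = 0
```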






        edited Dec 9 '14 at 3:09

























        answered Dec 9 '14 at 2:11









Empiricist
