Proof that any linearly independent set has at most $n$ elements (when the vector space has a basis with $n$ elements)
My teacher gave us this proof today, but I don't know if I understood it entirely:
Theorem:
Suppose $V$ is a finitely generated vector space over the reals, and let
$$B = \{v_1,\dots, v_n\}$$
be a basis for $V$.
If a set $S\subset V$ is L.I. (linearly independent), then $S$ is finite and has at most $n$ elements.
Proof:
Let $S = \{y_1,\dots,y_m\}$ with $m>n$, and suppose $S$ is L.I. Consider:
$$\alpha_1 y_1+\cdots+\alpha_m y_m = 0.\tag{1}$$ Since $B$ is a basis, $B$ spans $V$, so every
$y_i\in S$ can be written as a linear combination of the elements of $B$, in this way:
$$y_1 = \beta_{11}v_1+\cdots+\beta_{n1}v_n\\y_2 =
\beta_{12}v_1+\cdots+\beta_{n2}v_n\\\vdots\\y_m =
\beta_{1m}v_1+\cdots+\beta_{nm}v_n$$ Substituting the system above into
$(1)$, we have:
$$\alpha_1(\beta_{11}v_1+\cdots+\beta_{n1}v_n)+\cdots+\alpha_m(\beta_{1m}v_1+\cdots+\beta_{nm}v_n)
= 0\implies\\ (\beta_{11}\alpha_1+\beta_{12}\alpha_2+\cdots+\beta_{1m}\alpha_m)v_1+\cdots+(\beta_{n1}\alpha_1+\beta_{n2}\alpha_2+\cdots+\beta_{nm}\alpha_m)v_n
= 0.$$ But since $B$ is L.I., this linear combination forces all of the coefficients to be $0$. Therefore we end up with the system:
$$\begin{cases}\beta_{11}\alpha_1+\beta_{12}\alpha_2+\cdots+\beta_{1m}\alpha_m = 0\\
\vdots \\
\beta_{n1}\alpha_1+\beta_{n2}\alpha_2+\cdots+\beta_{nm}\alpha_m = 0\end{cases}$$
This homogeneous system has $n$ equations and $m>n$ unknowns, so it is consistent and underdetermined, which implies it has infinitely many solutions. In particular, one of them is nontrivial, so $S$ is L.D., contradicting the assumption that $S$ is L.I.
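To see the last step concretely, here is a toy illustration of my own (not part of the proof): with $n=2$ equations and $m=3$ unknowns, the system
$$\begin{cases}\alpha_1+2\alpha_2+3\alpha_3 = 0\\ 4\alpha_1+5\alpha_2+6\alpha_3 = 0\end{cases}$$
has the nontrivial solution $(\alpha_1,\alpha_2,\alpha_3)=(1,-2,1)$. A minimal numerical sketch of the same fact, assuming NumPy is available (the coefficient matrix here is hypothetical, chosen just for illustration):

```python
import numpy as np

# 2 equations, 3 unknowns: the coefficient matrix must have a nontrivial null space.
B = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# The last right-singular vector lies in the null space whenever the rank
# (at most 2 here) is less than the number of columns (3).
_, _, Vt = np.linalg.svd(B)
alpha = Vt[-1]

print(alpha)                      # a nonzero vector, proportional to (1, -2, 1)
print(np.allclose(B @ alpha, 0))  # True: B @ alpha = 0 with alpha != 0
```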
I think Wikipedia has a similar proof, but there it directly proves that any two bases have the same number of elements (could you tell me why a vector $v_i$ suddenly appears in the middle of that proof?). In this proof, the conclusion is that $S$ has at most $n$ elements.
Is my proof correct? Did I understand it correctly? (I didn't copy; this is my understanding of the proof.)
linear-algebra vector-spaces proof-verification
edited Dec 9 '14 at 0:20
asked Dec 8 '14 at 23:51
Guerlando OCs
Your understanding seems to be correct. In (1) you wanted to say that for $S$ linearly independent, $\alpha_1 y_1+\cdots+\alpha_m y_m = 0$ if and only if $\alpha_i=0$, $\forall i \in \{1, \dots, m\}$. Then your proof essentially shows that there exists a set of $\alpha_i$'s that are not all zero and still satisfy (1). Hence, $S$ is not L.I. Note that it is important for the argument that the $\Rightarrow$ in the middle of your proof is actually an equivalence $\Leftrightarrow$.
– megas, Dec 9 '14 at 0:12
1 Answer
There seems to be nothing wrong with your understanding. What probably confused you is that the proof on Wikipedia is a generalization of what you learned in class: your dimension theorem is a version only for finite-dimensional vector spaces, while the one on Wikipedia is the full version for both finite- and infinite-dimensional vector spaces.
In general, in a vector space $V$, the span of a set $S \subseteq V$ is defined as the set of all finite linear combinations of elements of $S$. A set $S$ is linearly independent if, whenever a finite linear combination of elements of $S$ equals the zero vector, all the coefficients are zero. The standard way of representing a finite combination is by $\sum_{i \in I}$ where $|I| < \infty$, as in the proof from Wikipedia.
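For a concrete instance (an illustration of mine, not taken from Wikipedia's proof): in the space $\mathbb{R}[x]$ of real polynomials, the infinite set $S = \{1, x, x^2, \dots\}$ is linearly independent, yet every element of its span is still a finite combination:
$$p(x) = \sum_{i \in I} \alpha_i x^i, \qquad I \subset \mathbb{N},\ |I| < \infty,$$
which is exactly why the general proof indexes its sums over finite sets $I$.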
edited Dec 9 '14 at 3:09
answered Dec 9 '14 at 2:11
Empiricist