Least Squares solution for a symmetric singular matrix
I want to solve this system by the least squares method:
$$\begin{pmatrix}1 & 2 & 3\\ 2 & 3 & 4 \\ 3 & 4 & 5 \end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix} =\begin{pmatrix}1\\5\\-2\end{pmatrix}$$
This symmetric matrix is singular, with one eigenvalue $\lambda_1 = 0$, so $A^t A$ is also singular, and for this reason I cannot use the normal-equation formula $\hat x = (A^t A)^{-1} A^t b$.
So I performed Gauss-Jordan elimination on the augmented matrix to obtain
$$\begin{pmatrix}1 & 2 & 3\\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix} =\begin{pmatrix}1\\3\\-1\end{pmatrix}$$
Finally I solved the $2\times 2$ system
$$\begin{pmatrix}1 & 2\\ 0 & 1\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix} =\begin{pmatrix}1\\3\end{pmatrix},$$
taking into account that the best $\hat b$ is $\begin{pmatrix}1\\3\\0\end{pmatrix}$.
The solution is then $\hat x = \begin{pmatrix}-5\\3\\0\end{pmatrix}$.
Is this approach correct?

EDIT

Based on the book "Linear Algebra and Its Applications" by David Lay, I also include the least squares method he proposes: $(A^tA)\hat x = A^t b$.
$$A^t b =\begin{pmatrix}5\\9\\13\end{pmatrix},\quad A^tA = \begin{pmatrix}14 & 20 & 26 \\ 20 & 29 & 38 \\ 26 & 38 & 50\end{pmatrix}$$
The reduced echelon form of the augmented matrix is
$$\begin{pmatrix}14 & 20 & 26 & 5 \\ 20 & 29 & 38 & 9 \\ 26 & 38 & 50 & 13 \end{pmatrix} \sim \begin{pmatrix}1 & 0 & -1 & -\frac{35}{6} \\ 0 & 1 & 2 & \frac{13}{3} \\ 0 & 0 & 0 & 0 \end{pmatrix} \Rightarrow \hat x = \begin{pmatrix}-\frac{35}{6} \\ \frac{13}{3} \\ 0 \end{pmatrix}$$
for the free-variable choice $z=\alpha$, $\alpha=0$.
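A quick numerical check of the question's setup (not part of the original post): NumPy's `lstsq` uses an SVD internally, so it handles the rank-deficient $A$ directly and returns the minimum-norm least squares solution.

```python
# Sketch: least squares for the singular symmetric matrix from the question.
# np.linalg.lstsq handles rank-deficient A via the SVD and returns the
# minimum-norm least squares solution.
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 3., 4.],
              [3., 4., 5.]])
b = np.array([1., 5., -2.])

# Minimum-norm least squares solution of A x ~ b
x_hat, residuals, rank, sing_vals = np.linalg.lstsq(A, b, rcond=None)
print(rank)   # 2: the matrix is singular
print(x_hat)  # approximately [-41/12, -1/2, 29/12]
```

Note that this is not the same vector as the one obtained from the row-reduced system, which is the point several answers below make: row operations change the least squares problem.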
asked Jan 5 at 13:15 by Guido, edited Jan 8 at 6:11
Comment – John Doe (Jan 5 at 13:48): The approach is the correct one to take, you just made some mistakes in implementing it.
5 Answers
Your solution is incorrect for the following reason. When you perform the Gauss-Jordan elimination, you transform the original system
$$\tag{1} Ax=b$$
into another,
$$\tag{2} SAx=Sb.$$
But the least squares solutions of (1) and (2) do not coincide in general.
Indeed, the least squares solution of (1) is $A^{+}b$, while the least squares solution of (2) is $(SA)^{+}Sb$. If $A$ is invertible, then
$$(SA)^{+}S=(SA)^{-1}S=A^{-1}=A^{+}$$
and everything is OK, but in the general case $(SA)^{+}S\ne A^{+}$.
In your case, in particular,
$$
S=\left(\begin{array}{rrr}
1 & 0 & 0 \\
2 & -1 & 0 \\
1 & -2 & 1
\end{array}\right),\quad
(SA)^{+}S=\left(\begin{array}{rrr}
-11/6 & 4/3 & 0 \\
-1/3 & 1/3 & 0 \\
7/6 & -2/3 & 0
\end{array}\right),
$$
$$
A^{+}=\left(\begin{array}{rrr}
-13/12 & -1/6 & 3/4 \\
-1/6 & 0 & 1/6 \\
3/4 & 1/6 & -5/12
\end{array}\right).$$
You can calculate the pseudoinverse matrix by using the rank factorization
$$
A=BC,\quad B=\left(\begin{array}{rr}
1 & 3\\
2 & 4\\
3 & 5
\end{array}\right),\quad
C=\left(\begin{array}{rrr}
1 & 1/2 & 0\\
0 & 1/2 & 1
\end{array}\right)
$$
(this decomposition comes from the fact that the second column of $A$ is the arithmetic mean of the remaining columns). It remains only to calculate the pseudoinverse matrix
$$
A^{+}=C^{+}B^{+}=C^T(CC^T)^{-1}(B^TB)^{-1}B^T,
$$
and the least squares solution is $A^{+}b$.
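The rank factorization and the pseudoinverse formula above can be verified numerically; this sketch (not part of the original answer) checks both against NumPy's SVD-based `pinv`.

```python
# Sketch: verify the rank factorization A = B C and the pseudoinverse formula
# A+ = C^T (C C^T)^{-1} (B^T B)^{-1} B^T, then compare with np.linalg.pinv.
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 3., 4.],
              [3., 4., 5.]])
B = np.array([[1., 3.],
              [2., 4.],
              [3., 5.]])
C = np.array([[1., 0.5, 0.],
              [0., 0.5, 1.]])

assert np.allclose(B @ C, A)  # the rank factorization holds

# C C^T and B^T B are small full-rank matrices, so plain inverses are safe here
A_pinv = C.T @ np.linalg.inv(C @ C.T) @ np.linalg.inv(B.T @ B) @ B.T
assert np.allclose(A_pinv, np.linalg.pinv(A))  # matches the SVD pseudoinverse

b = np.array([1., 5., -2.])
print(A_pinv @ b)  # the least squares solution A+ b, approximately [-41/12, -1/2, 29/12]
```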
answered Jan 5 at 18:28 by AVK, edited Jan 5 at 18:58
The least squares solution is given by
$$ \hat x = A^{+}\, b,$$
where $A^{+}$ is the pseudoinverse of $A$.
As $A$ is not full rank, the pseudoinverse cannot be computed with the simple normal-equation formula you mentioned. However, it can still be computed numerically, for example via the SVD. Doing so gives
$$ \hat x = \begin{pmatrix}-41/12\\-1/2\\29/12\end{pmatrix},$$
with fitted value
$$ A\, \hat x = \begin{pmatrix}17/6\\4/3\\-1/6\end{pmatrix}.$$
Your method does not seem to work. I can only give my interpretation of what happens: the issue is your choice of the "best" $\hat b$. The vector you considered is not directly related to the actual $b$, but to something obtained after row manipulations of the system matrix, so it is difficult to relate it to a choice of a "best" $\hat b$.
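The SVD route this answer suggests can be written out explicitly; the sketch below (my addition, not the answer's code) inverts only the nonzero singular values and reproduces the quoted $\hat x$ and $A\hat x$.

```python
# Sketch: minimum-norm least squares solution via an explicit SVD.
# Only singular values above a tolerance are inverted; the zero singular
# value of the rank-2 matrix is dropped, which is exactly the pseudoinverse.
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 3., 4.],
              [3., 4., 5.]])
b = np.array([1., 5., -2.])

U, s, Vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s.max()
s_inv = np.array([1.0 / si if si > tol else 0.0 for si in s])

x_hat = Vt.T @ (s_inv * (U.T @ b))  # A+ b

print(x_hat)      # approximately [-41/12, -1/2, 29/12]
print(A @ x_hat)  # approximately [ 17/6,   4/3, -1/6 ]
```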
answered Jan 5 at 16:56 by Damien
Since the matrix has the eigenvector $\begin{pmatrix}1&-2&1\end{pmatrix}^t$ with eigenvalue $0$, one has
$$\begin{pmatrix}1 & 2 & 3\\ 2 & 3 & 4 \\ 3 & 4 & 5 \end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix} =\begin{pmatrix}1 & 2 & 3\\ 2 & 3 & 4 \\ 3 & 4 & 5 \end{pmatrix}\begin{pmatrix}x+t\\y-2t\\z+t\end{pmatrix}$$
for all $t$, so there is a least squares solution with $z=0$, which makes it a least squares solution for
$$\begin{pmatrix}1 & 2 & 3\\ 2 & 3 & 4 \\ 3 & 4 & 5 \end{pmatrix}\begin{pmatrix}x\\y\\0\end{pmatrix} \approx\begin{pmatrix}1\\5\\-2\end{pmatrix} \text{ or, equivalently, } \begin{pmatrix}1 & 2 \\ 2 & 3 \\ 3 & 4 \end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix} \approx\begin{pmatrix}1\\5\\-2\end{pmatrix}.$$
That makes it a regular least squares problem with solution $\begin{pmatrix}x&y\end{pmatrix}^t=\begin{pmatrix}-35/6&13/3\end{pmatrix}^t$, so the solutions of the original problem are
$$\begin{pmatrix}-\frac{35}{6}+t\\ \frac{13}{3}-2t\\ t\end{pmatrix}.$$
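The reduced full-rank $3\times 2$ problem in this answer can be solved with the ordinary normal equations, and any $t$ in the recovered family gives the same residual. A sketch (my addition):

```python
# Sketch: solve the full-rank 3x2 reduced problem from the answer, then check
# that shifting along the null direction (1, -2, 1) leaves the residual unchanged.
import numpy as np

A2 = np.array([[1., 2.],
               [2., 3.],
               [3., 4.]])
b = np.array([1., 5., -2.])

# Normal equations are safe here because A2 has full column rank
xy = np.linalg.solve(A2.T @ A2, A2.T @ b)
print(xy)  # approximately [-35/6, 13/3]

A = np.array([[1., 2., 3.], [2., 3., 4.], [3., 4., 5.]])
x0 = np.array([xy[0], xy[1], 0.])
for t in (0.0, 1.0, -2.5):
    x_t = x0 + t * np.array([1., -2., 1.])  # null-space direction
    # residual norm is the same for every t
    assert np.isclose(np.linalg.norm(A @ x_t - b), np.linalg.norm(A @ x0 - b))
```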
answered Jan 6 at 14:53 by random
I think you did the Gaussian elimination wrong.
$$\begin{pmatrix}1 & 2 & 3\\ 2 & 3 & 4 \\ 3 & 4 & 5 \end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix} =\begin{pmatrix}1\\5\\-2\end{pmatrix}$$
becomes
$$\begin{pmatrix}1&2&3\\0&-1&-2\\0&-2&-4\end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}1\\3\\-5\end{pmatrix},$$
which becomes
$$\begin{pmatrix}1&2&3\\0&-1&-2\\0&0&0\end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}1\\3\\-11\end{pmatrix}.$$
Now notice that the final row says $0=-11$. That is a contradiction, hence there are no solutions to this equation.
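The inconsistency this answer derives by elimination can also be seen from a rank test: the augmented matrix $[A\,|\,b]$ has higher rank than $A$, so no exact solution exists. A sketch (my addition):

```python
# Sketch: rank test for consistency of A x = b.
# rank([A | b]) > rank(A) means b is outside the column space of A,
# i.e. the exact system has no solution and only least squares applies.
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 3., 4.],
              [3., 4., 5.]])
b = np.array([1., 5., -2.])

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
print(rank_A, rank_Ab)  # 2 3
```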
answered Jan 5 at 13:44 by John Doe
Note that $$\begin{pmatrix}3&4&5\end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix}=-2$$ and $$\Big[2\cdot\begin{pmatrix}2&3&4\end{pmatrix}-\begin{pmatrix}1&2&3\end{pmatrix}\Big]\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}3&4&5\end{pmatrix}\begin{pmatrix}x\\y\\z\end{pmatrix}=2\times 5-1=9.$$ Since this leads to $-2=9$, the system is infeasible: it has no exact solution.
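The row identity used here is easy to confirm numerically; a one-line check (my addition):

```python
# Sketch: check the row identity 2*(row 2) - (row 1) == (row 3) of A,
# while the corresponding combination of right-hand sides disagrees.
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 3., 4.],
              [3., 4., 5.]])
b = np.array([1., 5., -2.])

assert np.allclose(2 * A[1] - A[0], A[2])  # left-hand sides coincide
print(2 * b[1] - b[0], b[2])  # 9.0 vs -2.0: contradiction, no exact solution
```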
answered Jan 5 at 17:15 by Mostafa Ayaz
Thanks for contributing an answer to Mathematics Stack Exchange!