Is it possible to use the deflation algorithm to compute the eigenvalues of a large sparse matrix?
I am trying to compute the eigenvalues of a large sparse matrix (about 10% of the entries are nonzero). The matrix is real valued, but since it is accumulated by a stochastic process it is not exactly symmetric. I used power iteration to compute the largest eigenvalue, and the method worked fine. Unfortunately, applying the standard deflation methods to compute the remaining eigenvalues would spoil the sparsity.
I tried exploiting the fact that the eigenvectors of a real symmetric matrix form a basis. In this basis every vector can be expressed as
${\bf x} = a_1{\bf v}_1 + a_2{\bf v}_2 + \dots + a_n{\bf v}_n$
One can eliminate the eigenvalues one by one by subtracting the component along the corresponding eigenvector.
${\bf x}_{n+1} = {\bf A}{\bf x}_{n}$
${\bf x}_{n+1} = {\bf x}_{n+1} - a_1{\bf v}_1$
where $a_1 = \frac{\langle {\bf v}_1, {\bf x}_{n+1} \rangle}{\lVert {\bf x}_{n+1} \rVert}$, and ${\bf v}_1$ is the eigenvector corresponding to the largest eigenvalue, as computed by power iteration.
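The iteration described above can be sketched as a small, self-contained Python example (the function and variable names are illustrative, and ${\bf v}_1$ is assumed to be normalized):

```python
import numpy as np

def deflated_power_iteration(A, v1, num_iters=5000, tol=1e-12):
    """Power iteration that projects out the known, normalized
    eigenvector v1 at every step, so the iterate converges to the
    eigenvector of the next-largest eigenvalue (symmetric case)."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal(A.shape[0])
    x -= (v1 @ x) * v1                 # start orthogonal to v1
    x /= np.linalg.norm(x)
    lam = 0.0
    for _ in range(num_iters):
        y = A @ x
        y -= (v1 @ y) * v1             # deflation step: subtract a1 * v1
        new_lam = x @ y                # Rayleigh-quotient estimate
        x = y / np.linalg.norm(y)
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return lam, x
```

On a small symmetric test matrix this reproduces the second-largest eigenvalue found by a dense solver; for the not-quite-symmetric case the projection argument no longer strictly holds, which is exactly what is being asked.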
The problem is that my matrix is not exactly symmetric, so I presume this strategy is wrong: for an asymmetric matrix the eigenvalues can be complex, and some eigenvalues may have multiplicity greater than one.
I looked at other possibilities, for instance
${\bf A} - {\bf x}_1{\bf u}_1^T$
where different choices of ${\bf u}_1$ can be made, e.g. the left eigenvector of ${\bf A}$, or ${\bf u}_1 = \lambda_1{\bf x}_1$, etc.
Unfortunately the resulting matrix is no longer sparse.
I would appreciate ideas on how to apply deflation to compute the spectrum of the problem outlined above.
Thank you in advance,
Alex
matrices eigenvalues-eigenvectors
edited Aug 6 '14 at 9:55
Alexander Cska
asked Jul 30 '14 at 15:46
1 Answer
Well, the answer to the above question is yes. The trouble I was having was due to a coding bug. I tried the methodology on a 45000 × 45000 sparse matrix with about 11% nonzero entries. The first eigenvalue coincided exactly with the value given by MATLAB's eigs() when printed in double precision, and the second eigenvalue agreed to 9 digits after the decimal point. So it seems that although my matrix is not symmetric positive definite, the above algorithm worked.
If you have other ideas, write me a comment.
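Regarding the sparsity concern with the rank-one update from the question: the deflated matrix ${\bf A} - {\bf x}_1{\bf u}_1^T$ never has to be formed explicitly. It can be applied matrix-free, so every product still uses the sparse ${\bf A}$. A sketch using SciPy's LinearOperator (names are illustrative):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator

def deflated_operator(A, x1, u1):
    """Represent A - x1 u1^T without densifying A: each matvec costs
    one sparse product plus one dot product and one vector update."""
    def matvec(v):
        return A @ v - x1 * (u1 @ v)
    n = A.shape[0]
    return LinearOperator((n, n), matvec=matvec, dtype=A.dtype)
```

The resulting operator can then be handed to an iterative eigensolver such as scipy.sparse.linalg.eigs in place of a dense deflated matrix.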
answered Aug 6 '14 at 11:35
Alexander Cska