Wrong eigenvalues from a sparse matrix: eigenvalues are nonreal











I notice in the following example that wrong, complex eigenvalues are returned when computed from a Hermitian sparse matrix, which should by no means have non-real eigenvalues. However, the result is correct if we

  • calculate from the corresponding ordinary (dense) matrix, Normal[M], instead.

I found many cases with this behavior. In this construction, the eigenvalues become complex once n > 24.

I construct the big matrix from 2×2 building blocks A and B. The code below is just a concise way to make a big block-tridiagonal matrix of the form
$\begin{bmatrix}
B & A[1] & 0 & 0 \\
A[-1] & B & A[1] & 0 \\
0 & A[-1] & B & A[1] \\
0 & 0 & A[-1] & B
\end{bmatrix}$



n = 25; Nless = 10;
(* assemble a matrix from the 2x2 blocks in tt, placed along an off-diagonal band *)
BlockDiag[tt_, offset_] :=
  DiagonalMatrix[Hold /@ tt, offset] // ReleaseHold // ArrayFlatten;
A[pm_] := {{-1, pm I}, {pm I, 1}};
B = (2.0 - Sqrt[2]) {{1, 0}, {0, -1}};
M = SparseArray[
   BlockDiag[Table[B, {j, 1, n}], 0] +
    BlockDiag[Table[A[1], n - 1], 2] +
    BlockDiag[Table[A[-1], n - 1], -2]];
(* the Nless eigenvalues of smallest magnitude, reversed into ascending magnitude *)
Reverse[Eigenvalues[M, -Nless]]


The result is



{-8.50551*10^-14, 8.50983*10^-14,
 1.42089 + 0.0000210718 I, -1.42139 + 0.0000878787 I,
 -1.43983 - 0.0000374648 I, 1.44086 - 0.0000496205 I,
 1.47277 + 0.000192942 I, -1.47298 - 0.0000263272 I,
 -1.516 - 0.000610613 I, 1.5161 - 0.0000292111 I}


The imaginary parts are not negligibly small.
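For completeness, here is a quick sanity check (not part of the original code) that M is indeed Hermitian and that the corresponding dense matrix gives real eigenvalues:

HermitianMatrixQ[M]                        (* expected: True *)
Max@Abs@Im@Eigenvalues[Normal[M], -Nless]  (* expected: 0. for the dense matrix *)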
Given other strange behaviors of sparse matrices, is it not quite safe to compute the eigensystem of a sparse matrix with the default settings in Mathematica?
bugs linear-algebra sparse-arrays eigenvalues






asked yesterday by xiaohuamao, edited 10 hours ago by Henrik Schumacher

  • You should report this to support@wolfram.com.
    – user21
    16 hours ago

1 Answer
Very good observation. Indeed, this issue is really frustrating. To isolate the problem: it seems that Arnoldi's method is to blame:



Max@Abs@Im@Eigenvalues[M, -Nless]
Max@Abs@Im@Eigenvalues[M, -Nless, Method -> "Arnoldi"]
Max@Abs@Im@Eigenvalues[M, -Nless, Method -> "Direct"]
Max@Abs@Im@Eigenvalues[M, -Nless, Method -> "FEAST"]



0.000610613



0.000610613



0



0




IIRC, Arnoldi's method may have problems when eigenvalues cluster around 0.
Sometimes, introducing a shift into Arnoldi's method can help. Usually one shifts by a positive real number in order to make the matrix positive-definite. However, this also changes the ordering of the eigenvalues if the matrix is Hermitian but indefinite (an issue that has also been observed here several times). In my desperation, I tried shifting by I, and in this case the imaginary parts are much smaller:



Max@Abs@Im@Eigenvalues[M, -Nless, Method -> {"Arnoldi", "Shift" -> I}]
Eigenvalues[M, -Nless, Method -> {"Arnoldi", "Shift" -> I}]



1.11022*10^-15



{1.51607, -1.51607, -1.473, 1.473, -1.44084, 1.44084, -1.42095, 1.42095, -8.51975*10^-14 + 8.88178*10^-16 I,
8.49552*10^-14 - 1.11022*10^-15 I}




To my surprise, the ordering of the eigenvalues seems to be more or less consistent with the output of the other methods. (M has pairs of eigenvalues of the same magnitude but with opposite signs. Since each numerical method may introduce small errors, this can affect the default ordering, which is by magnitude.)



No guarantees for the correctness of the results, though. I think a bug report is a good idea anyway.



Edit



While I first wondered why shifting by I works, it then occurred to me that the function $x \mapsto |x + i|$ is monotonically increasing on the positive real axis:



ParametricPlot[{Abs[x], Abs[x + I]}, {x, -4, 4}, 
AxesLabel -> {"Abs[x]", "Abs[x+I]"}]


[ParametricPlot output: Abs[x + I] plotted against Abs[x]]
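To spell out the monotonicity step for real $x$: $|x + i| = \sqrt{x^2 + 1}$, which is strictly increasing in $|x|$, so for real eigenvalues $\lambda$, sorting by $|\lambda + i|$ agrees with sorting by $|\lambda|$.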



So for a Hermitian matrix M, the corresponding eigenvalues of M and M + I come in a consistent ordering (up to numerical errors). And of course, this shift guarantees that 0 is not an eigenvalue of M + I. So now I am more confident in suggesting this hack.
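If one wants to use this hack routinely, a small wrapper along the following lines could also discard the leftover spurious imaginary parts. This is just a sketch; the name arnoldiEigs and the Chop tolerance are arbitrary illustrative choices, not part of the method itself:

(* wrap the "Shift" -> I hack and chop away tiny imaginary parts *)
arnoldiEigs[m_?HermitianMatrixQ, k_Integer] :=
  Chop[Eigenvalues[m, k, Method -> {"Arnoldi", "Shift" -> I}], 10^-10]

arnoldiEigs[M, -Nless]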



Edit 2



Another curiosity: shifting by 0 or None also seems to force the implementation of Arnoldi's method to branch into a more stable subroutine:



Max@Abs@Im@Eigenvalues[M, -Nless, Method -> {"Arnoldi", "Shift" -> 0}]
Max@Abs@Im@Eigenvalues[M, -Nless, Method -> {"Arnoldi", "Shift" -> None}]



9.76729*10^-6



4.84089*10^-17




I would not recommend using this for larger matrices, though. Here is a significantly faster way to build larger versions of your matrix:



n = 250;
Nless = 10;
A[pm_] := N[{{-1, pm I}, {pm I, 1}}];
B = (2.0 - Sqrt[2]) {{1, 0}, {0, -1}};
M = Plus[
SparseArray[
{
Band[{1, 1}] -> Table[B, {j, 1, n}],
Band[{1, 3}] -> Table[A[1], n - 1],
Band[{3, 1}] -> Table[A[-1], n - 1]
},
{n 2, n 2}], 0.
];


Running Arnoldi's method with "Shift" -> 0 for this bigger matrix returns an error:



Eigenvalues[M, -Nless, 
Method -> {"Arnoldi", "Shift" -> 0, MaxIterations -> 10000}]


But it still seems to produce plausible results with "Shift" -> None without any complaints.
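As a rough additional check (not shown above), one can verify that this larger matrix is still Hermitian and inspect how large the imaginary parts from the "Shift" -> None run actually are:

(* quick checks for the n = 250 matrix M defined above *)
HermitianMatrixQ[M]
Max@Abs@Im@Eigenvalues[M, -Nless,
 Method -> {"Arnoldi", "Shift" -> None, MaxIterations -> 10000}]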









answered yesterday by Henrik Schumacher, edited 11 hours ago

  • @ΑλέξανδροςΖεγγ I did not mean to bite you away. Your example with SetPrecision is great because it shows that there is indeed a precision issue with Arnoldi's method.
    – Henrik Schumacher
    yesterday






  • Never mind, it is more important to figure out what really is wrong :).
    – Αλέξανδρος Ζεγγ
    yesterday