Limit of matrix inverse: $\lim_{\lambda \to \infty} (A + \lambda I)^{-1} = \mathbf{0}$?
Let matrix $A \in \mathbb{R}^{n\times n}$ be positive semidefinite.
Is it then true that
$$
(A + \lambda I)^{-1} \to \mathbf{0} \quad (\lambda \to \infty) \quad ?
$$
If so, is the fact that $A$ is positive semidefinite irrelevant here?
My thoughts so far:
$$
(A + \lambda I)^{-1} = \Big(\lambda\Big( \frac{1}{\lambda}A + I \Big) \Big)^{-1} = \frac{1}{\lambda} \Big(\frac{1}{\lambda}A + I \Big)^{-1}
$$
I think that $\lim_{\lambda \to \infty} \Big( \frac{1}{\lambda}A + I \Big)^{-1} = I^{-1} = I$, but I don't know if I can just pass the $\lim$ through the inverse $(\cdot)^{-1}$ like that. If this is the case, then
$$
\lim_{\lambda \to \infty} (A + \lambda I)^{-1} = \lim_{\lambda \to \infty} (1/\lambda) \lim_{\lambda \to \infty} (A/\lambda + I)^{-1} = 0 \cdot I = \mathbf{0},
$$
as I'd like to show.
Where this comes from:
I'm trying to justify a claim made in an econometrics lecture. Namely,
$$
\textrm{Var}(\hat{\beta}^{\textrm{ridge}}) = \sigma^2 (X^{T}X + \lambda I)^{-1} X^T X \big[(X^T X + \lambda I)^{-1}\big]^T \to \mathbf{0} \quad (\lambda \to \infty),
$$
where $\hat{\beta}^{\textrm{ridge}}$ is the ridge estimator in a linear model, $X \in \mathbb{R}^{n \times p}$ is the design matrix, and the equality is known. The limit, however, wasn't justified.
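As a numerical sanity check (not part of the original post), the claimed ridge-variance limit can be observed directly with NumPy; the design matrix below is random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 5
X = rng.normal(size=(n, p))        # illustrative random design matrix
A = X.T @ X                        # positive semidefinite Gram matrix X^T X
sigma2 = 1.0

def ridge_var(lam):
    # Var(beta_ridge) = sigma^2 (A + lam I)^{-1} A [(A + lam I)^{-1}]^T
    M = np.linalg.inv(A + lam * np.eye(p))
    return sigma2 * M @ A @ M.T

# The spectral norm of the variance decays roughly like 1/lam^2.
for lam in [1e2, 1e4, 1e6]:
    print(lam, np.linalg.norm(ridge_var(lam), 2))
```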
linear-algebra matrices limits
asked Jan 24 at 17:22 – zxmkn
$A$ can be any matrix above. The point is, the inverse of a matrix is a continuous function in a neighbourhood of the identity; therefore, since $\frac{1}{\lambda}A + I$ is eventually invertible, we may pass the limit inside the inverse by continuity, giving the desired result by the continuity of scalar multiplication.
– астон вілла олоф мэллбэрг
Jan 24 at 17:27
If $\|\cdot\|$ is a matrix norm, then for $|\lambda| > \|A\|$ the Neumann series guarantees that $A+\lambda I$ is invertible with $$(A+\lambda I)^{-1} = \sum_{n=0}^{\infty} \frac{(-1)^n}{\lambda^{n+1}}A^n,$$ which converges uniformly on the region $|\lambda| \geq \|A\|+\delta$ for any given $\delta > 0$. By the Weierstrass M-test, the limit as $\lambda\to\infty$ can be evaluated term-wise, proving the desired claim.
– Sangchul Lee
Jan 24 at 17:34
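The Neumann-series comment above can be verified numerically; here is a minimal sketch, assuming a random $4\times 4$ matrix and truncating the series at 60 terms:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))              # any matrix works here
lam = 2.0 * np.linalg.norm(A, 2) + 1.0   # ensure lam > ||A|| so the series converges

# Partial sums of (A + lam I)^{-1} = sum_n (-1)^n A^n / lam^(n+1)
S = np.zeros_like(A)
power = np.eye(4)                        # holds A^n
for n in range(60):
    S += (-1) ** n * power / lam ** (n + 1)
    power = power @ A

exact = np.linalg.inv(A + lam * np.eye(4))
print(np.linalg.norm(S - exact))         # tiny truncation error
```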
@астонвіллаолофмэллбэрг Great! That completes my line of reasoning. For others looking on, here's why there is a neighborhood of $I$ in $M_n(\mathbb{R})$ in which $(\cdot)^{-1}$ is continuous: $(\cdot)^{-1} : GL_n(\mathbb{R}) \to GL_n(\mathbb{R})$ is continuous and $GL_n(\mathbb{R})$ is open in $M_n(\mathbb{R})$ (see: math.stackexchange.com/a/810675/369800). [To understand the proof just linked: the determinant is continuous (see: math.stackexchange.com/a/121834/369800) and the adjugate is continuous (see: math.stackexchange.com/a/2031642/369800)]
– zxmkn
Jan 24 at 19:07
Recall that the inverse matrix is the adjugate matrix divided by the determinant. Thus a "singularity" of the inversion only happens when the determinant vanishes.
– Alexey
Jan 26 at 20:45
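The adjugate/determinant formula from the comment above can be illustrated with a small sketch; `adjugate` below is a hypothetical helper built from cofactors, not a NumPy built-in:

```python
import numpy as np
from itertools import product

def adjugate(M):
    # Adjugate (classical adjoint): transpose of the cofactor matrix
    n = M.shape[0]
    C = np.empty_like(M, dtype=float)
    for i, j in product(range(n), range(n)):
        minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
        C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

M = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 4.0]])
# inverse = adjugate / determinant, defined whenever det != 0
print(np.allclose(np.linalg.inv(M), adjugate(M) / np.linalg.det(M)))  # True
```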
2 Answers
The eigenvalues of $A+\lambda I$ are of the form $\lambda+\mu$, where $\mu$ is an eigenvalue of $A$ (necessarily real). Then, for $\lambda$ sufficiently large, the eigenvalues of $A+\lambda I$ are all $>1$.
Note that a matrix $S$ that diagonalizes $A$ also diagonalizes $A+\lambda I$: write $A=SDS^{-1}$, with $D$ diagonal.
Then $(A+\lambda I)^{-1}$ is diagonalizable with eigenvalues in $(0,1)$, and therefore
$$
\lim_{\lambda\to\infty}(A+\lambda I)^{-1}=
S\Bigl(\,\lim_{\lambda\to\infty}(D+\lambda I)^{-1}\Bigr)S^{-1}=\mathbf{0}.
$$
It is not necessary that $A$ be positive semidefinite: any symmetric matrix will do.
answered Jan 24 at 18:43 – egreg
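The eigenvalue argument above can be checked numerically; this sketch uses a random symmetric matrix (an illustrative choice, not from the original answer):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.normal(size=(5, 5))
A = (B + B.T) / 2                  # random symmetric matrix, not necessarily PSD
mu = np.linalg.eigvalsh(A)         # real eigenvalues of A, ascending

lam = 1e3                          # large enough that lam + mu > 0 for every mu
inv_eigs = np.linalg.eigvalsh(np.linalg.inv(A + lam * np.eye(5)))

# Eigenvalues of (A + lam I)^{-1} are exactly 1/(lam + mu)
ok = np.allclose(np.sort(inv_eigs), np.sort(1.0 / (lam + mu)))
print(ok)  # True
```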
The answer I liked the best was left in the comments by астон вілла олоф мэллбэрг, since it shows that $A$ does not need any special structure. Here I'm pulling his answer down and including a bit more detail.
We have
$$
(A + \lambda I)^{-1} = \Big(\lambda\Big( \frac{1}{\lambda}A + I \Big) \Big)^{-1} = \frac{1}{\lambda} \Big(\frac{1}{\lambda}A + I \Big)^{-1},
$$
and we claim that $\Big(\frac{1}{\lambda}A + I \Big)^{-1} \to I^{-1} = I \quad (\lambda \to \infty)$.
Therefore,
$$
(A + \lambda I)^{-1} = \frac{1}{\lambda} \Big(\frac{1}{\lambda}A + I \Big)^{-1} \to 0 \cdot I = \mathbf{0} \quad (\lambda \to \infty),
$$
which was the desired result.
We complete the proof by showing the claim. Since $GL_n(\mathbb{R})$ is open in $M_n(\mathbb{R})$, we can find some $\epsilon > 0$ such that the open ball $B(I, \epsilon) \subseteq GL_n(\mathbb{R})$. Hence, for sufficiently large $\lambda$, we know that $(A/\lambda + I) \in B(I, \epsilon) \subseteq GL_n(\mathbb{R})$. Also knowing that $(\cdot)^{-1} : GL_n \to GL_n$ is continuous, we have
$$
\lim_{\lambda \to \infty}\Big(\frac{1}{\lambda}A + I \Big)^{-1} = \Big(\lim_{\lambda \to \infty} \frac{1}{\lambda}A + I \Big)^{-1} = I^{-1} = I,
$$
which completes the proof.
To understand the linked proof of the continuity of $(\cdot)^{-1}$, see here for justification that the determinant operator is continuous and here for justification that the adjugate operator is continuous.
answered Jan 26 at 20:35 – zxmkn
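The key claim $(A/\lambda + I)^{-1} \to I$ can also be observed numerically; a minimal sketch with an arbitrary random matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))        # arbitrary matrix, no structure assumed

# (A/lam + I)^{-1} -> I as lam -> infinity, by continuity of inversion on GL_n
devs = []
for lam in [1e1, 1e3, 1e5]:
    M = np.linalg.inv(A / lam + np.eye(4))
    devs.append(np.linalg.norm(M - np.eye(4)))
    print(lam, devs[-1])
```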