Why $\inf \{ \| X \|_2 \mid X \in S^n \} = \inf \{ t \mid -tI \preceq X \preceq tI,\ X \in S^n \}$?


























Question 1: how can we convert the following problem
$$\min \quad \| X \|_2$$
into a semidefinite program (SDP):
$$\min \quad t$$
$$\text{s.t.} \quad -tI \preceq X \preceq tI,\quad t \ge 0$$
where $X$ is a symmetric $n \times n$ matrix?



Question 2: in fact, I'm not even sure whether this is an SDP. I tried to construct an optimization variable
$$Y={\begin{bmatrix}S_1 \\ & S_2 \\ & & t\end{bmatrix}}_{(2n+1)\times(2n+1)},$$
where $S_1=tI-X$ and $S_2=X+tI$.

Notice that $S_1,S_2 \succeq 0$ by the constraints above. Then $Y \succeq 0$, and the problem can be written as
$$\min \quad \operatorname{tr}(CY)$$
$$\text{s.t.} \quad Y \succeq 0$$

The matrix $C$ can be easily constructed. Is the way I constructed the optimization variable valid? If not, what are the formal methods?



The problem comes from an example in Boyd & Vandenberghe, Convex Optimization, page 174.
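The claimed equivalence is easy to sanity-check numerically. Below is a minimal sketch (assuming NumPy; the random matrix and the `feasible` helper are made up for illustration) verifying that $t = \|X\|_2$ satisfies $-tI \preceq X \preceq tI$ while any smaller $t$ does not:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
X = (A + A.T) / 2  # a random symmetric matrix

# For symmetric X, the spectral norm equals the largest |eigenvalue|.
eigvals = np.linalg.eigvalsh(X)
spec_norm = np.linalg.norm(X, 2)
assert np.isclose(spec_norm, np.abs(eigvals).max())

def feasible(t, X, tol=1e-9):
    """Check -tI <= X <= tI, i.e. both tI - X and X + tI are PSD."""
    I = np.eye(X.shape[0])
    return (np.linalg.eigvalsh(t * I - X).min() >= -tol
            and np.linalg.eigvalsh(X + t * I).min() >= -tol)

# t = ||X||_2 is feasible; shrinking t by 1% breaks feasibility.
assert feasible(spec_norm, X)
assert not feasible(0.99 * spec_norm, X)
```

So the infimum of feasible $t$ is exactly the spectral norm, which is what the SDP reformulation exploits.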










  • I think that $X$ is symmetric, right?
    – Royi
    Sep 13 '17 at 15:27










  • Yeah, I have pointed that out in the last line of Question 1.
    – Finley
    Sep 14 '17 at 0:28


















linear-algebra norm convex-optimization semidefinite-programming






edited Jan 3 at 20:25









A.Γ.
asked Sep 12 '17 at 8:33









Finley
2 Answers


















The Idea

The $\left\| \cdot \right\|_{2}$ matrix norm is given by:

$$ \left\| X \right\|_{2} = \sqrt{ \lambda_{\max} \left( X^{T} X \right) } $$

where $\lambda_{\max}$ is the maximum eigenvalue of $X^{T} X$.

Hence the formulation above looks at the extreme conditions where the matrices $X + tI$ and $X - tI$ become PSD / NSD.

Proof of the $L_2$ Matrix Norm

$$
\begin{align*}
\left\| A \right\|_{2} & = \max_{x \neq 0} \frac{ \left\| A x \right\|_{2} }{ \left\| x \right\|_{2} } = \max_{x \neq 0} \frac{ \sqrt{ x^{T} A^{T} A x } }{ \left\| x \right\|_{2} } \\
& = \max_{x \neq 0} \frac{ \sqrt{ x^{T} Q \Lambda Q^{T} x } }{ \left\| x \right\|_{2} } && \text{where $ A^{T} A = Q \Lambda Q^{T} $ by the spectral decomposition} \\
& = \max_{x \neq 0} \frac{ \sqrt{ \left( Q^{T} x \right)^{T} \Lambda \left( Q^{T} x \right) } }{ \left\| Q^{T} x \right\|_{2} } && \text{since $ Q $ is an orthogonal matrix} \\
& = \max_{y \neq 0} \frac{ \sqrt{ y^{T} \Lambda y } }{ \left\| y \right\|_{2} } && \text{where $ y = Q^{T} x $} \\
& = \max_{y \neq 0} \sqrt{ \frac{ \sum \lambda_{i} y_{i}^{2} }{ \sum y_{i}^{2} } } \leq \sqrt{ \frac{ \lambda_{\max} \sum y_{i}^{2} }{ \sum y_{i}^{2} } } = \sqrt{\lambda_{\max}}
\end{align*}
$$

The bound is attained by choosing $y_{j} = 1$ for $\lambda_{j} = \lambda_{\max}$ and $y_{j} = 0$ otherwise.

Note that $\lambda_{\max}$ is always nonnegative (as are all the other eigenvalues of $A^{T} A$, which is a PSD matrix). For symmetric $A$, $\sqrt{\lambda_{\max}}$ is simply the maximum of the absolute values of the eigenvalues of $A$.

The Meaning of the Operations

All that is needed is to show that adding a scaled identity matrix shifts the eigenvalues of the matrix.

Since $X$ is a symmetric matrix, by the spectral decomposition one can write:

$$ X \pm t I = Q \Lambda Q^{T} \pm t I = Q \left( \Lambda \pm t I \right) Q^{T} $$

As can be seen, adding the term $t I$ shifts every eigenvalue of $X$ by $t$.

Shifting a Set of Numbers

Given a set of numbers $\left\{ \lambda_{1}, \lambda_{2}, \ldots, \lambda_{n} \right\}$, how can one find the maximum of their absolute values?

Assuming Nonnegative Numbers

If all the numbers are nonnegative, one can ask for the minimum $t$ such that $\lambda_{j} - t \leq 0 \;\; \forall j$. This yields $t = \lambda_{\max} = \max_j \left| \lambda_{j} \right|$, which is nonnegative.

Assuming Nonpositive Numbers

If all the numbers are nonpositive, one can ask for the minimum $t$ such that $\lambda_{j} + t \geq 0 \;\; \forall j$. This yields $t = -\lambda_{\min} = \max_j \left| \lambda_{j} \right|$, which is again nonnegative.

Summary

Hence, for any set of numbers, the smallest nonnegative $t$ satisfying both conditions equals the maximum absolute value of the set. Applied to the eigenvalues of $X$, that $t$ is exactly $\left\| X \right\|_{2}$.

edited Sep 14 '17 at 6:09
answered Sep 13 '17 at 14:23
– Royi
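The eigenvalue-shifting fact above can be checked directly. A small NumPy sketch (random symmetric matrix, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
X = (A + A.T) / 2                 # symmetric, so X = Q Lambda Q^T
lam = np.linalg.eigvalsh(X)       # eigenvalues in ascending order

t = 0.7
shifted_up = np.linalg.eigvalsh(X + t * np.eye(n))
shifted_down = np.linalg.eigvalsh(X - t * np.eye(n))

# Adding +/- t*I shifts every eigenvalue by exactly +/- t
# (the eigenvectors Q are unchanged, so the order is preserved).
assert np.allclose(shifted_up, lam + t)
assert np.allclose(shifted_down, lam - t)
```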






  • Thanks! I have posted some of my immature ideas.
    – Finley
    Sep 14 '17 at 2:21










  • Could you please check the validity of my answers? :)
    – Finley
    Sep 14 '17 at 3:11



















To my understanding:
$$\|X\|_2=\max_{u \neq 0} \frac {\|Xu\|_2} {\|u\|_2}
=\max_{u \neq 0}\left(\frac {u^T X^T X u} {u^T u}\right)^{1/2}
=\max_{u \neq 0}\left(\frac {u^T X^2 u} {u^T u}\right)^{1/2}
=\max_{u \neq 0}\left(\frac {(Q^T u)^T \Lambda^2 (Q^T u)} {(Q^T u)^T (Q^T u)}\right)^{1/2}
=|\lambda|_{\max}=\max\{|\lambda_1|,|\lambda_n|\}$$

where $|\lambda|_{\max}$ denotes the maximum of the absolute values of the eigenvalues of $X$ (even when $\lambda$ is the most negative eigenvalue, $|\lambda|$ can be the largest) and
$$X=Q \Lambda Q^T=Q \begin{bmatrix} \lambda_1 \\ &\lambda_2 \\ & & \ddots \\ & & & \lambda_n\end{bmatrix} Q^T,$$
where $\lambda_1,\lambda_n$ are the maximum and the minimum eigenvalues respectively (i.e. $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$).

From Royi's answer, $X \pm tI=Q(\Lambda \pm tI)Q^T$ means that what we actually do is shift the eigenvalues by a magnitude $t$ (suppose $t \ge 0$).

Given $|\lambda|_{\max}=|\lambda_n|$, the minimum magnitude we have to shift is $t=|\lambda_n|$ so that $X+tI \succeq 0$ (this means $\lambda_n$ is negative and $|\lambda_n| \ge |\lambda_1|$, so naturally $X-tI \preceq 0$).

Given $|\lambda|_{\max}=|\lambda_1|$, the minimum magnitude we have to shift is $t=|\lambda_1|$ so that $X-tI \preceq 0$ (this means $\lambda_1$ is positive and $|\lambda_1| \ge |\lambda_n|$, so naturally $X+tI \succeq 0$).

Either way, we obtain:
$$\inf\{t \mid X+tI \succeq 0,\ X-tI \preceq 0,\ |\lambda|_{\max}=|\lambda_n|\}=|\lambda_n|$$
$$\inf\{t \mid X-tI \preceq 0,\ X+tI \succeq 0,\ |\lambda|_{\max}=|\lambda_1|\}=|\lambda_1|$$
To sum up:
$$\inf\{t \mid X+tI \succeq 0,\ X-tI \preceq 0\}=|\lambda|_{\max}=\|X\|_2$$

edited Sep 14 '17 at 2:59
answered Sep 14 '17 at 2:19
– Finley
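The two cases can be illustrated with small diagonal matrices, where the eigenvalues are visible by inspection (a NumPy sketch; `min_shift` is a hypothetical helper computing the smallest feasible $t$ from the eigenvalues):

```python
import numpy as np

def min_shift(X):
    """Smallest t >= 0 with X + tI PSD and X - tI NSD.
    X + tI >= 0  <=>  t >= -lambda_min;  X - tI <= 0  <=>  t >= lambda_max."""
    lam = np.linalg.eigvalsh(X)
    return max(lam.max(), -lam.min(), 0.0)

# Case |lambda_n| dominant: the most negative eigenvalue has the largest magnitude.
X1 = np.diag([1.0, -3.0])
assert np.isclose(min_shift(X1), 3.0)   # = |lambda_n|

# Case |lambda_1| dominant: the largest eigenvalue has the largest magnitude.
X2 = np.diag([5.0, -2.0])
assert np.isclose(min_shift(X2), 5.0)   # = |lambda_1|

# In both cases the minimal shift equals the spectral norm.
for X in (X1, X2):
    assert np.isclose(min_shift(X), np.linalg.norm(X, 2))
```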






  • It's perfectly illuminating that $X \pm tI$ means shifting eigenvalues.
    – Finley
    Sep 14 '17 at 5:58










