Optimizing linear combination of matrices and vectors














Given the function $G(X,y) = y^T X y + b^T b$, where $y$ is a vector in $\mathbb{R}^n$, $b$ is a constant vector in $\mathbb{R}^n$, and $X$ is a constant symmetric, square, invertible matrix, what value of $y$ with $\|y\| = 1$ minimizes $G$?



To minimize the function, I first thought to take the partial derivative with respect to $y$ and set it equal to zero, giving $y^T(X^T + X) = 0$. But with the restriction on the magnitude of $y$, I'm not sure how to approach the rest. Any help would be appreciated!



























Tags: linear-algebra, matrices, optimization, matrix-calculus






asked Jan 27 at 0:24 by Anthony






















2 Answers



















Let $w$ be an unconstrained vector and define $y$ in terms of it:
$$y = \frac{w}{\sqrt{w^T w}}$$
Consider the function
$$\begin{aligned}
\lambda &= y^T X y = \frac{w^T X w}{w^T w} \\
d\lambda &= \left(\frac{2Xw - 2\lambda w}{w^T w}\right)^T dw \\
\frac{\partial \lambda}{\partial w} &= \frac{2}{w^T w}\Big(Xw - \lambda w\Big)
\end{aligned}$$

Setting this gradient to zero yields the eigenvalue equation
$$Xw = \lambda w$$
So, in terms of the original variables, $y$ is simply the (normalized) eigenvector corresponding to the smallest eigenvalue of $X$, and $G = \lambda + b^T b$.
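As a concrete sanity check of this conclusion, here is a short NumPy sketch (the matrix size, random seed, sample count, and tolerance are illustrative assumptions, not part of the original answer): it builds a random symmetric $X$, takes the unit eigenvector of its smallest eigenvalue as $y$, and confirms that no random unit vector achieves a smaller value of $G$.

```python
import numpy as np

# A sketch verifying the eigenvector claim; sizes and seed are arbitrary choices.
rng = np.random.default_rng(0)
n = 5

# Random symmetric matrix X and constant vector b.
M = rng.standard_normal((n, n))
X = (M + M.T) / 2
b = rng.standard_normal(n)

# eigh returns eigenvalues of a symmetric matrix in ascending order,
# with orthonormal eigenvectors as columns.
eigvals, eigvecs = np.linalg.eigh(X)
y_star = eigvecs[:, 0]  # unit eigenvector for the smallest eigenvalue

def G(y):
    return y @ X @ y + b @ b

# Compare G(y_star) against G evaluated on many random unit vectors.
samples = rng.standard_normal((1000, n))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)
assert all(G(y_star) <= G(y) + 1e-12 for y in samples)

print("G(y*)              =", G(y_star))
print("lambda_min + b^T b =", eigvals[0] + b @ b)  # these should match
```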






answered Jan 27 at 19:12 by greg






















I take it that the problem is
$$\min_{y} \quad f(y) = y^T A y \qquad \text{s.t.} \quad \|y\|_2 = 1 \tag{1}$$

for $A$ symmetric and invertible.
If this is correct, then consider the eigendecomposition of $A$ given by the Spectral Theorem. Can you see what we might want $y$ to be? We can disregard $b^T b$ since it is independent of $y$.

As a start, writing $A = \sum_{j=1}^n \lambda_j v_j v_j^\top$, we have
$$y^{\top} A y = \sum_{i,k=1}^n y_i \left( \sum_{j=1}^n \lambda_j \, [v_j]_i \, [v_j]_k \right) y_k = \sum_{i,j,k=1}^n y_j \, y_k \, \lambda_i \, [v_i]_j \, [v_i]_k.$$

Then from the last expression we see that if we let $y_j = [v_i]_j$, that is, we let the $j$-th component of $y$ be the $j$-th component of the (unit-length) eigenvector $v_i$, then we recover $\lambda_i$. So, to minimize $f$, we can set $y$ equal to the eigenvector associated with the smallest eigenvalue.
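A minimal NumPy sketch of this Spectral Theorem argument (the test matrix and seed are my own assumptions): it rebuilds $A$ from its eigendecomposition and confirms that plugging each unit eigenvector into the quadratic form recovers the corresponding eigenvalue, so the smallest eigenvalue is attained at its eigenvector.

```python
import numpy as np

# Sketch of the eigendecomposition argument; sizes and seed are assumptions.
rng = np.random.default_rng(1)
n = 4

M = rng.standard_normal((n, n))
A = (M + M.T) / 2            # symmetric, so the Spectral Theorem applies

lam, V = np.linalg.eigh(A)   # ascending eigenvalues, orthonormal columns

# Eigendecomposition: A equals the sum of lambda_i * v_i v_i^T.
A_rebuilt = sum(lam[i] * np.outer(V[:, i], V[:, i]) for i in range(n))
assert np.allclose(A, A_rebuilt)

# Plugging y = v_i into the quadratic form recovers lambda_i exactly,
# so the unit eigenvector of the smallest eigenvalue minimizes y^T A y.
for i in range(n):
    y = V[:, i]
    assert np.isclose(y @ A @ y, lam[i])

y_min = V[:, 0]
print("f at minimizing eigenvector:", y_min @ A @ y_min, "== lambda_min:", lam[0])
```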






edited Jan 28 at 20:29, answered Jan 27 at 0:41 by jjjjjj












