How is the gradient of this function calculated?














From *Understanding Machine Learning: From Theory to Algorithms*:



How is the equation in the red box below derived?



Isn't the gradient of the objective function



$$J = \lambda w^T w + \frac{1}{m}\sum_{i=1}^m \frac{1}{2}(w^T x_i - y_i)^2$$
equal to



$$\nabla J = 2\lambda w + \frac{1}{m}\sum_{i=1}^m (w^T x_i - y_i)\, x_i$$



And $$\sum_{i=1}^m (w^T x_i - y_i)\, x_i = \sum_{i=1}^m (w^T x_i)\, x_i - \sum_{i=1}^m y_i\, x_i$$



How does the author get $A$ out of this gradient?





[Image: excerpt from the book showing the equation in the red box, which defines the matrix $A$.]










proof-explanation machine-learning

asked Jan 29 at 16:06 – Oliver G
  • Looks like $A$ is defined in Equation 9.6, which is repeated in 13.4. That looks right, pending transposes and such. – Adrian Keister, Jan 29 at 16:10










  • Why is $\sum_{i=1}^n (w^T x_i)\, x_i = w \sum_{i=1}^n (x_i x_i^T)$? I see that $(w^T x_i)\, x_i$ is a vector, but $w\, x_i x_i^T$ shouldn't be defined, as $w$ has shape $(n \times 1)$ and $x_i x_i^T$ has shape $(n \times n)$. – Oliver G, Jan 29 at 16:17












  • It's not. $w$ ends up on the RHS of the expressions with $A$, not the LHS. – Adrian Keister, Jan 29 at 16:22










  • How is $x_i x_i^T w = w^T x_i x_i$? – Oliver G, Jan 29 at 16:36


















1 Answer



















From your initial expression:



$$
J = \lambda w^T w + \frac{1}{m}\sum_{i=1}^m \frac{1}{2}(w^T x_i - y_i)^2
$$



Taking the gradient with respect to the vector $w$ yields



$$
\nabla J = \lambda\, \nabla (w^T w) + \frac{1}{m} \sum_{i=1}^m \frac{1}{2}\, \nabla \left(w^T x_i - y_i \right)^2.
$$

Let's compute the gradient of each term separately.

First term:
$$
\nabla (w^T w) = 2w.
$$

Second term (chain rule):
$$
\nabla(w^T x_i - y_i)^2 = \frac{d(w^T x_i - y_i)^2}{d(w^T x_i - y_i)}\, \nabla(w^T x_i) = 2(w^T x_i - y_i)\, x_i.
$$



Substituting back:



$$
\nabla J = 2\lambda w + \frac{1}{m} \sum_{i=1}^m (w^T x_i - y_i)\, x_i = 2\lambda w + \frac{1}{m} \left( \sum_{i=1}^m (w^T x_i)\, x_i - \sum_{i=1}^m y_i\, x_i \right).
$$
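As a sanity check, here is a minimal NumPy sketch (with hypothetical random data) that compares this analytic gradient against a central finite-difference approximation of $J$:

```python
import numpy as np

def J(w, X, y, lam):
    # X has shape (d, m): column i is x_i; y has shape (m,)
    residuals = X.T @ w - y                 # entries w^T x_i - y_i
    return lam * (w @ w) + 0.5 * np.mean(residuals ** 2)

def grad_J(w, X, y, lam):
    residuals = X.T @ w - y
    # 2*lam*w + (1/m) * sum_i (w^T x_i - y_i) x_i
    return 2 * lam * w + (X @ residuals) / X.shape[1]

rng = np.random.default_rng(0)
d, m, lam = 4, 10, 0.1                      # arbitrary sizes for the check
X, y, w = rng.normal(size=(d, m)), rng.normal(size=m), rng.normal(size=d)

# Central finite differences, one coordinate at a time
eps = 1e-6
num_grad = np.array([
    (J(w + eps * e, X, y, lam) - J(w - eps * e, X, y, lam)) / (2 * eps)
    for e in np.eye(d)
])
print(np.allclose(num_grad, grad_J(w, X, y, lam)))   # expect True
```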



Now let's analyze the summation



$$
\sum_{i=1}^m (w^T x_i)\, x_i = \sum_{i=1}^m x_i\, (x_i^T w) = \left( \sum_{i=1}^m x_i x_i^T \right) w = X X^T w = A w,
$$

where $X$ is the $d \times m$ matrix (with $d$ the dimension of each $x_i$) whose columns are the data points,
$$
X = \left[ x_1, \ldots, x_m \right].
$$

Observe that
$$
X X^T = \sum_{i=1}^m x_i x_i^T.
$$
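Since $w^T x_i$ is a scalar, it commutes with $x_i$, which is why $w$ factors out on the right. A minimal NumPy sketch of both identities (hypothetical random data, with the $x_i$ stored as the columns of `X`):

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 4, 10                                 # arbitrary feature dimension and sample count
X = rng.normal(size=(d, m))                  # columns are x_1, ..., x_m
w = rng.normal(size=d)

lhs = sum((w @ X[:, i]) * X[:, i] for i in range(m))    # sum_i (w^T x_i) x_i
A = sum(np.outer(X[:, i], X[:, i]) for i in range(m))   # sum_i x_i x_i^T

print(np.allclose(A, X @ X.T))          # X X^T equals the sum of outer products
print(np.allclose(lhs, A @ w))          # sum_i (w^T x_i) x_i equals A w
print(np.allclose(lhs, X @ (X.T @ w)))  # ... which is also X X^T w
```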

Apart from checking dimensions, the most rigorous way to prove this equality is to show that the two matrices $A = XX^T$ and $B = \sum_{i=1}^m x_i x_i^T$ define the same bilinear form, which means checking that

$$
e_i^T X X^T e_j = e_i^T \left( \sum_{k=1}^m x_k x_k^T \right) e_j
$$

for all $i, j$, where $e_i$ is the $i$-th vector of the canonical basis of $\mathbb{R}^d$. If you keep the indices straight, the equality follows directly.
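Spelled out, writing $(x_k)_i$ for the $i$-th component of $x_k$ (so $X_{ik} = (x_k)_i$), that index computation is just

$$
e_i^T X X^T e_j = (X X^T)_{ij} = \sum_{k=1}^m X_{ik}\, (X^T)_{kj} = \sum_{k=1}^m (x_k)_i\, (x_k)_j = \sum_{k=1}^m e_i^T\, x_k x_k^T\, e_j = e_i^T \left( \sum_{k=1}^m x_k x_k^T \right) e_j .
$$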






answered Jan 29 at 16:30, edited Jan 29 at 17:15 – user8469759