Max Cut: Form of Graph Laplacian?











My convex optimization notes define the max cut problem as
$$\max_{x\in\Bbb{R}^n} \hspace{.1 in} x^TL_Gx \hspace{.5 in}
\text{subject to } x_i\in\{-1,1\},\ i=1,\cdots,n$$

where $L_G$ is a matrix called the Laplacian of the graph $G$.

In reality, we are maximizing the expression
$$\dfrac{1}{2}\sum_{i,j\in V}w_{ij}(x_i-x_j)^2
\propto
\dfrac{1}{2}\sum_{i,j\in V}w_{ij}(1-x_ix_j),
\hspace{.5 in} x\in\{-1,1\}^n.$$

Can someone explain/derive how the two expressions are equal? That is, what is the form of $L_G$ such that
$$\dfrac{1}{2}\sum_{i,j\in V}w_{ij}(x_i-x_j)^2=x^TL_Gx$$
or such that
$$\dfrac{1}{2}\sum_{i,j\in V}w_{ij}(1-x_ix_j)=x^TL_Gx?$$
Clearly $x^TAx=\sum_{ij}A_{ij}x_ix_j$, but that's not the form we have above.



From the second form, I see that we almost get there:
$$\dfrac{1}{2}\sum_{i,j\in V}w_{ij}(1-x_ix_j)=
\dfrac{1}{2}\sum_{i,j\in V}w_{ij}
-\dfrac{1}{2}\sum_{i,j\in V}w_{ij}x_ix_j
=\dfrac{1}{2}\sum_{i,j\in V}w_{ij}
-\dfrac{1}{2}x^TWx,$$

but the constant first term confuses me.
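The equality being asked about can at least be checked numerically. The sketch below assumes the standard construction $L_G = D - W$ (diagonal degree matrix minus weight matrix), which is an assumption here, not something stated in the notes above:

```python
# Sanity check (sketch): assuming L = D - W with D_ii = sum_j w_ij,
# verify that (1/2) * sum_{i,j} w_ij (x_i - x_j)^2 equals x^T L x.
import numpy as np

rng = np.random.default_rng(0)
n = 5
W = rng.random((n, n))
W = (W + W.T) / 2           # symmetric weight matrix
np.fill_diagonal(W, 0)      # no self-loops
D = np.diag(W.sum(axis=1))  # weighted degree matrix
L = D - W                   # candidate Laplacian

x = rng.choice([-1.0, 1.0], size=n)
lhs = 0.5 * sum(W[i, j] * (x[i] - x[j]) ** 2
                for i in range(n) for j in range(n))
rhs = x @ L @ x
print(abs(lhs - rhs) < 1e-9)  # True
```

Note the identity holds for any real $x$, not only $x\in\{-1,1\}^n$; the $\pm 1$ constraint only matters for the second form $\frac{1}{2}\sum w_{ij}(1-x_ix_j)$.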










  • Have a look at csustan.csustan.edu/~tom/Clustering/GraphLaplacian-tutorial.pdf
    – Jean Marie
    2 days ago















convex-optimization






asked 2 days ago by Dan












1 Answer
I seem to have figured out a derivation to the point where I am satisfied. If someone posts a better solution, I will mark it as "best answer." Here is my solution:



The elements of the (simple) graph Laplacian are given by (from Wikipedia):
$$
L_{ij}:=
\begin{cases}
\text{deg}(v_i), & \text{if } i=j\\
-1, & \text{if } i\sim j\\
0, & \text{otherwise}
\end{cases}
$$

So an example graph Laplacian might look like:
$$
L_{\text{example}}=\begin{bmatrix}
2&-1&-1&0\\
-1&3&-1&-1\\
-1&-1&2&0\\
0&-1&0&1
\end{bmatrix}
$$

Notice how each row sums to zero: the diagonal element counts the adjacent vertices, while the off-diagonal elements subtract $1$ for each adjacent vertex. Because the matrix is symmetric, each column sums to zero for the same reason.



Now let $x\in\{-1,1\}^n$, where $x_i$ represents whether vertex $i$ is on one side of the cut or the other. One example could be:
$$
x_{\text{example}}=\begin{bmatrix}
1\\
-1\\
-1\\
1
\end{bmatrix}
$$

so computing $L_{\text{example}}x_{\text{example}}$ returns a column vector. The $i$th element of this vector is obtained by taking the degree of vertex $i$, adding $1$ for each neighbor on the other side of the cut, subtracting $1$ for each neighbor on the same side of the cut, and then multiplying the whole entry by $-1$ if vertex $i$ itself is on the $-1$ side. That sign flip doesn't matter, though, because computing $x_{\text{example}}^TL_{\text{example}}x_{\text{example}}$ cancels out these minus signs. For the example above,
$$
x_{\text{example}}^TL_{\text{example}}x_{\text{example}}=
\begin{bmatrix}
1&-1&-1&1
\end{bmatrix}
\begin{bmatrix}
4\\
-4\\
-2\\
2
\end{bmatrix}
=12
$$
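The worked example can be reproduced in a few lines; this sketch just re-enters the matrix and cut vector from above:

```python
# Reproduce the example: L is the Laplacian above, x the cut vector.
import numpy as np

L = np.array([[ 2, -1, -1,  0],
              [-1,  3, -1, -1],
              [-1, -1,  2,  0],
              [ 0, -1,  0,  1]])
x = np.array([1, -1, -1, 1])

print(L @ x)      # [ 4 -4 -2  2]
print(x @ L @ x)  # 12

# This graph's edges are (1,2), (1,3), (2,3), (2,4); the cut {1,4} | {2,3}
# crosses (1,2), (1,3), (2,4), so x^T L x = 4 * 3 = 12, as claimed below.
```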



Thus, it's easy to see that element $i$ of $Lx$ gives (up to a factor of $-1$):
$$
(Lx)_i=
\text{deg}(v_i)+\Bigg(\sum_{\substack{j\sim i,\\ j\text{ other side}}}1\Bigg)
-\Bigg(\sum_{\substack{j\sim i,\\ j\text{ same side}}}1\Bigg)
$$

We also see that $x^TLx$ gives the sum of these:
$$
\begin{align}
x^TLx&=\sum_{i\in V}\text{deg}(v_i)+2(\text{# edges crossing cut})-2(\text{# edges not crossing cut})\\
&=2(\text{# edges}+\text{# edges crossing cut}-\text{# edges not crossing cut})\\
&=4(\text{# edges crossing cut})
\end{align}
$$

because
$$
\text{# edges}=\text{# edges crossing cut}+\text{# edges not crossing cut}.
$$

Thus, this representation with $L$ (specifically $x^TLx$) is useful in convex optimization/max cut because it is optimizing something proportional to the number of edges crossing the cut.



Clearly this is the result for an unweighted graph Laplacian. The generalization to a graph with weighted edges is simple and left as an exercise for the reader.
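For completeness, the weighted case can be sketched directly. This uses the standard construction $L_G = D - W$ with $D_{ii}=\sum_j w_{ij}$ the weighted degree, which is filled in here as an assumption rather than taken from the answer above:
$$
x^TL_Gx = x^TDx - x^TWx
= \sum_{i\in V}\Big(\sum_{j\in V}w_{ij}\Big)x_i^2 - \sum_{i,j\in V}w_{ij}x_ix_j
= \dfrac{1}{2}\sum_{i,j\in V}w_{ij}(x_i-x_j)^2,
$$
using the symmetry $w_{ij}=w_{ji}$. For $x\in\{-1,1\}^n$, each term satisfies $(x_i-x_j)^2=2(1-x_ix_j)$, so $x^TL_Gx$ equals $4\times(\text{weight of edges crossing the cut})$, matching the unweighted count above.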






answered 2 days ago by Dan






























             
