Max Cut: Form of Graph Laplacian?
In my convex optimization notes, the max cut problem is defined as
$$\max_{x\in\Bbb{R}^n} \hspace{.1 in} x^TL_Gx\hspace{.5 in}
\text{subject to } x_i\in \{-1,1\},\ i=1,\cdots,n$$
where $L_G$ is a matrix called the Laplacian of the graph $G$.
In reality, we are maximizing the expression
$$\dfrac{1}{2}\sum_{i,j\in V}w_{ij}(x_i-x_j)^2
\propto
\dfrac{1}{2}\sum_{i,j\in V}w_{ij}(1-x_ix_j)
,\hspace{.5 in}x\in \{-1,1\}^n.$$
Can someone explain/derive how the two formulations agree? I.e., what is the form of $L_G$ such that
$$\dfrac{1}{2}\sum_{i,j\in V}w_{ij}(x_i-x_j)^2=x^TL_Gx$$
or such that
$$\dfrac{1}{2}\sum_{i,j\in V}w_{ij}(1-x_ix_j)=x^TL_Gx$$
because clearly $x^TAx=\sum_{i,j}A_{ij}x_ix_j$, but that's not the form we have above.
From the second form, I see that we almost get there:
$$\dfrac{1}{2}\sum_{i,j\in V}w_{ij}(1-x_ix_j)=
\dfrac{1}{2}\sum_{i,j\in V}w_{ij}
-\dfrac{1}{2}\sum_{i,j\in V}w_{ij}x_ix_j
=\dfrac{1}{2}\sum_{i,j\in V}w_{ij}
-\dfrac{1}{2}x^TWx
$$
but the first term confuses me.
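For what it's worth, the expansion above does check out numerically; here is a quick sketch (the symmetric weight matrix $W$ with zero diagonal is just something I made up for testing, not taken from my notes):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 5
W = rng.random((n, n))
W = (W + W.T) / 2            # make the weights symmetric
np.fill_diagonal(W, 0.0)     # no self-loops

x = rng.choice([-1.0, 1.0], size=n)            # a cut vector in {-1, 1}^n

lhs = 0.5 * np.sum(W * (1 - np.outer(x, x)))   # (1/2) sum_{i,j} w_ij (1 - x_i x_j)
rhs = 0.5 * np.sum(W) - 0.5 * (x @ W @ x)      # (1/2) sum_{i,j} w_ij - (1/2) x^T W x
print(np.isclose(lhs, rhs))                    # True
```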
convex-optimization
Have a look at csustan.csustan.edu/~tom/Clustering/GraphLaplacian-tutorial.pdf
– Jean Marie
2 days ago
1 Answer
I seem to have figured a derivation out to the point where I am satisfied. If someone posts a better solution, I will mark it as "best answer." Here is my solution:
The elements of the (simple) graph Laplacian are given by (from Wikipedia):
$$
L_{ij}:=
\begin{cases}
\text{deg}(v_i), & \text{if } i=j\\
-1, & \text{if } i\sim j\\
0, & \text{otherwise}
\end{cases}
$$
So an example graph Laplacian might look like:
$$
L_{\text{example}}=\begin{bmatrix}
2&-1&-1&0 \\
-1&3&-1&-1\\
-1&-1&2&0\\
0&-1&0&1
\end{bmatrix}
$$
Notice how each row sums to zero: the diagonal entry is the number of neighbours of that vertex, and the off-diagonal entries subtract $1$ for each of those neighbours. Since the matrix is symmetric, each column sums to zero for the same reason.
Now let $x\in \{-1,1\}^n$, where $x_i$ represents whether vertex $i$ is on one side of the cut or the other. One example could be:
$$
x_{\text{example}}=\begin{bmatrix}
1\\
-1\\
-1\\
1
\end{bmatrix}
$$
so computing $L_{\text{example}}x_{\text{example}}$ returns a column vector. The $i$th entry of this vector is obtained by taking the degree of vertex $i$, adding $1$ for each neighbour on the other side of the cut, subtracting $1$ for each neighbour on the same side of the cut, and then flipping the sign if vertex $i$ itself sits on the $-1$ side. The sign flip doesn't matter, because forming $x_{\text{example}}^TL_{\text{example}}x_{\text{example}}$ multiplies entry $i$ by $x_i$ and cancels those minus signs again. For the example above,
$$
x_{\text{example}}^TL_{\text{example}}x_{\text{example}}=
\begin{bmatrix}
1&
-1&
-1&
1
\end{bmatrix}
\begin{bmatrix}
4\\
-4\\
-2\\
2
\end{bmatrix}
=12
$$
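Continuing the code sketch above, the same numbers fall out directly:

```python
x = np.array([1, -1, -1, 1])   # the example cut vector

print(L @ x)       # [ 4 -4 -2  2]
print(x @ L @ x)   # 12
```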
Thus, it's easy to see that entry $i$ of $Lx$ gives (up to a sign, which is just $x_i$):
$$
(Lx)_i=
\text{deg}(v_i)+\Bigg(\sum_{\substack{j\sim i,\\ j\text{ on the other side}}}1\Bigg)
-\Bigg(\sum_{\substack{j\sim i,\\ j\text{ on the same side}}}1\Bigg)
$$
Multiplying entry $i$ by $x_i$ removes that sign, so $x^TLx=\sum_{i\in V}x_i(Lx)_i$ is the sum of these quantities over all vertices; each edge is then counted once from each of its two endpoints, giving
$$
\begin{align}
x^TLx&=\sum_{i\in V}\text{deg}(v_i)+2(\text{\# edges crossing cut})-2(\text{\# edges not crossing cut})\\
&=2(\text{\# edges}+\text{\# edges crossing cut}-\text{\# edges not crossing cut})\\
&=4(\text{\# edges crossing cut})
\end{align}
$$
because
$$
\text{\# edges}=\text{\# edges crossing cut}+\text{\# edges not crossing cut}.
$$
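This matches the example: the cut $x_{\text{example}}$ is crossed by the three edges $\{1,2\},\{1,3\},\{2,4\}$, and $4\times 3=12$. One more line in the sketch confirms it:

```python
edges = [(0, 1), (0, 2), (1, 2), (1, 3)]        # 0-indexed edge list of the example graph
crossing = sum(x[i] != x[j] for i, j in edges)  # 3 edges cross the cut
print(crossing, 4 * crossing == x @ L @ x)      # 3 True
```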
Thus, this representation with $L$ (specifically $x^TLx$) is useful for max cut: maximizing $x^TLx$ over $x\in\{-1,1\}^n$ maximizes a quantity proportional to the number of edges crossing the cut.
Clearly this is the result for an unweighted graph Laplacian. The generalization to a graph with weighted edges is simple and left as an exercise for the reader.
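For completeness, here is the weighted version written out, assuming (as is standard, though it is my assumption here rather than something stated in the notes) that $L_G=D-W$ with $D_{ii}=\sum_{j}w_{ij}$. Using the symmetry $w_{ij}=w_{ji}$,
$$
x^TL_Gx=\sum_{i\in V}\Big(\sum_{j\in V}w_{ij}\Big)x_i^2-\sum_{i,j\in V}w_{ij}x_ix_j
=\frac{1}{2}\sum_{i,j\in V}w_{ij}\big(x_i^2+x_j^2-2x_ix_j\big)
=\frac{1}{2}\sum_{i,j\in V}w_{ij}(x_i-x_j)^2,
$$
which, on $x\in\{-1,1\}^n$, equals $2\sum_{i,j\in V}w_{ij}\,[x_i\ne x_j]$, i.e. four times the total weight of the edges crossing the cut (each crossing edge appears once as $(i,j)$ and once as $(j,i)$).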