Geometric justification of a rotation matrix












From Serge Lang's "Linear Algebra":




We can define a rotation in terms of matrices.



Indeed, we call a linear map $L: \mathbb{R}^2 \rightarrow \mathbb{R}^2$ a rotation if its associated matrix can be written in the form:

$$\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$



The geometric justification for this definition comes from Fig. 1.



[Fig. 1: the vectors $E^1$ and $E^2$ and their images $L(E^1)$ and $L(E^2)$ under the rotation by the angle $\theta$]



We see that:



$$L(E^1) = (\cos\theta)E^1 + (\sin\theta)E^2$$

$$L(E^2) = (-\sin\theta)E^1 + (\cos\theta)E^2$$



Thus our definition corresponds precisely to the picture. When the matrix of the rotation is as above, we say that the rotation is by an angle $\theta$.



For example, the matrix associated with a rotation by an angle $\frac{\pi}{2}$ is:

$$R\left(\frac{\pi}{2}\right)=\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$




Linear Transformation Perspective:



I think that $L(E^1)$ and $L(E^2)$ are a basis for the column space of the matrix $A$ (and hence a basis for the image under the linear transformation $L$).



It is known that $L(X)=AX$, where $A$ is the matrix associated with $L$ and $X=(x_1, x_2)$ is the input in $L$'s definition. Also $AX=b$, where $b$ is an element of the 2-dimensional image subspace (correct?).



On the basis thereof, I think we get:



$$\begin{pmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{pmatrix}\begin{pmatrix}
x_1 \\
x_2
\end{pmatrix}=\begin{pmatrix}
b_1 \\
b_2
\end{pmatrix}$$



where $A=\begin{pmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22}
\end{pmatrix}=\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$



For example, $\cos(\theta)x_1 + \sin(\theta)x_2=b_1$, which seems to be the equivalent of $L(E^1) = (\cos\theta)E^1 + (\sin\theta)E^2$.
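As a numeric sanity check of this setup, here is a minimal NumPy sketch (the values of $\theta$ and $X$ are arbitrary illustrative choices). Note that the columns of $A$ are the images of the standard basis vectors, while $b_1$ is produced by the first row of $A$ and therefore carries a minus sign:

```python
import numpy as np

theta = np.pi / 3                       # arbitrary angle for illustration
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

X = np.array([2.0, 1.0])                # arbitrary input vector
b = A @ X                               # b = AX, the image of X under L

# The columns of A are the images of the standard basis vectors:
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(np.allclose(A @ e1, A[:, 0]))     # True
print(np.allclose(A @ e2, A[:, 1]))     # True

# b_1 comes from the first ROW of A, so it carries a minus sign:
print(np.isclose(b[0], np.cos(theta)*X[0] - np.sin(theta)*X[1]))  # True
```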



Geometry Perspective (problem is here):



This is where it gets confusing for me: $E^1$ and $E^2$ from Figure 1 look like unit vectors in the $x$ and $y$ directions respectively. If so, is there a proof that $\|E^1\|=\|x_1\|=1$ and that $\|E^2\|=\|x_2\|=1$? If not, what do they represent?



Furthermore, I'm aware from basic trigonometry that the sine function represents the vertical leg of a triangle in the unit circle, whereas cosine represents the horizontal one. Does this have anything to do with Figure 1?



In short:



Is there any deeper explanation of the geometric justification above? I'm unable to understand it completely.



Thank you!










linear-algebra matrices trigonometry linear-transformations

asked Jan 21 at 18:30 by ShellRox
  • What are $x_1$ and $x_2$? Also, the picture does not seem to imply that $E^1$ and $E^2$ are unit vectors, just that they are perpendicular vectors of the same nonzero length. – Servaes, Jan 21 at 18:36












  • @Servaes $x_1$ and $x_2$ are the elements of the input vector $X$ such that $L(X)=b$. – ShellRox, Jan 21 at 18:45






  • That raises the question of what $X$ and $b$ are. What do you mean by $\|E^1\|=\|x_1\|=1$? What part of this should there be a proof of? Also, I do not know from which book this excerpt comes. Perhaps $E^1$ and $E^2$ denote the standard basis vectors in $\Bbb{R}^2$? – Servaes, Jan 21 at 18:46












  • I apologize for the misunderstanding, I thought $x_1=E^1$. $X$ defines the domain, whereas $b$ defines the image. But in essence, $b$ is also a column space which has basis $L(E^1)$ and $L(E^2)$ (since image = column space). But the "basis" for the domain should be $X=(x_1, x_2)=(E^1, E^2)$, correct? – ShellRox, Jan 21 at 18:50








  • This is not making any sense. From the picture and the text, $E^1$ and $E^2$ are both vectors, so $(E^1,E^2)$ does not make sense (it is not an element of $\Bbb{R}^2$). – Servaes, Jan 21 at 18:51
















2 Answers



















I do not know which book the excerpt is from, so I do not know what exactly is meant by $E^1$ and $E^2$; the picture only suggests that $E^1$ and $E^2$ are perpendicular vectors of the same nonzero length, but perhaps in the context of the book $E^1$ and $E^2$ are the standard basis vectors for $\Bbb{R}^2$. I'll take a guess at what the geometric idea is:



Let $L: \Bbb{R}^2 \longrightarrow \Bbb{R}^2$ be a linear map given by the matrix $\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$. Let $e_1$ and $e_2$ be the standard basis vectors of $\Bbb{R}^2$. Then
\begin{eqnarray*}
L(e_1)&=&\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix}1 \\ 0 \end{pmatrix}
=\begin{pmatrix}
\cos\theta \\
\sin\theta
\end{pmatrix},\\
L(e_2)&=&\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix}0 \\ 1 \end{pmatrix}
=\begin{pmatrix}
-\sin\theta \\
\hphantom{-}\cos\theta
\end{pmatrix},
\end{eqnarray*}

and the picture shows, by elementary trigonometry, that these vectors are precisely the standard basis vectors rotated over an angle $\theta$ about the origin. Because rotations are linear maps, by extension every vector $X=(x_1,x_2)\in\Bbb{R}^2$ is rotated over an angle $\theta$ about the origin, and hence we call $L$ a rotation.
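A minimal NumPy sketch of this computation (the angle and the test vector below are arbitrary illustrative values):

```python
import numpy as np

theta = 0.7                             # arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# L(e1) = (cos(theta), sin(theta)) and L(e2) = (-sin(theta), cos(theta)):
print(np.allclose(R @ e1, [np.cos(theta), np.sin(theta)]))    # True
print(np.allclose(R @ e2, [-np.sin(theta), np.cos(theta)]))   # True

# By linearity, every vector keeps its length and gains angle theta:
v = np.array([3.0, -2.0])
w = R @ v
print(np.isclose(np.linalg.norm(w), np.linalg.norm(v)))       # True
dphi = (np.arctan2(w[1], w[0]) - np.arctan2(v[1], v[0])) % (2*np.pi)
print(np.isclose(dphi, theta))                                # True
```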






– Servaes, answered Jan 21 at 19:16, edited Jan 21 at 19:24
  • Well that makes sense, thank you for the answer. The book is called "Linear Algebra" (written by Serge Lang). By the way, in your second standard basis equation I assume you meant $L(e_2)$ instead of $L(e_1)$? – ShellRox, Jan 21 at 19:23










  • Indeed, edited. And thanks for providing the reference. – Servaes, Jan 21 at 19:24





















"This is where it gets confusing for me, $E^1$ and $E^2$ from Figure 1 look like unit vectors in the $x$ and $y$ directions respectively."




That's just an unfortunate choice of example vectors.



In general, we can write any 2D rotation matrix $\mathbf{R}$ as
$$\mathbf{R} = \left[ \begin{matrix} r_{11} & r_{12} \\ r_{21} & r_{22} \end{matrix} \right]$$
where the unit vectors
$$\hat{e}_1 = \left[ \begin{matrix} r_{11} \\ r_{21} \end{matrix} \right], \quad \hat{e}_2 = \left[ \begin{matrix} r_{12} \\ r_{22} \end{matrix} \right]$$
describe the basis vectors after rotation. (The corresponding basis vectors before rotation are of course $\left[ \begin{matrix} 1 \\ 0 \end{matrix} \right]$ and $\left[ \begin{matrix} 0 \\ 1 \end{matrix} \right]$.) Because pure rotation matrices are orthonormal, $\mathbf{R}^{-1} = \mathbf{R}^T$, the unit vectors
$$\hat{\epsilon}_1 = \left[ \begin{matrix} r_{11} \\ r_{12} \end{matrix} \right], \quad \hat{\epsilon}_2 = \left[ \begin{matrix} r_{21} \\ r_{22} \end{matrix} \right]$$
describe the basis vectors after the inverse rotation.



If we look at the 2D counterclockwise rotation by $\varphi$,
$$\mathbf{R} = \left[ \begin{matrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{matrix} \right]$$
where
$$\hat{e}_1 = \left[ \begin{matrix} \cos\varphi \\ \sin\varphi \end{matrix} \right], \quad \hat{e}_2 = \left[ \begin{matrix} -\sin\varphi \\ \cos\varphi \end{matrix} \right], \quad \hat{\epsilon}_1 = \left[ \begin{matrix} \cos\varphi \\ -\sin\varphi \end{matrix} \right], \quad \hat{\epsilon}_2 = \left[ \begin{matrix} \sin\varphi \\ \cos\varphi \end{matrix} \right]$$
we notice that $\hat{\epsilon}_1$ and $\hat{\epsilon}_2$ are equivalent to $\hat{e}_1$ and $\hat{e}_2$, respectively, if we negate $\varphi$; and that $\lVert\hat{e}_1\rVert = \lVert\hat{e}_2\rVert = \lVert\hat{\epsilon}_1\rVert = \lVert\hat{\epsilon}_2\rVert = 1$ and $\hat{e}_1 \cdot \hat{e}_2 = \hat{\epsilon}_1 \cdot \hat{\epsilon}_2 = 0$.



If we look at the definition of orthogonal matrices, we have
$$\mathbf{R}^T \mathbf{R} = \mathbf{R} \mathbf{R}^T = \mathbf{I}$$
If we were to explore these properties, we'd find that the column vectors of $\mathbf{R}$ must form an orthonormal basis, as must the row vectors of $\mathbf{R}$. $\mathbf{R}$ must also always have a determinant of $+1$ or $-1$.
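A small NumPy check of these properties (a sketch with an arbitrary angle; the column negation at the end anticipates the reflection case discussed below):

```python
import numpy as np

phi = 1.1                                # arbitrary angle
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

# Orthogonality: R^T R = R R^T = I, so the inverse is just the transpose.
print(np.allclose(R.T @ R, np.eye(2)))   # True
print(np.allclose(R @ R.T, np.eye(2)))   # True

# A pure rotation has determinant +1 ...
print(np.isclose(np.linalg.det(R), 1.0))   # True

# ... and negating a column (adding a reflection) flips it to -1.
M = R.copy()
M[:, 0] = -M[:, 0]
print(np.isclose(np.linalg.det(M), -1.0))  # True
```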



The final wrinkle is that only orthogonal matrices with determinant $+1$ are pure rotation matrices. Those that have determinant $-1$ correspond to matrices with a reflection. The above rotation matrix has determinant $(\cos\varphi)^2 + (\sin\varphi)^2 = 1$. If you negate $\hat{e}_1$, the determinant becomes $-(\cos\varphi)^2 - (\sin\varphi)^2 = -1$, as one would expect, as you essentially add a reflection along the first basis vector after rotation to $\mathbf{R}$.



All of the above also applies to 3D rotation matrices. (For exploration on that, pick a random unit axis vector $\hat{a}$ and a rotation $\varphi$ around it. The rotation matrix this corresponds to is shown in the Rotation matrix Wikipedia article. Versors, or unit quaternions, can easily be used to represent an orientation analogously to the axis-angle formalism. Quaternion algebra makes combining rotations very easy.)
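For the 3D case, here is a minimal sketch of the axis-angle construction via Rodrigues' formula, $\mathbf{R} = \mathbf{I} + (\sin\varphi)\mathbf{K} + (1-\cos\varphi)\mathbf{K}^2$, where $\mathbf{K}$ is the cross-product matrix of the unit axis $\hat{a}$ (the axis and angle values below are arbitrary choices):

```python
import numpy as np

def axis_angle_matrix(axis, phi):
    """Rotation by angle phi about the given axis (Rodrigues' formula)."""
    a = np.asarray(axis, dtype=float)
    a /= np.linalg.norm(a)                  # normalize to a unit axis
    K = np.array([[    0, -a[2],  a[1]],
                  [ a[2],     0, -a[0]],
                  [-a[1],  a[0],     0]])   # cross-product matrix of a
    return np.eye(3) + np.sin(phi)*K + (1 - np.cos(phi))*(K @ K)

R = axis_angle_matrix([1.0, 2.0, 2.0], 0.8)  # arbitrary axis and angle
print(np.allclose(R.T @ R, np.eye(3)))       # orthonormal: True
print(np.isclose(np.linalg.det(R), 1.0))     # pure rotation: True
a = np.array([1.0, 2.0, 2.0]) / 3.0          # the unit axis itself
print(np.allclose(R @ a, a))                 # axis is left fixed: True
```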






– Nominal Animal, answered Jan 21 at 19:42
  • @ShellRox: You're welcome! I am not a mathematician, but as I do quite a bit of 2D/3D visualization (of atomic models and such), I wanted to show the geometric properties of rotation matrices that are particularly useful for programmers. Basically, if you tack on the versor/unit quaternion stuff on top, then something on how to combine rotations (say, for skeleton models and such), you quickly get a pretty comprehensive toolkit with everything with a direct geometric justification/description. I thought it might be useful; but I do agree that Servaes' answer is the more proper one. – Nominal Animal, Jan 22 at 15:23






  • @ShellRox: I think it is a case of using the same variable to refer to different things. I see it often in math books, because authors assume readers will infer the meaning correctly from the context :(. In the context of this question, $E^1$ and $E^2$ are just some arbitrary vectors that are being rotated by the linear transform $L$. In this notation, $L(E^1)$ and $L(E^2)$ are just those two vectors, rotated. The basis vectors before any rotation are $[1, 0]^T$ and $[0, 1]^T$. After rotation, they are $L([1,0]^T)$ and $L([0,1]^T)$, but these we can see directly from the matrix. – Nominal Animal, Jan 22 at 16:09






  • The reverse (meaning if we consider our basis vectors $[1, 0]^T$ and $[0, 1]^T$ after the rotation) is $L^{-1}([1,0]^T)$ and $L^{-1}([0,1]^T)$. It is an arbitrary choice, because the inverse of any rotation is a valid rotation too. In general, $L(\vec{v}) = \mathbf{R}\vec{v}$ and $L^{-1}(\vec{v}) = \mathbf{R}^{-1} \vec{v} = \mathbf{R}^T\vec{v}$, if that matters. – Nominal Animal, Jan 22 at 16:12








  • And hence the general inverse linear mapping follows from the properties of the orthogonal matrix. Also, I see that every linear map associated with an orthogonal matrix is an isomorphism, an injective map? Interesting... – ShellRox, Jan 22 at 17:09






  • @ShellRox: I personally find it useful to remember that the geometric interpretations of the $\hat{e}_1$, $\hat{e}_2$, and $\hat{\epsilon}_1$, $\hat{\epsilon}_2$ basis vectors also apply to all nondegenerate transformation matrices. That is, if you know how the basis vectors are transformed, you immediately have the transformation matrix, and can (easily, algebraically!) find the inverse transformation as well. For me, this has been an indispensable tool for simplifying problems like 3D trilateration. – Nominal Animal, Jan 22 at 18:06











Your Answer





StackExchange.ifUsing("editor", function () {
return StackExchange.using("mathjaxEditing", function () {
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
});
});
}, "mathjax-editing");

StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "69"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});

function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});


}
});














draft saved

draft discarded


















StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3082217%2fgeometric-justification-of-a-rotation-matrix%23new-answer', 'question_page');
}
);

Post as a guest















Required, but never shown

























2 Answers
2






active

oldest

votes








2 Answers
2






active

oldest

votes









active

oldest

votes






active

oldest

votes









1












$begingroup$

I do not know which book the excerpt is from, so I do not know what exactly is meant by $E^1$ and $E^2$; the picture only suggests that $E^1$ and $E^2$ are perpendicular vectors of the same nonzero length, but perhaps in the context of the book $E^1$ and $E^2$ are the standard basis vectors for $Bbb{R}^2$. I'll take a guess at what the geometric idea is:



Let $L: Bbb{R}^2 longrightarrow Bbb{R}^2$ be a linear map given by a matrix $tbinom{hphantom{-}costheta sintheta}{-sintheta costheta}$. Let $e_1$ and $e_2$ be the standard basis vectors of $Bbb{R}^2$. Then
begin{eqnarray*}
L(e_1)&=&begin{pmatrix} costheta & -sintheta \ sintheta & ,
costheta end{pmatrix}begin{pmatrix}1 \0 end{pmatrix}
=begin{pmatrix}
costheta \
sintheta
end{pmatrix},\
L(e_2)&=&begin{pmatrix} costheta & -sintheta \ sintheta & ,
costheta end{pmatrix}begin{pmatrix}0 \1 end{pmatrix}
=begin{pmatrix}
-sintheta \
hphantom{-}costheta
end{pmatrix},
end{eqnarray*}

and the picture shows, by elementary trigonometry, that these vectors are precisely the standard basis vectors rotated over an angle $theta$ about the origin. Because rotations are linear maps, by extension every vector $X=(x_1,x_2)inBbb{R}^2$ is rotated over an angle $theta$ about the origin, and hence we call $L$ a rotation.






share|cite|improve this answer











$endgroup$













  • $begingroup$
    Well that makes sense, thank you for the answer. The book is called "Linear Algebra" (written by Serge Lang). By the way, on your second standard basis equation I assume you meant $L(e_2)$ instead of $L(e_1)$?
    $endgroup$
    – ShellRox
    Jan 21 at 19:23










  • $begingroup$
    Indeed, edited. And thanks for providing the reference.
    $endgroup$
    – Servaes
    Jan 21 at 19:24
















1












$begingroup$

I do not know which book the excerpt is from, so I do not know what exactly is meant by $E^1$ and $E^2$; the picture only suggests that $E^1$ and $E^2$ are perpendicular vectors of the same nonzero length, but perhaps in the context of the book $E^1$ and $E^2$ are the standard basis vectors for $Bbb{R}^2$. I'll take a guess at what the geometric idea is:



Let $L: Bbb{R}^2 longrightarrow Bbb{R}^2$ be a linear map given by a matrix $tbinom{hphantom{-}costheta sintheta}{-sintheta costheta}$. Let $e_1$ and $e_2$ be the standard basis vectors of $Bbb{R}^2$. Then
begin{eqnarray*}
L(e_1)&=&begin{pmatrix} costheta & -sintheta \ sintheta & ,
costheta end{pmatrix}begin{pmatrix}1 \0 end{pmatrix}
=begin{pmatrix}
costheta \
sintheta
end{pmatrix},\
L(e_2)&=&begin{pmatrix} costheta & -sintheta \ sintheta & ,
costheta end{pmatrix}begin{pmatrix}0 \1 end{pmatrix}
=begin{pmatrix}
-sintheta \
hphantom{-}costheta
end{pmatrix},
end{eqnarray*}

and the picture shows, by elementary trigonometry, that these vectors are precisely the standard basis vectors rotated over an angle $theta$ about the origin. Because rotations are linear maps, by extension every vector $X=(x_1,x_2)inBbb{R}^2$ is rotated over an angle $theta$ about the origin, and hence we call $L$ a rotation.






share|cite|improve this answer











$endgroup$













  • $begingroup$
    Well that makes sense, thank you for the answer. The book is called "Linear Algebra" (written by Serge Lang). By the way, on your second standard basis equation I assume you meant $L(e_2)$ instead of $L(e_1)$?
    $endgroup$
    – ShellRox
    Jan 21 at 19:23










  • $begingroup$
    Indeed, edited. And thanks for providing the reference.
    $endgroup$
    – Servaes
    Jan 21 at 19:24














1












1








1





$begingroup$

I do not know which book the excerpt is from, so I do not know what exactly is meant by $E^1$ and $E^2$; the picture only suggests that $E^1$ and $E^2$ are perpendicular vectors of the same nonzero length, but perhaps in the context of the book $E^1$ and $E^2$ are the standard basis vectors for $Bbb{R}^2$. I'll take a guess at what the geometric idea is:



Let $L: Bbb{R}^2 longrightarrow Bbb{R}^2$ be a linear map given by a matrix $tbinom{hphantom{-}costheta sintheta}{-sintheta costheta}$. Let $e_1$ and $e_2$ be the standard basis vectors of $Bbb{R}^2$. Then
begin{eqnarray*}
L(e_1)&=&begin{pmatrix} costheta & -sintheta \ sintheta & ,
costheta end{pmatrix}begin{pmatrix}1 \0 end{pmatrix}
=begin{pmatrix}
costheta \
sintheta
end{pmatrix},\
L(e_2)&=&begin{pmatrix} costheta & -sintheta \ sintheta & ,
costheta end{pmatrix}begin{pmatrix}0 \1 end{pmatrix}
=begin{pmatrix}
-sintheta \
hphantom{-}costheta
end{pmatrix},
end{eqnarray*}

and the picture shows, by elementary trigonometry, that these vectors are precisely the standard basis vectors rotated over an angle $theta$ about the origin. Because rotations are linear maps, by extension every vector $X=(x_1,x_2)inBbb{R}^2$ is rotated over an angle $theta$ about the origin, and hence we call $L$ a rotation.






share|cite|improve this answer











$endgroup$



I do not know which book the excerpt is from, so I do not know what exactly is meant by $E^1$ and $E^2$; the picture only suggests that $E^1$ and $E^2$ are perpendicular vectors of the same nonzero length, but perhaps in the context of the book $E^1$ and $E^2$ are the standard basis vectors for $Bbb{R}^2$. I'll take a guess at what the geometric idea is:



Let $L: Bbb{R}^2 longrightarrow Bbb{R}^2$ be a linear map given by a matrix $tbinom{hphantom{-}costheta sintheta}{-sintheta costheta}$. Let $e_1$ and $e_2$ be the standard basis vectors of $Bbb{R}^2$. Then
begin{eqnarray*}
L(e_1)&=&begin{pmatrix} costheta & -sintheta \ sintheta & ,
costheta end{pmatrix}begin{pmatrix}1 \0 end{pmatrix}
=begin{pmatrix}
costheta \
sintheta
end{pmatrix},\
L(e_2)&=&begin{pmatrix} costheta & -sintheta \ sintheta & ,
costheta end{pmatrix}begin{pmatrix}0 \1 end{pmatrix}
=begin{pmatrix}
-sintheta \
hphantom{-}costheta
end{pmatrix},
end{eqnarray*}

and the picture shows, by elementary trigonometry, that these vectors are precisely the standard basis vectors rotated over an angle $theta$ about the origin. Because rotations are linear maps, by extension every vector $X=(x_1,x_2)inBbb{R}^2$ is rotated over an angle $theta$ about the origin, and hence we call $L$ a rotation.







share|cite|improve this answer














share|cite|improve this answer



share|cite|improve this answer








edited Jan 21 at 19:24

























answered Jan 21 at 19:16









ServaesServaes

27.5k34098




27.5k34098












  • $begingroup$
    Well that makes sense, thank you for the answer. The book is called "Linear Algebra" (written by Serge Lang). By the way, on your second standard basis equation I assume you meant $L(e_2)$ instead of $L(e_1)$?
    $endgroup$
    – ShellRox
    Jan 21 at 19:23










  • $begingroup$
    Indeed, edited. And thanks for providing the reference.
    $endgroup$
    – Servaes
    Jan 21 at 19:24


















  • $begingroup$
    Well that makes sense, thank you for the answer. The book is called "Linear Algebra" (written by Serge Lang). By the way, on your second standard basis equation I assume you meant $L(e_2)$ instead of $L(e_1)$?
    $endgroup$
    – ShellRox
    Jan 21 at 19:23










  • $begingroup$
    Indeed, edited. And thanks for providing the reference.
    $endgroup$
    – Servaes
    Jan 21 at 19:24
















$begingroup$
Well that makes sense, thank you for the answer. The book is called "Linear Algebra" (written by Serge Lang). By the way, on your second standard basis equation I assume you meant $L(e_2)$ instead of $L(e_1)$?
$endgroup$
– ShellRox
Jan 21 at 19:23




$begingroup$
Well that makes sense, thank you for the answer. The book is called "Linear Algebra" (written by Serge Lang). By the way, on your second standard basis equation I assume you meant $L(e_2)$ instead of $L(e_1)$?
$endgroup$
– ShellRox
Jan 21 at 19:23












$begingroup$
Indeed, edited. And thanks for providing the reference.
$endgroup$
– Servaes
Jan 21 at 19:24




$begingroup$
Indeed, edited. And thanks for providing the reference.
$endgroup$
– Servaes
Jan 21 at 19:24











1












$begingroup$


This is where it gets confusing for me, $E^1$ and $E^2$ from the figure 1 look like unit vectors in the $x$ and $y$ direction respectively.




That's just an unfortunate choice of example vectors.



In general, we can write any 2D rotation matrix $mathbf{R}$ as
$$bbox{ mathbf{R} = left [ begin{matrix} r_{11} & r_{12} \ r_{21} & r_{22} end{matrix} right ] }$$
where unit vectors
$$bbox{ hat{e}_1 = left [ begin{matrix} r_{11} \ r_{21} end{matrix} right ] } , quad bbox{ hat{e}_2 = left [ begin{matrix} r_{12} \ r_{22} end{matrix} right ] }$$
describe the basis vectors after rotation. (The corresponding basis vector before rotation are of course $left[ begin{matrix} 1 \ 0 end{matrix} right]$ and $left[ begin{matrix} 0 \ 1 end{matrix} right]$.) Because pure rotation matrices are orthonormal, $mathbf{R}^{-1} = mathbf{R}^T$, the unit vectors
$$bbox{ hat{epsilon}_1 = left [ begin{matrix} r_{11} \ r_{12} end{matrix} right ] } , quad bbox{ hat{epsilon}_2 = left [ begin{matrix} r_{21} \ r_{22} end{matrix} right ] }$$
describe the basis vectors after the inverse rotation.



If we look at the 2D counterclockwise rotation by $varphi$,
$$bbox{ mathbf{R} = left [ begin{matrix} cosvarphi & -sinvarphi \ sinvarphi & cosvarphi end{matrix} right ] }$$
where
$$bbox{
hat{e}_1 = left[ begin{matrix} cosvarphi \ sinvarphi end{matrix} right]
} , quad bbox{
hat{e}_2 = left[ begin{matrix} -sinvarphi \ cosvarphi end{matrix} right]
} , quad bbox{
hat{epsilon}_1 = left[ begin{matrix} cosvarphi \ -sinvarphi end{matrix} right] } , quad bbox{
hat{epsilon}_2 = left[ begin{matrix} sinvarphi \ cosvarphi end{matrix} right] }$$

we notice that $hat{epsilon}_1$ and $hat{epsilon}_2$ are equivalent to $hat{e}_1$ and $hat{e}_2$, respectively, if we negate $varphi$; and that $lVerthat{e}_1rVert = lVert hat{e}_2 rVert = lVert hat{epsilon}_1 rVert = lVert hat{epsilon}_2 rVert = 1$ and $hat{e}_1 cdot hat{e}_2 = hat{epsilon}_1 cdot hat{epsilon}_2 = 0$.



If we look at the definition of orthogonal matrices, we have
$$bbox{ mathbf{R}^T mathbf{R} = mathbf{R} mathbf{R}^T = mathbf{I} }$$
If we were to explore these properties, we'd find that the column vectors of $mathbf{R}$ must form an orthonormal basis, as must the row vectors of $mathbf{R}$. $mathbf{R}$ must also always have a determinant of $+1$ or $-1$.



The final wrinkle is that only orthogonal matrices with determinant $+1$ are pure rotation matrices. Those that have determinant $-1$ correspond to matrices with a reflection. The above rotation matrix has determinant $(cosvarphi)^2 + (sinvarphi)^2 = 1$. If you negate $hat{e}_1$, the determinant becomes $-(cosvarphi)^2 - (sinvarphi)^2 = -1$, as one would expect, as you essentially add reflection along the first basis vector after rotation to $mathbf{R}$.



All of the above also applies to 3D rotation matrices. (For exploration on that, pick a random unit axis vector $hat{a}$, and a rotation around it $varphi$. The rotation matrix that corresponds to is shown in the Rotation matrix Wikipedia article. Versors, or unit quaternions, can be easily used to represent an orientation analogously to the axis-angle formalism. Quaternion algebra makes combining rotations very easy.)






share|cite|improve this answer









$endgroup$









  • 1




    $begingroup$
    @ShellRox: You're welcome! I am not a mathematician, but as I do quite a bit of 2D/3D visualization (of atomic models and such), I wanted to show the geometric properties of rotation matrices that are particularly useful for programmers. Basically, if you tack on the versor/unit quaternion stuff on top, then something on how to combine rotations (say, for skeleton models and such), you quickly get a pretty comprehensive toolkit with everything with a direct geometric justification/description. I thought it might be useful; but I do agree that Servaes' answer is the more proper one.
    $endgroup$
    – Nominal Animal
    Jan 22 at 15:23






  • 1




    $begingroup$
    @ShellRox: I think it is a case of using the same variable to refer to different things. I see it often in math books, because authors assume readers will infer the meaning correctly from the context :(. In the context of this question, $E^1$ and $E^2$ are just some arbitrary vectors, that are being rotated by the linear transform $L$. In this notation, $L(E^1)$ and $L(E^2)$ are just those two vectors, rotated. The basis vectors before any rotation are $[1, 0]^T$ and $[0, 1]^T$. After rotation, they are $L([1,0]^T)$ and $L([0,1]^T)$, but these we can see directly from the matrix.
    $endgroup$
    – Nominal Animal
    Jan 22 at 16:09






  • 1




    $begingroup$
    The reverse (meaning if we consider our basis vectors $[1, 0]^T$ and $[0, 1]^T$ after the rotation) is $L^{-1}([1,0]^T)$ and $L^{-1}([0,1]^T)$. It is an arbitrary choice, because the inverse of any rotation is a valid rotation too. In general, $L(vec{v}) = mathbf{R}vec{v}$ and $L^{-1}(vec{v}) = mathbf{R}^{-1} vec{v} = mathbf{R}^Tvec{v}$, if that matters.
    $endgroup$
    – Nominal Animal
    Jan 22 at 16:12








  • 1




    $begingroup$
    And that hence general inverse linear mapping is from properties of orthogonal matrix. Also I see that every linear map associated with orthogonal matrices is isomorphism, injective map? Interesting...
    $endgroup$
    – ShellRox
    Jan 22 at 17:09






  • 1




    $begingroup$
    @ShellRox: I personally find it useful to remember that the geometric interpretation of $hat{e}_1$, $hat{e}_2$, and $hat{epsilon}_1$ and $hat{epsilon}_2$ basis vectors also apply to all nondegenerate transformation matrices. That is, that if you know how the basis vectors are transformed, you immediately have the transformation matrix, and can (easily, algebraically!) find the inverse transformation as well. For me, this has been an indispendable tool for simplifying problems like 3D trilateration.
    $endgroup$
    – Nominal Animal
    Jan 22 at 18:06
















1












$begingroup$


This is where it gets confusing for me, $E^1$ and $E^2$ from the figure 1 look like unit vectors in the $x$ and $y$ direction respectively.




That's just an unfortunate choice of example vectors.



In general, we can write any 2D rotation matrix $mathbf{R}$ as
$$bbox{ mathbf{R} = left [ begin{matrix} r_{11} & r_{12} \ r_{21} & r_{22} end{matrix} right ] }$$
where unit vectors
$$bbox{ hat{e}_1 = left [ begin{matrix} r_{11} \ r_{21} end{matrix} right ] } , quad bbox{ hat{e}_2 = left [ begin{matrix} r_{12} \ r_{22} end{matrix} right ] }$$
describe the basis vectors after rotation. (The corresponding basis vector before rotation are of course $left[ begin{matrix} 1 \ 0 end{matrix} right]$ and $left[ begin{matrix} 0 \ 1 end{matrix} right]$.) Because pure rotation matrices are orthonormal, $mathbf{R}^{-1} = mathbf{R}^T$, the unit vectors
$$bbox{ hat{epsilon}_1 = left [ begin{matrix} r_{11} \ r_{12} end{matrix} right ] } , quad bbox{ hat{epsilon}_2 = left [ begin{matrix} r_{21} \ r_{22} end{matrix} right ] }$$
describe the basis vectors after the inverse rotation.



If we look at the 2D counterclockwise rotation by $varphi$,
$$bbox{ mathbf{R} = left [ begin{matrix} cosvarphi & -sinvarphi \ sinvarphi & cosvarphi end{matrix} right ] }$$
where
$$bbox{
hat{e}_1 = left[ begin{matrix} cosvarphi \ sinvarphi end{matrix} right]
} , quad bbox{
hat{e}_2 = left[ begin{matrix} -sinvarphi \ cosvarphi end{matrix} right]
} , quad bbox{
hat{epsilon}_1 = left[ begin{matrix} cosvarphi \ -sinvarphi end{matrix} right] } , quad bbox{
hat{epsilon}_2 = left[ begin{matrix} sinvarphi \ cosvarphi end{matrix} right] }$$

we notice that $hat{epsilon}_1$ and $hat{epsilon}_2$ are equivalent to $hat{e}_1$ and $hat{e}_2$, respectively, if we negate $varphi$; and that $lVerthat{e}_1rVert = lVert hat{e}_2 rVert = lVert hat{epsilon}_1 rVert = lVert hat{epsilon}_2 rVert = 1$ and $hat{e}_1 cdot hat{e}_2 = hat{epsilon}_1 cdot hat{epsilon}_2 = 0$.



If we look at the definition of orthogonal matrices, we have
$$bbox{ mathbf{R}^T mathbf{R} = mathbf{R} mathbf{R}^T = mathbf{I} }$$
If we were to explore these properties, we'd find that the column vectors of $mathbf{R}$ must form an orthonormal basis, as must the row vectors of $mathbf{R}$. $mathbf{R}$ must also always have a determinant of $+1$ or $-1$.



The final wrinkle is that only orthogonal matrices with determinant $+1$ are pure rotation matrices. Those that have determinant $-1$ correspond to matrices with a reflection. The above rotation matrix has determinant $(cosvarphi)^2 + (sinvarphi)^2 = 1$. If you negate $hat{e}_1$, the determinant becomes $-(cosvarphi)^2 - (sinvarphi)^2 = -1$, as one would expect, as you essentially add reflection along the first basis vector after rotation to $mathbf{R}$.



All of the above also applies to 3D rotation matrices. (For exploration on that, pick a random unit axis vector $hat{a}$, and a rotation around it $varphi$. The rotation matrix that corresponds to is shown in the Rotation matrix Wikipedia article. Versors, or unit quaternions, can be easily used to represent an orientation analogously to the axis-angle formalism. Quaternion algebra makes combining rotations very easy.)






share|cite|improve this answer









$endgroup$









  • 1




    $begingroup$
    @ShellRox: You're welcome! I am not a mathematician, but as I do quite a bit of 2D/3D visualization (of atomic models and such), I wanted to show the geometric properties of rotation matrices that are particularly useful for programmers. Basically, if you tack on the versor/unit quaternion stuff on top, then something on how to combine rotations (say, for skeleton models and such), you quickly get a pretty comprehensive toolkit with everything with a direct geometric justification/description. I thought it might be useful; but I do agree that Servaes' answer is the more proper one.
    $endgroup$
    – Nominal Animal
    Jan 22 at 15:23






  • 1




    $begingroup$
    @ShellRox: I think it is a case of using the same variable to refer to different things. I see it often in math books, because authors assume readers will infer the meaning correctly from the context :(. In the context of this question, $E^1$ and $E^2$ are just some arbitrary vectors, that are being rotated by the linear transform $L$. In this notation, $L(E^1)$ and $L(E^2)$ are just those two vectors, rotated. The basis vectors before any rotation are $[1, 0]^T$ and $[0, 1]^T$. After rotation, they are $L([1,0]^T)$ and $L([0,1]^T)$, but these we can see directly from the matrix.
    $endgroup$
    – Nominal Animal
    Jan 22 at 16:09






  • 1




    $begingroup$
    The reverse (meaning if we consider our basis vectors $[1, 0]^T$ and $[0, 1]^T$ after the rotation) is $L^{-1}([1,0]^T)$ and $L^{-1}([0,1]^T)$. It is an arbitrary choice, because the inverse of any rotation is a valid rotation too. In general, $L(vec{v}) = mathbf{R}vec{v}$ and $L^{-1}(vec{v}) = mathbf{R}^{-1} vec{v} = mathbf{R}^Tvec{v}$, if that matters.
    $endgroup$
    – Nominal Animal
    Jan 22 at 16:12








  • 1




    $begingroup$
    And that hence general inverse linear mapping is from properties of orthogonal matrix. Also I see that every linear map associated with orthogonal matrices is isomorphism, injective map? Interesting...
    $endgroup$
    – ShellRox
    Jan 22 at 17:09






  • 1




    $begingroup$
    @ShellRox: I personally find it useful to remember that the geometric interpretation of $hat{e}_1$, $hat{e}_2$, and $hat{epsilon}_1$ and $hat{epsilon}_2$ basis vectors also apply to all nondegenerate transformation matrices. That is, that if you know how the basis vectors are transformed, you immediately have the transformation matrix, and can (easily, algebraically!) find the inverse transformation as well. For me, this has been an indispendable tool for simplifying problems like 3D trilateration.
    $endgroup$
    – Nominal Animal
    Jan 22 at 18:06














1












1








1





$begingroup$


This is where it gets confusing for me, $E^1$ and $E^2$ from the figure 1 look like unit vectors in the $x$ and $y$ direction respectively.




That's just an unfortunate choice of example vectors.



In general, we can write any 2D rotation matrix $mathbf{R}$ as
$$bbox{ mathbf{R} = left [ begin{matrix} r_{11} & r_{12} \ r_{21} & r_{22} end{matrix} right ] }$$
where unit vectors
$$bbox{ hat{e}_1 = left [ begin{matrix} r_{11} \ r_{21} end{matrix} right ] } , quad bbox{ hat{e}_2 = left [ begin{matrix} r_{12} \ r_{22} end{matrix} right ] }$$
describe the basis vectors after rotation. (The corresponding basis vector before rotation are of course $left[ begin{matrix} 1 \ 0 end{matrix} right]$ and $left[ begin{matrix} 0 \ 1 end{matrix} right]$.) Because pure rotation matrices are orthonormal, $mathbf{R}^{-1} = mathbf{R}^T$, the unit vectors
$$bbox{ hat{epsilon}_1 = left [ begin{matrix} r_{11} \ r_{12} end{matrix} right ] } , quad bbox{ hat{epsilon}_2 = left [ begin{matrix} r_{21} \ r_{22} end{matrix} right ] }$$
describe the basis vectors after the inverse rotation.



If we look at the 2D counterclockwise rotation by $varphi$,
$$bbox{ mathbf{R} = left [ begin{matrix} cosvarphi & -sinvarphi \ sinvarphi & cosvarphi end{matrix} right ] }$$
where
$$bbox{
hat{e}_1 = left[ begin{matrix} cosvarphi \ sinvarphi end{matrix} right]
} , quad bbox{
hat{e}_2 = left[ begin{matrix} -sinvarphi \ cosvarphi end{matrix} right]
} , quad bbox{
hat{epsilon}_1 = left[ begin{matrix} cosvarphi \ -sinvarphi end{matrix} right] } , quad bbox{
hat{epsilon}_2 = left[ begin{matrix} sinvarphi \ cosvarphi end{matrix} right] }$$

we notice that $hat{epsilon}_1$ and $hat{epsilon}_2$ are equivalent to $hat{e}_1$ and $hat{e}_2$, respectively, if we negate $varphi$; and that $lVerthat{e}_1rVert = lVert hat{e}_2 rVert = lVert hat{epsilon}_1 rVert = lVert hat{epsilon}_2 rVert = 1$ and $hat{e}_1 cdot hat{e}_2 = hat{epsilon}_1 cdot hat{epsilon}_2 = 0$.



If we look at the definition of orthogonal matrices, we have
$$bbox{ mathbf{R}^T mathbf{R} = mathbf{R} mathbf{R}^T = mathbf{I} }$$
If we were to explore these properties, we'd find that the column vectors of $mathbf{R}$ must form an orthonormal basis, as must the row vectors of $mathbf{R}$. $mathbf{R}$ must also always have a determinant of $+1$ or $-1$.



The final wrinkle is that only orthogonal matrices with determinant $+1$ are pure rotation matrices. Those that have determinant $-1$ correspond to matrices with a reflection. The above rotation matrix has determinant $(cosvarphi)^2 + (sinvarphi)^2 = 1$. If you negate $hat{e}_1$, the determinant becomes $-(cosvarphi)^2 - (sinvarphi)^2 = -1$, as one would expect, as you essentially add reflection along the first basis vector after rotation to $mathbf{R}$.



All of the above also applies to 3D rotation matrices. (For exploration on that, pick a random unit axis vector $hat{a}$, and a rotation around it $varphi$. The rotation matrix that corresponds to is shown in the Rotation matrix Wikipedia article. Versors, or unit quaternions, can be easily used to represent an orientation analogously to the axis-angle formalism. Quaternion algebra makes combining rotations very easy.)






share|cite|improve this answer









$endgroup$




This is where it gets confusing for me, $E^1$ and $E^2$ from the figure 1 look like unit vectors in the $x$ and $y$ direction respectively.




That's just an unfortunate choice of example vectors.



In general, we can write any 2D rotation matrix $mathbf{R}$ as
$$bbox{ mathbf{R} = left [ begin{matrix} r_{11} & r_{12} \ r_{21} & r_{22} end{matrix} right ] }$$
where unit vectors
$$bbox{ hat{e}_1 = left [ begin{matrix} r_{11} \ r_{21} end{matrix} right ] } , quad bbox{ hat{e}_2 = left [ begin{matrix} r_{12} \ r_{22} end{matrix} right ] }$$
describe the basis vectors after rotation. (The corresponding basis vector before rotation are of course $left[ begin{matrix} 1 \ 0 end{matrix} right]$ and $left[ begin{matrix} 0 \ 1 end{matrix} right]$.) Because pure rotation matrices are orthonormal, $mathbf{R}^{-1} = mathbf{R}^T$, the unit vectors
$$bbox{ hat{epsilon}_1 = left [ begin{matrix} r_{11} \ r_{12} end{matrix} right ] } , quad bbox{ hat{epsilon}_2 = left [ begin{matrix} r_{21} \ r_{22} end{matrix} right ] }$$
describe the basis vectors after the inverse rotation.



If we look at the 2D counterclockwise rotation by $varphi$,
$$bbox{ mathbf{R} = left [ begin{matrix} cosvarphi & -sinvarphi \ sinvarphi & cosvarphi end{matrix} right ] }$$
where
$$bbox{
hat{e}_1 = left[ begin{matrix} cosvarphi \ sinvarphi end{matrix} right]
} , quad bbox{
hat{e}_2 = left[ begin{matrix} -sinvarphi \ cosvarphi end{matrix} right]
} , quad bbox{
hat{epsilon}_1 = left[ begin{matrix} cosvarphi \ -sinvarphi end{matrix} right] } , quad bbox{
hat{epsilon}_2 = left[ begin{matrix} sinvarphi \ cosvarphi end{matrix} right] }$$

we notice that $hat{epsilon}_1$ and $hat{epsilon}_2$ are equivalent to $hat{e}_1$ and $hat{e}_2$, respectively, if we negate $varphi$; and that $lVerthat{e}_1rVert = lVert hat{e}_2 rVert = lVert hat{epsilon}_1 rVert = lVert hat{epsilon}_2 rVert = 1$ and $hat{e}_1 cdot hat{e}_2 = hat{epsilon}_1 cdot hat{epsilon}_2 = 0$.



If we look at the definition of orthogonal matrices, we have
$$bbox{ mathbf{R}^T mathbf{R} = mathbf{R} mathbf{R}^T = mathbf{I} }$$
If we were to explore these properties, we'd find that the column vectors of $mathbf{R}$ must form an orthonormal basis, as must the row vectors of $mathbf{R}$. $mathbf{R}$ must also always have a determinant of $+1$ or $-1$.



The final wrinkle is that only orthogonal matrices with determinant $+1$ are pure rotation matrices. Those that have determinant $-1$ correspond to matrices with a reflection. The above rotation matrix has determinant $(cosvarphi)^2 + (sinvarphi)^2 = 1$. If you negate $hat{e}_1$, the determinant becomes $-(cosvarphi)^2 - (sinvarphi)^2 = -1$, as one would expect, as you essentially add reflection along the first basis vector after rotation to $mathbf{R}$.



All of the above also applies to 3D rotation matrices. (For exploration on that, pick a random unit axis vector $hat{a}$, and a rotation around it $varphi$. The rotation matrix that corresponds to is shown in the Rotation matrix Wikipedia article. Versors, or unit quaternions, can be easily used to represent an orientation analogously to the axis-angle formalism. Quaternion algebra makes combining rotations very easy.)







share|cite|improve this answer












share|cite|improve this answer



share|cite|improve this answer










answered Jan 21 at 19:42









Nominal AnimalNominal Animal

7,0632617




7,0632617








  • 1




    $begingroup$
    @ShellRox: You're welcome! I am not a mathematician, but as I do quite a bit of 2D/3D visualization (of atomic models and such), I wanted to show the geometric properties of rotation matrices that are particularly useful for programmers. Basically, if you tack on the versor/unit quaternion stuff on top, then something on how to combine rotations (say, for skeleton models and such), you quickly get a pretty comprehensive toolkit with everything with a direct geometric justification/description. I thought it might be useful; but I do agree that Servaes' answer is the more proper one.
    $endgroup$
    – Nominal Animal
    Jan 22 at 15:23






  • 1




    $begingroup$
    @ShellRox: I think it is a case of using the same variable to refer to different things. I see it often in math books, because authors assume readers will infer the meaning correctly from the context :(. In the context of this question, $E^1$ and $E^2$ are just some arbitrary vectors, that are being rotated by the linear transform $L$. In this notation, $L(E^1)$ and $L(E^2)$ are just those two vectors, rotated. The basis vectors before any rotation are $[1, 0]^T$ and $[0, 1]^T$. After rotation, they are $L([1,0]^T)$ and $L([0,1]^T)$, but these we can see directly from the matrix.
    $endgroup$
    – Nominal Animal
    Jan 22 at 16:09






  • 1




    $begingroup$
    The reverse (meaning if we consider our basis vectors $[1, 0]^T$ and $[0, 1]^T$ after the rotation) is $L^{-1}([1,0]^T)$ and $L^{-1}([0,1]^T)$. It is an arbitrary choice, because the inverse of any rotation is a valid rotation too. In general, $L(vec{v}) = mathbf{R}vec{v}$ and $L^{-1}(vec{v}) = mathbf{R}^{-1} vec{v} = mathbf{R}^Tvec{v}$, if that matters.
    $endgroup$
    – Nominal Animal
    Jan 22 at 16:12








  • 1




    $begingroup$
    And that hence general inverse linear mapping is from properties of orthogonal matrix. Also I see that every linear map associated with orthogonal matrices is isomorphism, injective map? Interesting...
    $endgroup$
    – ShellRox
    Jan 22 at 17:09






  • 1




    $begingroup$
@ShellRox: I personally find it useful to remember that the geometric interpretation of the $\hat{e}_1$, $\hat{e}_2$, $\hat{\epsilon}_1$, and $\hat{\epsilon}_2$ basis vectors also applies to all nondegenerate transformation matrices. That is, if you know how the basis vectors are transformed, you immediately have the transformation matrix, and can (easily, algebraically!) find the inverse transformation as well. For me, this has been an indispensable tool for simplifying problems like 3D trilateration.
    $endgroup$
    – Nominal Animal
    Jan 22 at 18:06
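A sketch of that recipe (my own, with made-up column vectors): write the images of the basis vectors as columns to get the matrix, then invert it algebraically.

```python
import numpy as np

# Suppose we know where the transform sends the basis vectors
# (hypothetical images, chosen only for illustration):
img_e1 = np.array([2.0, 1.0])   # image of [1, 0]^T
img_e2 = np.array([0.5, 3.0])   # image of [0, 1]^T

# The transformation matrix is simply those images written as columns.
A = np.column_stack([img_e1, img_e2])
assert np.allclose(A @ [1, 0], img_e1)
assert np.allclose(A @ [0, 1], img_e2)

# Nondegenerate (nonzero determinant) means the inverse transform exists;
# for an orthogonal A it would simply be the transpose.
A_inv = np.linalg.inv(A)
assert np.allclose(A_inv @ A, np.eye(2))
```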































