In Geometric Algebra, is there a geometric product between matrices?


























Thanks for your help in advance.



I have literally just started to self-study geometric algebra.



I have some coursework background in linear algebra and was trying to make an educational bridge between what I know and what I'm trying to learn.



My question: Is there a geometric product for matrices in geometric algebra, like there is a geometric product for vectors? If so, how would one compute the geometric product between matrices?



Thanks










      clifford-algebras geometric-algebras














asked Aug 15 '13 at 19:50 by New-to-GA (edited Aug 15 '13 at 20:25)






















          5 Answers






































Let me address this more on the side of how linear algebra is presented in some GA material.

In traditional linear algebra, you use a lot of matrices and column/row vectors because this gives you an easy way to compute the action of a linear map or operator on a vector. What I want to emphasize is that this is a representation. It's a way of talking about linear maps, but it's not the only way.

In GA, there are reasons we don't often use matrices explicitly. One reason is that we have a natural extension of a linear operator to all kinds of blades, not just vectors. If you have a linear operator $\underline T$, and you want to compute its action on a bivector $a \wedge b$ with matrices, you would have to compute a totally different matrix from the one you would use just considering $\underline T$ acting on a vector (this matrix's components would describe its action on basis bivectors, not basis vectors). This is one reason why matrices become rather useless.

Thus, since we tend to look at linear maps and operators merely as linear functions, we have to develop ways to talk about common linear algebra concepts without reference to matrices at all. This is how we talk about a basis-independent definition of the determinant using the pseudoscalar $I$, saying $\underline T(I) = I \det \underline T$ for instance. Texts on GA and GC also develop ways to talk about traces and other interesting linear algebra concepts without reference to matrices.

With all that in mind, since we don't talk about matrices when doing linear algebra in GA, we don't have to think about geometric products of matrices. We just talk about compositions of maps (which would be represented through matrix multiplication) when applying several maps in succession.
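To make the two points above concrete (the induced action on bivectors, and the determinant as the scaling of the pseudoscalar), here is a small numerical sketch of my own, not part of the original answer, assuming only numpy. In $\Bbb R^3$ a bivector $a\wedge b$ can be stored by its three components, which are exactly the components of the cross product, so the induced matrix on bivectors is easy to build and compare with $T$ itself.

    import numpy as np

    T = np.array([[2., 1., 0.],
                  [0., 3., 1.],
                  [1., 0., 1.]])

    def induced_bivector_matrix(T):
        """Matrix of T's outermorphism on the basis bivectors e2^e3, e3^e1, e1^e2."""
        e = np.eye(3)
        pairs = [(1, 2), (2, 0), (0, 1)]          # index pairs of the basis bivectors
        cols = [np.cross(T @ e[i], T @ e[j]) for i, j in pairs]
        return np.column_stack(cols)

    B = induced_bivector_matrix(T)
    print(B)                                       # not the same matrix as T
    # classical identity: the induced matrix is the cofactor matrix det(T) * inv(T).T
    print(np.allclose(B, np.linalg.det(T) * np.linalg.inv(T).T))       # True

    # on the pseudoscalar (grade 3): T(e1)^T(e2)^T(e3) = det(T) * e1^e2^e3
    triple = np.dot(np.cross(T[:, 0], T[:, 1]), T[:, 2])
    print(np.isclose(triple, np.linalg.det(T)))                         # True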






answered Aug 15 '13 at 21:59 by Muphrid











































I think you're giving undue distinction to matrices.

Matrices are, after all, just fancily written vectors with $n^2$ entries. You can use the vector space $M_n(\Bbb R)$ and develop a geometric algebra containing it, but it would be the same as taking $\Bbb R^{n^2}$ with the same bilinear product and developing that geometric algebra.

The important thing about the geometric algebra is that you are taking the metric vector space $V$ that you're interested in and generating an algebra around it that has nice properties that we find useful. Nobody cares if the vectors are shaped like squares or hieroglyphs or ninja throwing stars, the only thing we care about is that it's a vector space with an inner product.

In case you are still looking for more materials on geometric algebra, you might find things with the Clifford-algebras tag useful, and solutions there, especially this one and also maybe this one. I found Alan Macdonald's online introductory stuff very helpful.
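Returning to the metric point above, here is a tiny illustration of my own (not the answerer's; numpy assumed): giving $M_2(\Bbb R)$ the Frobenius inner product makes it metrically indistinguishable from $\Bbb R^4$ with the ordinary dot product, so any geometric algebra built on it is just the geometric algebra of $\Bbb R^4$.

    import numpy as np

    rng = np.random.default_rng(0)
    A, B = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))

    frobenius = np.trace(A.T @ B)                  # an inner product on 2x2 matrices
    as_vectors = A.reshape(-1) @ B.reshape(-1)     # the same number, computed in R^4
    print(np.isclose(frobenius, as_vectors))       # True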






answered Aug 15 '13 at 19:55 by rschwieb (edited Apr 13 '17 at 12:20)























• Thanks, I never considered developing the geometric algebra from matrices. I understand your comment, but it might be an interesting and informative exercise. I'll take a look at the introductory material in your links. Again, I appreciate your response and so quickly too. I'm sure I'll be back with lots of questions as I study. – New-to-GA, Aug 15 '13 at 20:17

• Matrices are not just $n^2$-dimensional vectors. They have multiplicative structure. That can (mostly) be captured by geometric algebra; see my answer. – mr_e_man, Nov 21 '18 at 6:45

































The only thing that is required to form matrices of multivectors is to take care to retain the ordering of any products, so if you have $ A = [a_{ij}] $ and $ B = [b_{ij}] $, where the matrix elements are multivector expressions, then your product is

$$A B = \begin{bmatrix}\sum_k a_{ik} b_{kj}\end{bmatrix},$$

and not

$$A B = \begin{bmatrix}\sum_k b_{kj} a_{ik}\end{bmatrix}.$$

Such matrices can occur naturally when factoring certain multivector expressions. See, for example, the chapter "Spherical polar pendulum for one and multiple masses (Take II)", where multivector matrix factors were used to express the Lagrangian for a chain of $N$ spherical pendulums.
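Here is a minimal runnable sketch of that rule (my own code, not from the answer). Multivectors of the Euclidean plane algebra $Cl(2)$ are stored as coefficient tuples (scalar, $e_1$, $e_2$, $e_{12}$), gp is their geometric product, and the two matrix products below differ precisely because the entries do not commute.

    def gp(x, y):
        """Geometric product in Cl(2); x, y are (scalar, e1, e2, e12) tuples,
        with e1*e1 = e2*e2 = 1 and e12 = e1*e2."""
        s, a1, a2, b = x
        t, c1, c2, d = y
        return (s*t + a1*c1 + a2*c2 - b*d,
                s*c1 + a1*t - a2*d + b*c2,
                s*c2 + a2*t + a1*d - b*c1,
                s*d + b*t + a1*c2 - a2*c1)

    def madd(x, y):
        return tuple(p + q for p, q in zip(x, y))

    def msum(terms):
        out = (0.0, 0.0, 0.0, 0.0)
        for t in terms:
            out = madd(out, t)
        return out

    def matmul(A, B):
        """Matrix product [sum_k a_ik b_kj]; the factor order inside gp matters."""
        n = len(A)
        return [[msum(gp(A[i][k], B[k][j]) for k in range(n)) for j in range(n)]
                for i in range(n)]

    one = (1., 0., 0., 0.)
    e1  = (0., 1., 0., 0.)
    e2  = (0., 0., 1., 0.)
    e12 = (0., 0., 0., 1.)

    A = [[e1, e12], [one, e2]]
    B = [[e2, one], [e1, e12]]

    right = matmul(A, B)
    # the "wrong" product, with the factors swapped inside each entry:
    wrong = [[msum(gp(B[k][j], A[i][k]) for k in range(2)) for j in range(2)]
             for i in range(2)]

    print(right[0][0])   # (0.0, 0.0, -1.0, 1.0)  i.e.  -e2 + e12
    print(wrong[0][0])   # (0.0, 0.0, 1.0, -1.0)  i.e.   e2 - e12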






answered Oct 31 '16 at 14:19 by Peeter Joot











































There are actually two ways to do this (in addition to the other answers'). They both use the same background, as follows.

Given an $n$-dimensional real vector space $V$, we can construct a $2n$-dimensional space $V\oplus V^*$, using the dual space $V^*$ (the set of all linear functions from $V$ to $\mathbb R$). Define a dot product on $V\oplus V^*$ by

$$(a+\alpha)\cdot(b+\beta)=a\cdot\beta+\alpha\cdot b=\beta(a)+\alpha(b)$$

where $a\in V,\alpha\in V^*,b\in V,\beta\in V^*$. Thus the dot product of any two vectors in $V$ is $0$ (so we don't have an "inner product" or "metric tensor" on $V$.)

Take a basis $\{e_i\}=\{e_1,e_2,\cdots,e_n\}$ for $V$, and the dual basis $\{\varepsilon^i\}$ for $V^*$, satisfying $\varepsilon^i\cdot e_i=1$ and otherwise $\varepsilon^i\cdot e_j=0$. These together form a basis for $V\oplus V^*$. We can make a different basis $\{\sigma_i,\tau_i\}$, defined by

$$\sigma_i=\frac{e_i+\varepsilon^i}{\sqrt2},\qquad\tau_i=\frac{e_i-\varepsilon^i}{\sqrt2}.$$

(If you want to avoid $\sqrt2$ for some reason (like using $\mathbb Q$ as the scalar field), then define $\sigma_i=\frac12e_i+\varepsilon^i,\;\tau_i=\frac12e_i-\varepsilon^i$. The result is the same.)

It can be seen that $\sigma_i\cdot\tau_j=0$, and $\sigma_i\cdot\sigma_i=1=-\tau_i\cdot\tau_i$ and otherwise $\sigma_i\cdot\sigma_j=0=\tau_i\cdot\tau_j$. So we have an orthonormal basis of $n$ vectors $\sigma_i$ squaring to ${^+}1$ and $n$ vectors $\tau_i$ squaring to ${^-}1$, showing that $V\oplus V^*$ is isomorphic to the pseudo-Euclidean space $\mathbb R^{n,n}$.
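A quick numerical check of that setup (a sketch of mine, assuming numpy; nothing here comes from the original post): in the basis $(e_1,\dots,e_n,\varepsilon^1,\dots,\varepsilon^n)$ the dot product above has Gram matrix with zero diagonal blocks and identity off-diagonal blocks, and changing to the $\sigma_i,\tau_i$ basis diagonalizes it to $\mathrm{diag}(+1,\dots,+1,-1,\dots,-1)$, i.e. signature $(n,n)$.

    import numpy as np

    n = 3
    I = np.eye(n)
    Z = np.zeros((n, n))
    G = np.block([[Z, I],
                  [I, Z]])            # Gram matrix of the dot product in the {e_i, eps^i} basis

    # rows of P give sigma_i = (e_i + eps^i)/sqrt(2) and tau_i = (e_i - eps^i)/sqrt(2)
    P = np.block([[I, I],
                  [I, -I]]) / np.sqrt(2)

    print(np.round(P @ G @ P.T, 12))  # diag(1, 1, 1, -1, -1, -1): the metric of R^{3,3}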





Method 1: Bivectors

Any $n\times n$ matrix (or linear transformation on $V$) can be represented by a bivector in the geometric algebra over $V\oplus V^*$. Given the scalar components $M^i\!_j$ of a matrix, the corresponding bivector is

$$M=\sum_{i,j}M^i\!_j\,e_i\wedge\varepsilon^j.$$

For example, with $n=2$, we would have

$$M=\begin{pmatrix}M^1\!_1e_1\wedge\varepsilon^1+M^1\!_2e_1\wedge\varepsilon^2 \\ +M^2\!_1e_2\wedge\varepsilon^1+M^2\!_2e_2\wedge\varepsilon^2 \end{pmatrix}\cong\begin{bmatrix}M^1\!_1 & M^1\!_2 \\ M^2\!_1 & M^2\!_2\end{bmatrix}.$$

The transformation applying to a vector $a=\sum_ia^ie_i$ is

$$a\mapsto M\bullet a=M\,\llcorner\,a=M\times a=-a\bullet M$$

$$=\sum_{i,j,k}M^i\!_ja^k(e_i\wedge\varepsilon^j)\bullet e_k$$

$$=\sum_{i,j,k}M^i\!_ja^k\big(e_i(\varepsilon^j\cdot e_k)-(e_i\cdot e_k)\varepsilon^j\big)$$

$$=\sum_{i,j,k}M^i\!_ja^k\big(e_i(\delta^j_k)-0\big)$$

$$=\sum_{i,j}M^i\!_ja^je_i.$$

There I used the bac-cab identity $(a\wedge b)\bullet c=a(b\cdot c)-(a\cdot c)b$, and the products $\bullet,\llcorner,\times$ defined here.



(Now, much of the remainder of this post is about a single bivector. For the product of two bivectors, you may skip to the highlighted equation.)

The pullback/adjoint transformation on $V^*$ is $\alpha\mapsto\alpha\bullet M=-M\bullet\alpha=\sum_{i,j}\alpha_iM^i\!_j\varepsilon^j$. This relates to ordinary matrix multiplication, in that row vectors go on the left, vs column vectors on the right. Also relevant is the multivector identity $(A\,\lrcorner\,B)\,\llcorner\,C=A\,\lrcorner\,(B\,\llcorner\,C)$, which implies $(\alpha\bullet M)\cdot b=\alpha\cdot(M\bullet b)$. This relates to the associativity of matrix multiplication, or the definition of the adjoint.
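In plain matrix terms, that last identity is just associativity with the row vector kept on the left, $(\alpha M)b=\alpha(Mb)$. A tiny aside of my own, assuming numpy:

    import numpy as np

    M = np.array([[2., 1.], [3., 4.]])
    alpha = np.array([1., -2.])            # a covector, kept on the left as a row vector
    b = np.array([5., 7.])                 # an ordinary column vector

    print(np.isclose((alpha @ M) @ b, alpha @ (M @ b)))   # True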





The outermorphism can be calculated using the exterior powers of $M$:

$$(M\bullet a)\wedge(M\bullet b)=\frac{M\wedge M}{2}\bullet(a\wedge b)$$

$$(M\bullet a)\wedge(M\bullet b)\wedge(M\bullet c)=\frac{M\wedge M\wedge M}{6}\bullet(a\wedge b\wedge c)$$

$$(M\bullet a_1)\wedge(M\bullet a_2)\wedge\cdots\wedge(M\bullet a_n)=\frac{1(\wedge M)^n}{n!}\bullet(a_1\wedge a_2\wedge\cdots\wedge a_n)$$

$$=\frac{M\wedge M\wedge\cdots\wedge M}{1\;\cdot\;2\;\cdot\;\cdots\;\cdot\;n}\bullet(a_1\wedge a_2\wedge\cdots\wedge a_n)$$

(This notation, $1(\wedge M)^n$, is sometimes replaced with $\wedge^nM$ or $M^{\wedge n}$, but those don't look right to me.)



I'll prove the trivector case; the others are similar. I'll use the identities $A\,\llcorner\,(B\wedge C)=(A\,\llcorner\,B)\,\llcorner\,C$, and $a\,\lrcorner\,(B\wedge C)=(a\,\lrcorner\,B)\wedge C+(-1)^kB\wedge(a\,\lrcorner\,C)$ when $a$ has grade $1$ and $B$ has grade $k$.

$$\frac{M\wedge M\wedge M}{6}\bullet(a\wedge b\wedge c)$$

$$=\bigg(\frac{M\wedge M\wedge M}{6}\bullet a\bigg)\bullet(b\wedge c)$$

$$=\bigg(\frac{M\wedge M\wedge(M\bullet a)+M\wedge(M\bullet a)\wedge M+(M\bullet a)\wedge M\wedge M}{6}\bigg)\bullet(b\wedge c)$$

(bivector $\wedge$ is commutative, so these are all the same)

$$=\bigg(\frac{(M\bullet a)\wedge M\wedge M}{2}\bigg)\bullet(b\wedge c)$$

$$=\bigg(\frac{(M\bullet a)\wedge M\wedge M}{2}\bullet b\bigg)\bullet c$$

$$=\bigg(\frac{(M\bullet a)\wedge M\wedge(M\bullet b)+(M\bullet a)\wedge(M\bullet b)\wedge M+\big((M\bullet a)\cdot b\big)\wedge M\wedge M}{2}\bigg)\bullet c$$

(remember, all vectors in $V$ are orthogonal, so $(M\bullet a)\cdot b=0$ )

$$=\Big((M\bullet a)\wedge(M\bullet b)\wedge M\Big)\bullet c$$

$$=(M\bullet a)\wedge(M\bullet b)\wedge(M\bullet c)+(M\bullet a)\wedge\big((M\bullet b)\cdot c\big)\wedge M+\big((M\bullet a)\cdot c\big)\wedge(M\bullet b)\wedge M$$

$$=(M\bullet a)\wedge(M\bullet b)\wedge(M\bullet c).$$



This provides a formula for the determinant. Take the $n$-blade $E=e_1\wedge e_2\wedge\cdots\wedge e_n=e_1e_2\cdots e_n$. (This is basis-dependent, though unique up to a scalar.) Then

$$\frac{1(\wedge M)^n}{n!}\bullet E=(\det M)E.$$

And, using the commutator identity $A\times(BC)=(A\times B)C+B(A\times C)$, we find the trace:

$$ME=M\,\lrcorner\,E+M\times E+M\wedge E=0+M\times E+0$$

$$=(M\times e_1)e_2\cdots e_n+e_1(M\times e_2)\cdots e_n+\cdots+e_1e_2\cdots(M\times e_n)$$

$$=\Big(\sum_iM^i\!_1e_i\Big)e_2\cdots e_n+e_1\Big(\sum_iM^i\!_2e_i\Big)\cdots e_n+\cdots+e_1e_2\cdots\Big(\sum_iM^i\!_ne_i\Big)$$

(most of the terms disappear because $e_ie_i=0$ )

$$=(M^1\!_1e_1)e_2\cdots e_n+e_1(M^2\!_2e_2)\cdots e_n+\cdots+e_1e_2\cdots(M^n\!_ne_n)$$

$$=(M^1\!_1+M^2\!_2+\cdots+M^n\!_n)e_1e_2\cdots e_n=(\text{tr}\,M)E.$$
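Here is a self-contained numerical check of the determinant and trace formulas for $n=2$ (my own sketch, not from the post; numpy assumed). The small multivector helper below is written just for this snippet: blades are sorted index tuples over an orthonormal basis $\sigma_1,\sigma_2,\tau_1,\tau_2$ of $\mathbb R^{2,2}$, multivectors are {blade: coefficient} dicts, and I read the post's $\bullet$ as the grade-$|r-s|$ part of the geometric product.

    import numpy as np

    METRIC = [1, 1, -1, -1]                  # sigma_1, sigma_2, tau_1, tau_2

    def bprod(a, b):
        """Geometric product of two basis blades -> (sign, blade)."""
        arr, sign = list(a) + list(b), 1.0
        for _ in range(len(arr)):            # bubble sort; each adjacent swap anticommutes
            for j in range(len(arr) - 1):
                if arr[j] > arr[j + 1]:
                    arr[j], arr[j + 1] = arr[j + 1], arr[j]
                    sign = -sign
        out, i = [], 0
        while i < len(arr):                  # contract repeated indices: f_k f_k = METRIC[k]
            if i + 1 < len(arr) and arr[i] == arr[i + 1]:
                sign *= METRIC[arr[i]]
                i += 2
            else:
                out.append(arr[i])
                i += 1
        return sign, tuple(out)

    def gp(A, B):
        """Geometric product of multivectors stored as {blade: coeff} dicts."""
        out = {}
        for ba, ca in A.items():
            for bb, cb in B.items():
                s, blade = bprod(ba, bb)
                out[blade] = out.get(blade, 0.0) + s * ca * cb
        return {k: v for k, v in out.items() if abs(v) > 1e-12}

    def grade(A, k): return {b: c for b, c in A.items() if len(b) == k}
    def smul(s, A):  return {b: s * c for b, c in A.items()}
    def add(*Ms):
        out = {}
        for A in Ms:
            for b, c in A.items():
                out[b] = out.get(b, 0.0) + c
        return {k: v for k, v in out.items() if abs(v) > 1e-12}
    def close(A, B):
        d = add(A, smul(-1.0, B))
        return all(abs(v) < 1e-9 for v in d.values())
    def wedge(A, B, k): return grade(gp(A, B), k)

    r = 2 ** -0.5                            # null basis e_i, eps^i written in the sigma/tau basis
    e   = [{(0,): r, (2,): r},  {(1,): r, (3,): r}]
    eps = [{(0,): r, (2,): -r}, {(1,): r, (3,): -r}]

    Mmat = np.array([[2., 1.], [3., 4.]])
    M = add(*[smul(Mmat[i, j], wedge(e[i], eps[j], 2)) for i in range(2) for j in range(2)])
    E = wedge(e[0], e[1], 2)                 # E = e_1 ^ e_2

    # the bivector acting on a vector reproduces the ordinary matrix-vector product
    a = np.array([5., -2.])
    avec = add(smul(a[0], e[0]), smul(a[1], e[1]))
    Ma = Mmat @ a
    print(close(grade(gp(M, avec), 1), add(smul(Ma[0], e[0]), smul(Ma[1], e[1]))))   # True

    # (M ^ M)/2 contracted onto E gives det(M) E, and the full product M E gives tr(M) E
    W = smul(0.5, grade(gp(M, M), 4))
    print(close(grade(gp(W, E), 2), smul(np.linalg.det(Mmat), E)))                   # True
    print(close(gp(M, E), smul(np.trace(Mmat), E)))                                  # True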



More generally, the characteristic polynomial coefficients are determined by the geometric product

$$\frac{1(\wedge M)^k}{k!}E=c_kE.$$

These can be combined into (a variant of) the polynomial itself. With the exterior exponential defined by

$$\exp\!\wedge(A)=\sum_k\frac{1(\wedge A)^k}{k!}=1+A+\frac{A\wedge A}2+\frac{A\wedge A\wedge A}{6}+\cdots,$$

we have

$$\big(\exp\!\wedge(tM)\big)E=\Big(\sum_kc_kt^k\Big)E=\big(1+(\text{tr}\,M)t+c_2t^2+\cdots+(\det M)t^n\big)E$$

$$=t^n\bigg(\frac{1}{t^n}+\frac{\text{tr}\,M}{t^{n-1}}+\frac{c_2}{t^{n-2}}+\cdots+\frac{\det M}{1}\bigg)E.$$





The reverse of a multivector is $\tilde A=\sum_k(-1)^{k(k-1)/2}\langle A\rangle_k$; the reverse of a product is $(AB)^\sim=\tilde B\tilde A$. It can be shown that the scalar product of two blades, with one reversed, is the determinant of the matrix of dot products of the blades' component vectors. For example, $(a_2\wedge a_1)\bullet(b_1\wedge b_2)=(a_1\cdot b_1)(a_2\cdot b_2)-(a_1\cdot b_2)(a_2\cdot b_1)$.

Given the above, and the blades $E=e_1\cdots e_n$ and ${\cal E}=\varepsilon^1\cdots\varepsilon^n$, it follows that $E\bullet\tilde{\cal E}=1$. The full geometric product happens to be the exterior exponential $E\tilde{\cal E}=\exp\!\wedge K$, where $K=\sum_ie_i\wedge\varepsilon^i$ represents the identity transformation. So we can multiply this equation

$$\frac{1(\wedge M)^k}{k!}E=c_kE$$

by $\tilde{\cal E}$ to get

$$\frac{1(\wedge M)^k}{k!}\exp\!\wedge K=c_k\exp\!\wedge K$$

and take the scalar part, to isolate the polynomial coefficients

$$\frac{1(\wedge M)^k}{k!}\bullet\frac{1(\wedge K)^k}{k!}=c_k.$$

Or, multiply the $\exp\!\wedge(tM)$ equation by $\tilde{\cal E}$ to get

$$\big(\exp\!\wedge(tM)\big)\exp\!\wedge K=\Big(\sum_kc_kt^k\Big)\exp\!\wedge K.$$

This can be wedged with $\exp\!\wedge(-K)$ to isolate the polynomial, because $(\exp\!\wedge A)\wedge(\exp\!\wedge B)=\exp\!\wedge(A+B)$ if $A$ or $B$ has even grade.

We also have the adjugate, which can be used to calculate the matrix inverse:

$$\frac{1(\wedge M)^{n-1}}{(n-1)!}\bullet\frac{1(\wedge K)^n}{n!}=\text{adj}\,M.$$





The geometric product of two transformation bivectors, $M$ and $N$, has three parts (with grades $0,2,4$); each one is significant.

$$MN=M\bullet N+M\times N+M\wedge N$$




The first part is the trace of the matrix product:

$$M\bullet N=\sum_{i,j,k,l}M^i\!_jN^k\!_l(e_i\wedge\varepsilon^j)\bullet(e_k\wedge\varepsilon^l)$$

$$=\sum_{i,j,k,l}M^i\!_jN^k\!_l(\delta^j_k\delta^l_i)$$

$$=\sum_{i,j}M^i\!_jN^j\!_i=\text{tr}(M\boxdot N).$$



The second part is the commutator of matrix products:

$$M\times N=\sum_{i,j,k,l}M^i\!_jN^k\!_l(e_i\wedge\varepsilon^j)\times(e_k\wedge\varepsilon^l)$$

$$=\sum_{i,j,k,l}M^i\!_jN^k\!_l(\delta^j_ke_i\wedge\varepsilon^l+\delta^l_i\varepsilon^j\wedge e_k)$$

$$=\sum_{i,j,l}M^i\!_jN^j\!_le_i\wedge\varepsilon^l-\sum_{j,k,l}N^k\!_lM^l\!_je_k\wedge\varepsilon^j=M\boxdot N-N\boxdot M.$$

(This can also be justified by Jacobi's identity $(M\times N)\times a=M\times(N\times a)-N\times(M\times a)$.)



The third part is similar to an outermorphism; when applied to a bivector from $V$, it produces

$$(M\wedge N)\bullet(a\wedge b)=(M\bullet a)\wedge(N\bullet b)+(N\bullet a)\wedge(M\bullet b).$$



Unfortunately, there doesn't seem to be a simple expression for the ordinary matrix product. This is the best I could find, again using $K=\sum_ie_i\wedge\varepsilon^i$:

$$M\boxdot N=\frac{M\times N+(M\bullet K)N+(N\bullet K)M-(M\wedge N)\bullet K}{2}=\sum_{i,j,k}M^i\!_jN^j\!_ke_i\wedge\varepsilon^k$$

Note that $M\bullet K=\text{tr}\,M$. And, of course, we have the defining relation $(M\boxdot N)\bullet a=M\bullet(N\bullet a)$.

(That formula is unnecessary for transformations between different spaces, say $V$ and $W$. Using the geometric algebra over $V\oplus V^*\oplus W\oplus W^*$, with basis $\{e_i,\varepsilon^i,f_i,\phi^i\}$, if $M=\sum_{i,j}M^i\!_je_i\wedge\varepsilon^j$ maps $V$ to itself, and $N=\sum_{i,j}N^i\!_je_i\wedge\phi^j$ maps $W$ to $V$, then the matrix product is simply $M\boxdot N=M\times N$.)
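The three-part split above can also be checked numerically. The sketch below is mine (not from the post) and repeats the same minimal multivector helper as in the earlier snippet so that it runs on its own; for two concrete $2\times2$ matrices it confirms that the scalar part of $MN$ is $\text{tr}(M\boxdot N)$ and that the commutator product $M\times N$ is the bivector of the matrix commutator.

    import numpy as np

    METRIC = [1, 1, -1, -1]                  # sigma_1, sigma_2, tau_1, tau_2

    def bprod(a, b):
        """Geometric product of two basis blades -> (sign, blade)."""
        arr, sign = list(a) + list(b), 1.0
        for _ in range(len(arr)):
            for j in range(len(arr) - 1):
                if arr[j] > arr[j + 1]:
                    arr[j], arr[j + 1] = arr[j + 1], arr[j]
                    sign = -sign
        out, i = [], 0
        while i < len(arr):
            if i + 1 < len(arr) and arr[i] == arr[i + 1]:
                sign *= METRIC[arr[i]]
                i += 2
            else:
                out.append(arr[i])
                i += 1
        return sign, tuple(out)

    def gp(A, B):
        out = {}
        for ba, ca in A.items():
            for bb, cb in B.items():
                s, blade = bprod(ba, bb)
                out[blade] = out.get(blade, 0.0) + s * ca * cb
        return {k: v for k, v in out.items() if abs(v) > 1e-12}

    def grade(A, k): return {b: c for b, c in A.items() if len(b) == k}
    def smul(s, A):  return {b: s * c for b, c in A.items()}
    def add(*Ms):
        out = {}
        for A in Ms:
            for b, c in A.items():
                out[b] = out.get(b, 0.0) + c
        return {k: v for k, v in out.items() if abs(v) > 1e-12}
    def close(A, B):
        d = add(A, smul(-1.0, B))
        return all(abs(v) < 1e-9 for v in d.values())

    r = 2 ** -0.5
    e   = [{(0,): r, (2,): r},  {(1,): r, (3,): r}]
    eps = [{(0,): r, (2,): -r}, {(1,): r, (3,): -r}]

    def biv(A):
        """Bivector representing the 2x2 matrix A."""
        return add(*[smul(A[i, j], grade(gp(e[i], eps[j]), 2))
                     for i in range(2) for j in range(2)])

    Mmat = np.array([[2., 1.], [3., 4.]])
    Nmat = np.array([[0., 1.], [-1., 2.]])
    M, N = biv(Mmat), biv(Nmat)

    MN = gp(M, N)
    print(np.isclose(MN.get((), 0.0), np.trace(Mmat @ Nmat)))            # True: grade-0 part
    comm = smul(0.5, add(gp(M, N), smul(-1.0, gp(N, M))))                # the commutator product
    print(close(comm, biv(Mmat @ Nmat - Nmat @ Mmat)))                   # True: grade-2 part
    print(grade(MN, 4))                                                  # the M ^ N piece (grade 4)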





Method 2: Rotors

Any general linear transformation on $V$ can be represented by a rotor $R=r_{2k}r_{2k-1}\cdots r_2r_1$, a geometric product of an even number of invertible vectors in $V\oplus V^*$. Each vector squares to a positive or negative number. If the numbers of positive and negative vectors are both even, then the transformation's determinant is positive; if they're both odd, then the determinant is negative. The transformation is done by the "sandwich product"

$$a\mapsto RaR^{-1}=r_{2k}\cdots r_2r_1ar_1^{-1}r_2^{-1}\cdots r_{2k}^{-1}.$$

Any such transformation respects the geometric product: $(RAR^{-1})(RBR^{-1})=R(AB)R^{-1}$; in particular, for vectors, $(RaR^{-1})\cdot(RbR^{-1})=R(a\cdot b)R^{-1}=a\cdot b$, and $(RaR^{-1})\wedge(RbR^{-1})=R(a\wedge b)R^{-1}$. So the outermorphism uses the same formula for an arbitrary multivector: $A\mapsto RAR^{-1}$.

The composition of two transformations, with rotors $R$ and $S$, is represented by the geometric product $RS$:

$$a\mapsto R(SaS^{-1})R^{-1}=(RS)a(RS)^{-1}.$$





Here are some examples, using $\sigma_i=(e_i+\varepsilon^i)/\sqrt2,\;\tau_i=(e_i-\varepsilon^i)/\sqrt2$, and

$$a=\sum_ia^ie_i=a^1\frac{\sigma_1+\tau_1}{\sqrt2}+a^2\frac{\sigma_2+\tau_2}{\sqrt2}+\cdots+a^n\frac{\sigma_n+\tau_n}{\sqrt2}.$$



Reflection along $e_1$:

$$R=\tau_1\sigma_1=e_1\wedge\varepsilon^1$$

$$RaR^{-1}=a^1\frac{-\sigma_1-\tau_1}{\sqrt2}+a^2\frac{\sigma_2+\tau_2}{\sqrt2}+\cdots+a^n\frac{\sigma_n+\tau_n}{\sqrt2}$$

$$=-a^1e_1+a^2e_2+\cdots+a^ne_n$$



Stretching by factor $\exp\theta$ along $e_1$:

$$R=\exp\Big(\frac\theta2\tau_1\sigma_1\Big)=\cosh\frac\theta2+\tau_1\sigma_1\sinh\frac\theta2$$

$$=\Big(\sigma_1\cosh\frac\theta2+\tau_1\sinh\frac\theta2\Big)\sigma_1$$

$$RaR^{-1}=a^1\frac{(\sigma_1\cosh\theta+\tau_1\sinh\theta)+(\tau_1\cosh\theta+\sigma_1\sinh\theta)}{\sqrt2}+a^2\frac{\sigma_2+\tau_2}{\sqrt2}+\cdots+a^n\frac{\sigma_n+\tau_n}{\sqrt2}$$

$$=a^1e_1\exp\theta+a^2e_2+\cdots+a^ne_n$$



Circular rotation by $\theta$ from $e_1$ towards $e_2$ (note that $\sigma_2\sigma_1$ commutes with $\tau_2\tau_1$, and both square to $-1$ so Euler's formula applies):

$$R=\exp\Big(\frac\theta2(\sigma_2\sigma_1-\tau_2\tau_1)\Big)=\exp\Big(\frac\theta2\sigma_2\sigma_1\Big)\exp\Big(-\frac\theta2\tau_2\tau_1\Big)$$

$$=\Big(\sigma_1\cos\frac\theta2+\sigma_2\sin\frac\theta2\Big)\sigma_1\Big(-\tau_1\cos\frac\theta2-\tau_2\sin\frac\theta2\Big)\tau_1$$

$$RaR^{-1}=a^1\frac{(\sigma_1\cos\theta+\sigma_2\sin\theta)+(\tau_1\cos\theta+\tau_2\sin\theta)}{\sqrt2}+a^2\frac{(-\sigma_1\sin\theta+\sigma_2\cos\theta)+(-\tau_1\sin\theta+\tau_2\cos\theta)}{\sqrt2}+a^3\frac{\sigma_3+\tau_3}{\sqrt2}+\cdots+a^n\frac{\sigma_n+\tau_n}{\sqrt2}$$

$$=a^1(e_1\cos\theta+e_2\sin\theta)+a^2(-e_1\sin\theta+e_2\cos\theta)+a^3e_3+\cdots+a^ne_n$$



Hyperbolic rotation by $\theta$ from $e_1$ towards $e_2$:

$$R=\exp\Big(\frac\theta2(\tau_2\sigma_1-\sigma_2\tau_1)\Big)=\exp\Big(\frac\theta2\tau_2\sigma_1\Big)\exp\Big(-\frac\theta2\sigma_2\tau_1\Big)$$

$$=\Big(\sigma_1\cosh\frac\theta2+\tau_2\sinh\frac\theta2\Big)\sigma_1\Big(-\tau_1\cosh\frac\theta2-\sigma_2\sinh\frac\theta2\Big)\tau_1$$

$$RaR^{-1}=a^1\frac{(\sigma_1\cosh\theta+\tau_2\sinh\theta)+(\tau_1\cosh\theta+\sigma_2\sinh\theta)}{\sqrt2}+a^2\frac{(\tau_1\sinh\theta+\sigma_2\cosh\theta)+(\sigma_1\sinh\theta+\tau_2\cosh\theta)}{\sqrt2}+a^3\frac{\sigma_3+\tau_3}{\sqrt2}+\cdots+a^n\frac{\sigma_n+\tau_n}{\sqrt2}$$

$$=a^1(e_1\cosh\theta+e_2\sinh\theta)+a^2(e_1\sinh\theta+e_2\cosh\theta)+a^3e_3+\cdots+a^ne_n$$



Shear by $\theta$ from $e_1$ towards $e_2$:

$$R=\exp\Big(\frac\theta2e_2\wedge\varepsilon^1\Big)=1+\frac\theta2e_2\wedge\varepsilon^1$$

$$=-\frac14\Big(e_1-\varepsilon^1+\frac\theta4e_2\Big)\Big(e_1-\varepsilon^1-\frac\theta4e_2\Big)\Big(e_1+\varepsilon^1+\frac\theta4e_2\Big)\Big(e_1+\varepsilon^1-\frac\theta4e_2\Big)$$

$$RaR^{-1}=a^1(e_1+\theta e_2)+a^2e_2+a^3e_3+\cdots+a^ne_n$$
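A last self-contained check, in the same spirit as the earlier sketches of mine (the minimal helper is repeated so this snippet runs on its own): the shear rotor $R=1+\frac\theta2e_2\wedge\varepsilon^1$ is invertible because $(e_2\wedge\varepsilon^1)^2=0$, and the sandwich $RaR^{-1}$ really produces $a^1(e_1+\theta e_2)+a^2e_2$.

    METRIC = [1, 1, -1, -1]                  # sigma_1, sigma_2, tau_1, tau_2

    def bprod(a, b):
        """Geometric product of two basis blades -> (sign, blade)."""
        arr, sign = list(a) + list(b), 1.0
        for _ in range(len(arr)):
            for j in range(len(arr) - 1):
                if arr[j] > arr[j + 1]:
                    arr[j], arr[j + 1] = arr[j + 1], arr[j]
                    sign = -sign
        out, i = [], 0
        while i < len(arr):
            if i + 1 < len(arr) and arr[i] == arr[i + 1]:
                sign *= METRIC[arr[i]]
                i += 2
            else:
                out.append(arr[i])
                i += 1
        return sign, tuple(out)

    def gp(A, B):
        out = {}
        for ba, ca in A.items():
            for bb, cb in B.items():
                s, blade = bprod(ba, bb)
                out[blade] = out.get(blade, 0.0) + s * ca * cb
        return {k: v for k, v in out.items() if abs(v) > 1e-12}

    def grade(A, k): return {b: c for b, c in A.items() if len(b) == k}
    def smul(s, A):  return {b: s * c for b, c in A.items()}
    def add(*Ms):
        out = {}
        for A in Ms:
            for b, c in A.items():
                out[b] = out.get(b, 0.0) + c
        return {k: v for k, v in out.items() if abs(v) > 1e-12}
    def close(A, B):
        d = add(A, smul(-1.0, B))
        return all(abs(v) < 1e-9 for v in d.values())

    r = 2 ** -0.5
    e   = [{(0,): r, (2,): r},  {(1,): r, (3,): r}]
    eps = [{(0,): r, (2,): -r}, {(1,): r, (3,): -r}]

    theta = 0.7
    B = grade(gp(e[1], eps[0]), 2)           # e_2 ^ eps^1; B*B = 0, so exp(theta/2 B) = 1 + (theta/2) B
    R    = add({(): 1.0}, smul(theta / 2, B))
    Rinv = add({(): 1.0}, smul(-theta / 2, B))
    print(gp(R, Rinv))                       # {(): 1.0}, so R is invertible as claimed

    a1, a2 = 5.0, -2.0
    avec = add(smul(a1, e[0]), smul(a2, e[1]))
    sheared = gp(gp(R, avec), Rinv)
    expected = add(smul(a1, e[0]), smul(a1 * theta + a2, e[1]))
    print(close(sheared, expected))          # True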





This post is too long...

Some of this is described in Doran, Hestenes, Sommen, & Van Acker's "Lie Groups as Spin Groups": http://geocalc.clas.asu.edu/html/GeoAlg.html . (Beware that $E,e$ have different meanings from mine, though $K$ is the same.)






answered by mr_e_man













































Linear algebra with its vectors and matrices is made entirely obsolete by Clifford algebra which provides a better way. Good riddance!

Gone is the awkward distinction between "row vectors" and "column vectors". In Clifford there is no distinction. And many weird abstract concepts become concrete spatial concepts that are easy to visualize. The determinant becomes a three-dimensional volume between three vectors, and the volume goes to zero as the vectors become parallel.

In Clifford algebra a matrix is just an array of vectors that span a space. The geometric product has two effects: rotation and scaling. So a geometric product of a matrix will tend to rotate and scale the geometric shape.

I find that the most useful and interesting aspect of Clifford algebra is to try to picture all algebraic relationships as spatial structures, or operations by spatial structures on other structures.























• "Linear algebra with its vectors and matrices is made entirely obsolete by Clifford algebra which provides a better way. Good riddance!" I find this overstated to the point of being somewhat irresponsible. Vectors and matrices are fundamental mathematical tools which are certainly not obsolete and not in danger of becoming so anytime soon. If e.g. you want to solve a linear system of equations -- a ubiquitous problem in pure and applied mathematics -- then you will want to use matrices and Gaussian reduction. How would you do this using Clifford algebras?? – Pete L. Clark, Aug 16 '13 at 15:27

• Also most mathematicians would regard, e.g., the exterior algebra as a core part of the theory of linear algebra, so e.g. the name of the course in which one would learn your description of determinants is "linear algebra". – Pete L. Clark, Aug 16 '13 at 15:29

• This is the sort of fanatical praise that seems to discolor whatever small reputation geometric algebra has attained. It is not likely to replace any part of linear algebra at all, but it is probably going to give rise to some interesting new explanations and illustrations for students. – rschwieb, Aug 16 '13 at 17:22













                Your Answer





                StackExchange.ifUsing("editor", function () {
                return StackExchange.using("mathjaxEditing", function () {
                StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
                StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
                });
                });
                }, "mathjax-editing");

                StackExchange.ready(function() {
                var channelOptions = {
                tags: "".split(" "),
                id: "69"
                };
                initTagRenderer("".split(" "), "".split(" "), channelOptions);

                StackExchange.using("externalEditor", function() {
                // Have to fire editor after snippets, if snippets enabled
                if (StackExchange.settings.snippets.snippetsEnabled) {
                StackExchange.using("snippets", function() {
                createEditor();
                });
                }
                else {
                createEditor();
                }
                });

                function createEditor() {
                StackExchange.prepareEditor({
                heartbeatType: 'answer',
                autoActivateHeartbeat: false,
                convertImagesToLinks: true,
                noModals: true,
                showLowRepImageUploadWarning: true,
                reputationToPostImages: 10,
                bindNavPrevention: true,
                postfix: "",
                imageUploader: {
                brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
                contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
                allowUrls: true
                },
                noCode: true, onDemand: true,
                discardSelector: ".discard-answer"
                ,immediatelyShowMarkdownHelp:true
                });


                }
                });














                draft saved

                draft discarded


















                StackExchange.ready(
                function () {
                StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f468532%2fin-geometric-algebra-is-there-a-geometric-product-between-matrices%23new-answer', 'question_page');
                }
                );

                Post as a guest















                Required, but never shown

























                5 Answers
                5






                active

                oldest

                votes








                5 Answers
                5






                active

                oldest

                votes









                active

                oldest

                votes






                active

                oldest

                votes









                4














                Let me address this more on the side of how linear algebra is presented in some GA material.



                In traditional linear algebra, you use a lot of matrices and column/row vectors because this gives you an easy way to compute the action of a linear map or operator on a vector. What I want to emphasize is that this is a representation. It's a way of talking about linear maps, but it's not the only way.



                In GA, there are reasons we don't often use matrices explicitly. One reason is that we have a natural extension of a linear operator to all kinds of blades, not just vectors. If you have a linear operator $underline T$, and you want to compute its action on a bivector $a wedge b$ with matrices, you would have to compute a totally different matrix from the one you would use just considering $underline T$ acting on a vector (this matrix's components would describe its action on basis bivectors, not basis vectors). This is one reason why matrices become rather useless.



                Thus, since we tend to look at linear maps and operators merely as linear functions, we have to develop ways to talk about common linear algebra concepts without reference to matrices at all. This is how we talk about a basis-independent of the determinant using the pseudoscalar $I$, saying $underline T(I) = I det underline T$ for instance. Texts on GA and GC also develop ways to talk about traces and other interesting linear algebra concepts without reference to matrices.



                With all that in mind, since we don't talk about matrices when doing linear algebra in GA, we don't have to think about geometric products of matrices. We just talk about compositions of maps (which would be represented through matrix multiplication) when applying several maps in succession.






                share|cite|improve this answer


























                  4














                  Let me address this more on the side of how linear algebra is presented in some GA material.



                  In traditional linear algebra, you use a lot of matrices and column/row vectors because this gives you an easy way to compute the action of a linear map or operator on a vector. What I want to emphasize is that this is a representation. It's a way of talking about linear maps, but it's not the only way.



                  In GA, there are reasons we don't often use matrices explicitly. One reason is that we have a natural extension of a linear operator to all kinds of blades, not just vectors. If you have a linear operator $underline T$, and you want to compute its action on a bivector $a wedge b$ with matrices, you would have to compute a totally different matrix from the one you would use just considering $underline T$ acting on a vector (this matrix's components would describe its action on basis bivectors, not basis vectors). This is one reason why matrices become rather useless.



                  Thus, since we tend to look at linear maps and operators merely as linear functions, we have to develop ways to talk about common linear algebra concepts without reference to matrices at all. This is how we talk about a basis-independent of the determinant using the pseudoscalar $I$, saying $underline T(I) = I det underline T$ for instance. Texts on GA and GC also develop ways to talk about traces and other interesting linear algebra concepts without reference to matrices.



                  With all that in mind, since we don't talk about matrices when doing linear algebra in GA, we don't have to think about geometric products of matrices. We just talk about compositions of maps (which would be represented through matrix multiplication) when applying several maps in succession.






                  share|cite|improve this answer
























                    4












                    4








                    4






                    Let me address this more on the side of how linear algebra is presented in some GA material.



                    In traditional linear algebra, you use a lot of matrices and column/row vectors because this gives you an easy way to compute the action of a linear map or operator on a vector. What I want to emphasize is that this is a representation. It's a way of talking about linear maps, but it's not the only way.



                    In GA, there are reasons we don't often use matrices explicitly. One reason is that we have a natural extension of a linear operator to all kinds of blades, not just vectors. If you have a linear operator $underline T$, and you want to compute its action on a bivector $a wedge b$ with matrices, you would have to compute a totally different matrix from the one you would use just considering $underline T$ acting on a vector (this matrix's components would describe its action on basis bivectors, not basis vectors). This is one reason why matrices become rather useless.



                    Thus, since we tend to look at linear maps and operators merely as linear functions, we have to develop ways to talk about common linear algebra concepts without reference to matrices at all. This is how we talk about a basis-independent of the determinant using the pseudoscalar $I$, saying $underline T(I) = I det underline T$ for instance. Texts on GA and GC also develop ways to talk about traces and other interesting linear algebra concepts without reference to matrices.



                    With all that in mind, since we don't talk about matrices when doing linear algebra in GA, we don't have to think about geometric products of matrices. We just talk about compositions of maps (which would be represented through matrix multiplication) when applying several maps in succession.






                    share|cite|improve this answer












                    Let me address this more on the side of how linear algebra is presented in some GA material.



                    In traditional linear algebra, you use a lot of matrices and column/row vectors because this gives you an easy way to compute the action of a linear map or operator on a vector. What I want to emphasize is that this is a representation. It's a way of talking about linear maps, but it's not the only way.



                    In GA, there are reasons we don't often use matrices explicitly. One reason is that we have a natural extension of a linear operator to all kinds of blades, not just vectors. If you have a linear operator $underline T$, and you want to compute its action on a bivector $a wedge b$ with matrices, you would have to compute a totally different matrix from the one you would use just considering $underline T$ acting on a vector (this matrix's components would describe its action on basis bivectors, not basis vectors). This is one reason why matrices become rather useless.



                    Thus, since we tend to look at linear maps and operators merely as linear functions, we have to develop ways to talk about common linear algebra concepts without reference to matrices at all. This is how we talk about a basis-independent of the determinant using the pseudoscalar $I$, saying $underline T(I) = I det underline T$ for instance. Texts on GA and GC also develop ways to talk about traces and other interesting linear algebra concepts without reference to matrices.



                    With all that in mind, since we don't talk about matrices when doing linear algebra in GA, we don't have to think about geometric products of matrices. We just talk about compositions of maps (which would be represented through matrix multiplication) when applying several maps in succession.







                    share|cite|improve this answer












                    share|cite|improve this answer



                    share|cite|improve this answer










                    answered Aug 15 '13 at 21:59









                    Muphrid

                    15.5k11541




                    15.5k11541























                        2














                        I think you're giving undue distinction to matrices.



                        Matrices are, after all, just fancily written vectors with $n^2$ entries. You can use the vector space $M_n(Bbb R)$ and develop a geometric algebra containing it, but it would be the same as taking $Bbb R^{n^2}$ with the same bilinear product and developing that geometric algebra.



                        The important thing about the geometric algebra is that you are taking the metric vector space $V$ that you're interested in and generating an algebra around it that has nice properties that we find useful. Nobody cares if the vectors are shaped like squares or hieroglyphs or ninja throwing stars, the only thing we care about is that it's a vector space with an inner product.





                        In case you are still looking for more materials on geometric algebra, you might find things with the Clifford-algebras tag useful, and solutions there, especially this one and also maybe this one. I found Alan Macdonald's online introductory stuff very helpful.






                        share|cite|improve this answer























                        • Thanks, I never considered developing the geometric algebra from matrices. I understand your comment, but it might be an interesting and informative exercise. I'll take a look at the introductory material in your links. Again, I appreciate your response and so quickly too. I'm sure I'll be back with lots of questions as I study.
                          – New-to-GA
                          Aug 15 '13 at 20:17










                        • Matrices are not just $n^2$-dimensional vectors. They have multiplicative structure. That can (mostly) be captured by geometric algebra; see my answer.
                          – mr_e_man
                          Nov 21 '18 at 6:45
















                        2














                        I think you're giving undue distinction to matrices.



                        Matrices are, after all, just fancily written vectors with $n^2$ entries. You can use the vector space $M_n(Bbb R)$ and develop a geometric algebra containing it, but it would be the same as taking $Bbb R^{n^2}$ with the same bilinear product and developing that geometric algebra.



                        The important thing about the geometric algebra is that you are taking the metric vector space $V$ that you're interested in and generating an algebra around it that has nice properties that we find useful. Nobody cares if the vectors are shaped like squares or hieroglyphs or ninja throwing stars, the only thing we care about is that it's a vector space with an inner product.





                        In case you are still looking for more materials on geometric algebra, you might find things with the Clifford-algebras tag useful, and solutions there, especially this one and also maybe this one. I found Alan Macdonald's online introductory stuff very helpful.






                        share|cite|improve this answer























                        • Thanks, I never considered developing the geometric algebra from matrices. I understand your comment, but it might be an interesting and informative exercise. I'll take a look at the introductory material in your links. Again, I appreciate your response and so quickly too. I'm sure I'll be back with lots of questions as I study.
                          – New-to-GA
                          Aug 15 '13 at 20:17










                        • Matrices are not just $n^2$-dimensional vectors. They have multiplicative structure. That can (mostly) be captured by geometric algebra; see my answer.
                          – mr_e_man
                          Nov 21 '18 at 6:45














                        2












                        2








                        2






                        I think you're giving undue distinction to matrices.



                        Matrices are, after all, just fancily written vectors with $n^2$ entries. You can use the vector space $M_n(Bbb R)$ and develop a geometric algebra containing it, but it would be the same as taking $Bbb R^{n^2}$ with the same bilinear product and developing that geometric algebra.



                        The important thing about the geometric algebra is that you are taking the metric vector space $V$ that you're interested in and generating an algebra around it that has nice properties that we find useful. Nobody cares if the vectors are shaped like squares or hieroglyphs or ninja throwing stars, the only thing we care about is that it's a vector space with an inner product.





                        In case you are still looking for more materials on geometric algebra, you might find things with the Clifford-algebras tag useful, and solutions there, especially this one and also maybe this one. I found Alan Macdonald's online introductory stuff very helpful.






                        share|cite|improve this answer














                        I think you're giving undue distinction to matrices.



                        Matrices are, after all, just fancily written vectors with $n^2$ entries. You can use the vector space $M_n(Bbb R)$ and develop a geometric algebra containing it, but it would be the same as taking $Bbb R^{n^2}$ with the same bilinear product and developing that geometric algebra.



                        The important thing about the geometric algebra is that you are taking the metric vector space $V$ that you're interested in and generating an algebra around it that has nice properties that we find useful. Nobody cares if the vectors are shaped like squares or hieroglyphs or ninja throwing stars, the only thing we care about is that it's a vector space with an inner product.





                        In case you are still looking for more materials on geometric algebra, you might find things with the Clifford-algebras tag useful, and solutions there, especially this one and also maybe this one. I found Alan Macdonald's online introductory stuff very helpful.







                        share|cite|improve this answer














                        share|cite|improve this answer



                        share|cite|improve this answer








                        edited Apr 13 '17 at 12:20









                        Community

                        1




                        1










                        answered Aug 15 '13 at 19:55









                        rschwieb

                        105k1299244




                        105k1299244












                        • Thanks, I never considered developing the geometric algebra from matrices. I understand your comment, but it might be an interesting and informative exercise. I'll take a look at the introductory material in your links. Again, I appreciate your response and so quickly too. I'm sure I'll be back with lots of questions as I study.
                          – New-to-GA
                          Aug 15 '13 at 20:17










                        • Matrices are not just $n^2$-dimensional vectors. They have multiplicative structure. That can (mostly) be captured by geometric algebra; see my answer.
                          – mr_e_man
                          Nov 21 '18 at 6:45


















                        • Thanks, I never considered developing the geometric algebra from matrices. I understand your comment, but it might be an interesting and informative exercise. I'll take a look at the introductory material in your links. Again, I appreciate your response and so quickly too. I'm sure I'll be back with lots of questions as I study.
                          – New-to-GA
                          Aug 15 '13 at 20:17










                        • Matrices are not just $n^2$-dimensional vectors. They have multiplicative structure. That can (mostly) be captured by geometric algebra; see my answer.
                          – mr_e_man
                          Nov 21 '18 at 6:45
















                        Thanks, I never considered developing the geometric algebra from matrices. I understand your comment, but it might be an interesting and informative exercise. I'll take a look at the introductory material in your links. Again, I appreciate your response and so quickly too. I'm sure I'll be back with lots of questions as I study.
                        – New-to-GA
                        Aug 15 '13 at 20:17




                        Thanks, I never considered developing the geometric algebra from matrices. I understand your comment, but it might be an interesting and informative exercise. I'll take a look at the introductory material in your links. Again, I appreciate your response and so quickly too. I'm sure I'll be back with lots of questions as I study.
                        – New-to-GA
                        Aug 15 '13 at 20:17












                        Matrices are not just $n^2$-dimensional vectors. They have multiplicative structure. That can (mostly) be captured by geometric algebra; see my answer.
                        – mr_e_man
                        Nov 21 '18 at 6:45




                        Matrices are not just $n^2$-dimensional vectors. They have multiplicative structure. That can (mostly) be captured by geometric algebra; see my answer.
                        – mr_e_man
                        Nov 21 '18 at 6:45











                        1














                        The only thing that is required to form matrices of multivectors is to take care to retain the ordering of any products, so if you have $ A = [a_{ij}] $ and $ B = [b_{ij}] $, where the matrix elements are multivector expressions, then your product is



                        $$A B = begin{bmatrix}sum_k a_{ik} b_{kj}end{bmatrix},$$



                        and not
                        $$A B = begin{bmatrix}sum_k b_{kj} a_{ik}end{bmatrix}.$$



                        Such matrices can occur naturally when factoring certain multivector expressions. See for example
                        chapter: Spherical polar pendulum for one and multiple masses (Take II), where multivector matrix factors were used to express
                        the Lagrangian for a chain of N spherical-pendulums.






                        share|cite|improve this answer


























                          1














                          The only thing that is required to form matrices of multivectors is to take care to retain the ordering of any products, so if you have $ A = [a_{ij}] $ and $ B = [b_{ij}] $, where the matrix elements are multivector expressions, then your product is



                          $$A B = begin{bmatrix}sum_k a_{ik} b_{kj}end{bmatrix},$$



                          and not
                          $$A B = begin{bmatrix}sum_k b_{kj} a_{ik}end{bmatrix}.$$



                          Such matrices can occur naturally when factoring certain multivector expressions. See for example
                          chapter: Spherical polar pendulum for one and multiple masses (Take II), where multivector matrix factors were used to express
                          the Lagrangian for a chain of N spherical-pendulums.






                          share|cite|improve this answer
























                            1












                            1








                            1






                            The only thing that is required to form matrices of multivectors is to take care to retain the ordering of any products, so if you have $ A = [a_{ij}] $ and $ B = [b_{ij}] $, where the matrix elements are multivector expressions, then your product is



                            $$A B = begin{bmatrix}sum_k a_{ik} b_{kj}end{bmatrix},$$



                            and not
                            $$A B = begin{bmatrix}sum_k b_{kj} a_{ik}end{bmatrix}.$$



                            Such matrices can occur naturally when factoring certain multivector expressions. See for example
                            chapter: Spherical polar pendulum for one and multiple masses (Take II), where multivector matrix factors were used to express
                            the Lagrangian for a chain of N spherical-pendulums.






                            share|cite|improve this answer












                            The only thing that is required to form matrices of multivectors is to take care to retain the ordering of any products, so if you have $ A = [a_{ij}] $ and $ B = [b_{ij}] $, where the matrix elements are multivector expressions, then your product is



                            $$A B = begin{bmatrix}sum_k a_{ik} b_{kj}end{bmatrix},$$



                            and not
                            $$A B = begin{bmatrix}sum_k b_{kj} a_{ik}end{bmatrix}.$$



                            Such matrices can occur naturally when factoring certain multivector expressions. See for example
                            chapter: Spherical polar pendulum for one and multiple masses (Take II), where multivector matrix factors were used to express
                            the Lagrangian for a chain of N spherical-pendulums.







                            share|cite|improve this answer












                            share|cite|improve this answer



                            share|cite|improve this answer










                            answered Oct 31 '16 at 14:19









                            Peeter Joot

                            595310




                            595310























                                0














                                There are actually two ways to do this (in addition to the other answers'). They both use the same background, as follows.



                                Given an $n$-dimensional real vector space $V$, we can construct a $2n$-dimensional space $Voplus V^*$, using the dual space $V^*$ (the set of all linear functions from $V$ to $mathbb R$). Define a dot product on $Voplus V^*$ by



                                $$(a+alpha)cdot(b+beta)=acdotbeta+alphacdot b=beta(a)+alpha(b)$$



                                where $ain V,alphain V^*,bin V,betain V^*$. Thus the dot product of any two vectors in $V$ is $0$ (so we don't have an "inner product" or "metric tensor" on $V$.)



                                Take a basis ${e_i}={e_1,e_2,cdots,e_n}$ for $V$, and the dual basis ${varepsilon^i}$ for $V^*$, satisfying $varepsilon^icdot e_i=1$ and otherwise $varepsilon^icdot e_j=0$. These together form a basis for $Voplus V^*$. We can make a different basis ${sigma_i,tau_i}$, defined by



                                $$sigma_i=frac{e_i+varepsilon^i}{sqrt2},qquadtau_i=frac{e_i-varepsilon^i}{sqrt2}.$$



                                (If you want to avoid $sqrt2$ for some reason (like using $mathbb Q$ as the scalar field), then define $sigma_i=frac12e_i+varepsilon^i,;tau_i=frac12e_i-varepsilon^i$. The result is the same.)



                                It can be seen that $sigma_icdottau_j=0$, and $sigma_icdotsigma_i=1=-tau_icdottau_i$ and otherwise $sigma_icdotsigma_j=0=tau_icdottau_j$. So we have an orthonormal basis of $n$ vectors $sigma_i$ squaring to ${^+}1$ and $n$ vectors $tau_i$ squaring to ${^-}1$, showing that $Voplus V^*$ is isomorphic to the pseudo-Euclidean space $mathbb R^{n,n}$.





                                Method 1: Bivectors



                                Any $ntimes n$ matrix (or linear transformation on $V$) can be represented by a bivector in the geometric algebra over $Voplus V^*$. Given the scalar components $M^i!_j$ of a matrix, the corresponding bivector is



                                $$M=sum_{i,j}M^i!_j,e_iwedgevarepsilon^j.$$



                                For example, with $n=2$, we would have



                                $$M=begin{pmatrix}M^1!_1e_1wedgevarepsilon^1+M^1!_2e_1wedgevarepsilon^2 \ +M^2!_1e_2wedgevarepsilon^1+M^2!_2e_2wedgevarepsilon^2 end{pmatrix}congbegin{bmatrix}M^1!_1 & M^1!_2 \ M^2!_1 & M^2!_2end{bmatrix}.$$



                                The transformation applying to a vector $a=sum_ia^ie_i$ is



                                $$amapsto Mbullet a=M,llcorner,a=Mtimes a=-abullet M$$



                                $$=sum_{i,j,k}M^i!_ja^k(e_iwedgevarepsilon^j)bullet e_k$$



                                $$=sum_{i,j,k}M^i!_ja^kbig(e_i(varepsilon^jcdot e_k)-(e_icdot e_k)varepsilon^jbig)$$



                                $$=sum_{i,j,k}M^i!_ja^kbig(e_i(delta^j_k)-0big)$$



                                $$=sum_{i,j}M^i!_ja^je_i.$$



                                There I used the bac-cab identity $(awedge b)bullet c=a(bcdot c)-(acdot c)b$, and the products $bullet,llcornertimes$ defined here.



                                (Now, much of the remainder of this post is about a single bivector. For the product of two bivectors, you may skip to the highlighted equation.)



                                The pullback/adjoint transformation on $V^*$ is $alphamapstoalphabullet M=-Mbulletalpha=sum_{i,j}alpha_iM^i!_jvarepsilon^j$. This relates to ordinary matrix multiplication, in that row vectors go on the left, vs column vectors on the right. Also relevant is the multivector identity $(A,lrcorner,B),llcorner,C=A,lrcorner,(B,llcorner,C)$, which implies $(alphabullet M)cdot b=alphacdot(Mbullet b)$. This relates to the associativity of matrix multiplication, or the definition of the adjoint.





                                The outermorphism can be calculated using the exterior powers of $M$ :



                                $$(Mbullet a)wedge(Mbullet b)=frac{Mwedge M}{2}bullet(awedge b)$$



                                $$(Mbullet a)wedge(Mbullet b)wedge(Mbullet c)=frac{Mwedge Mwedge M}{6}bullet(awedge bwedge c)$$



                                $$(Mbullet a_1)wedge(Mbullet a_2)wedgecdotswedge(Mbullet a_n)=frac{1(wedge M)^n}{n!}bullet(a_1wedge a_2wedgecdotswedge a_n)$$



                                $$=frac{Mwedge Mwedgecdotswedge M}{1;cdot;2;cdot;cdots;cdot;n}bullet(a_1wedge a_2wedgecdotswedge a_n)$$



                                (This notation, $1(wedge M)^n$, is sometimes replaced with $wedge^nM$ or $M^{wedge n}$, but those don't look right to me.)



                                I'll prove the trivector case; the others are similar. I'll use the identities $A,llcorner,(Bwedge C)=(A,llcorner,B),llcorner,C$, and $a,lrcorner,(Bwedge C)=(a,lrcorner,B)wedge C+(-1)^kBwedge(a,lrcorner,C)$ when $a$ has grade $1$ and $B$ has grade $k$.



$$\frac{M\wedge M\wedge M}{6}\bullet(a\wedge b\wedge c)$$

$$=\bigg(\frac{M\wedge M\wedge M}{6}\bullet a\bigg)\bullet(b\wedge c)$$

$$=\bigg(\frac{M\wedge M\wedge(M\bullet a)+M\wedge(M\bullet a)\wedge M+(M\bullet a)\wedge M\wedge M}{6}\bigg)\bullet(b\wedge c)$$

(bivector $\wedge$ is commutative, so these are all the same)

$$=\bigg(\frac{(M\bullet a)\wedge M\wedge M}{2}\bigg)\bullet(b\wedge c)$$

$$=\bigg(\frac{(M\bullet a)\wedge M\wedge M}{2}\bullet b\bigg)\bullet c$$

$$=\bigg(\frac{(M\bullet a)\wedge M\wedge(M\bullet b)+(M\bullet a)\wedge(M\bullet b)\wedge M+\big((M\bullet a)\cdot b\big)\wedge M\wedge M}{2}\bigg)\bullet c$$

(remember, all vectors in $V$ are orthogonal, so $(M\bullet a)\cdot b=0$)

$$=\Big((M\bullet a)\wedge(M\bullet b)\wedge M\Big)\bullet c$$

$$=(M\bullet a)\wedge(M\bullet b)\wedge(M\bullet c)+(M\bullet a)\wedge\big((M\bullet b)\cdot c\big)\wedge M+\big((M\bullet a)\cdot c\big)\wedge(M\bullet b)\wedge M$$

$$=(M\bullet a)\wedge(M\bullet b)\wedge(M\bullet c).$$

This provides a formula for the determinant. Take the $n$-blade $E=e_1\wedge e_2\wedge\cdots\wedge e_n=e_1e_2\cdots e_n$. (This is basis-dependent, though unique up to a scalar.) Then

$$\frac{1(\wedge M)^n}{n!}\bullet E=(\det M)E.$$
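
For instance, with $n=2$, a direct expansion gives

$$\frac{M\wedge M}{2}\bullet(e_1\wedge e_2)=(M\bullet e_1)\wedge(M\bullet e_2)=\big(M^1{}_1M^2{}_2-M^1{}_2M^2{}_1\big)\,e_1\wedge e_2=(\det M)\,e_1\wedge e_2.$$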



And, using the commutator identity $A\times(BC)=(A\times B)C+B(A\times C)$, we find the trace:

$$ME=M\,\lrcorner\,E+M\times E+M\wedge E=0+M\times E+0$$

$$=(M\times e_1)e_2\cdots e_n+e_1(M\times e_2)\cdots e_n+\cdots+e_1e_2\cdots(M\times e_n)$$

$$=\Big(\sum_iM^i{}_1e_i\Big)e_2\cdots e_n+e_1\Big(\sum_iM^i{}_2e_i\Big)\cdots e_n+\cdots+e_1e_2\cdots\Big(\sum_iM^i{}_ne_i\Big)$$

(most of the terms disappear because $e_ie_i=0$)

$$=(M^1{}_1e_1)e_2\cdots e_n+e_1(M^2{}_2e_2)\cdots e_n+\cdots+e_1e_2\cdots(M^n{}_ne_n)$$

$$=(M^1{}_1+M^2{}_2+\cdots+M^n{}_n)e_1e_2\cdots e_n=(\text{tr}\,M)E.$$

More generally, the characteristic polynomial coefficients are determined by the geometric product

$$\frac{1(\wedge M)^k}{k!}E=c_kE.$$

These can be combined into (a variant of) the polynomial itself. With the exterior exponential defined by

$$\exp\!\wedge(A)=\sum_k\frac{1(\wedge A)^k}{k!}=1+A+\frac{A\wedge A}2+\frac{A\wedge A\wedge A}{6}+\cdots,$$

we have

$$\big(\exp\!\wedge(tM)\big)E=\Big(\sum_kc_kt^k\Big)E=\big(1+(\text{tr}\,M)t+c_2t^2+\cdots+(\det M)t^n\big)E$$

$$=t^n\bigg(\frac{1}{t^n}+\frac{\text{tr}\,M}{t^{n-1}}+\frac{c_2}{t^{n-2}}+\cdots+\frac{\det M}{1}\bigg)E.$$
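
Reading $c_k$ as the sum of the $k\times k$ principal minors of $M$ (consistent with $c_1=\text{tr}\,M$ and $c_n=\det M$ above), the matrix-level content of the displayed polynomial is $\det(I+tM)=\sum_kc_kt^k$. Here's a quick NumPy sanity check of that matrix-level statement (a sketch of my own, not part of the GA construction):

```python
import itertools
import numpy as np

# Check det(I + t M) = sum_k c_k t^k, where c_k is the sum of the k-by-k
# principal minors of M (so c_0 = 1, c_1 = tr M, c_n = det M).
rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))

def principal_minor_sum(M, k):
    """Sum of determinants of all k-by-k principal submatrices of M."""
    idx = range(M.shape[0])
    return sum(np.linalg.det(M[np.ix_(S, S)]) for S in itertools.combinations(idx, k))

c = [principal_minor_sum(M, k) if k else 1.0 for k in range(n + 1)]

t = 0.7
assert np.isclose(np.linalg.det(np.eye(n) + t * M),
                  sum(ck * t**k for k, ck in enumerate(c)))
```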





The reverse of a multivector is $\tilde A=\sum_k(-1)^{k(k-1)/2}\langle A\rangle_k$; the reverse of a product is $(AB)^\sim=\tilde B\tilde A$. It can be shown that the scalar product of two blades, with one reversed, is the determinant of the matrix of dot products of the blades' component vectors. For example, $(a_2\wedge a_1)\bullet(b_1\wedge b_2)=(a_1\cdot b_1)(a_2\cdot b_2)-(a_1\cdot b_2)(a_2\cdot b_1)$.

Given the above, and the blades $E=e_1\cdots e_n$ and $\mathcal E=\varepsilon^1\cdots\varepsilon^n$, it follows that $E\bullet\tilde{\mathcal E}=1$. The full geometric product happens to be the exterior exponential $E\tilde{\mathcal E}=\exp\!\wedge K$, where $K=\sum_ie_i\wedge\varepsilon^i$ represents the identity transformation. So we can multiply this equation

$$\frac{1(\wedge M)^k}{k!}E=c_kE$$

by $\tilde{\mathcal E}$ to get

$$\frac{1(\wedge M)^k}{k!}\exp\!\wedge K=c_k\exp\!\wedge K$$

and take the scalar part, to isolate the polynomial coefficients

$$\frac{1(\wedge M)^k}{k!}\bullet\frac{1(\wedge K)^k}{k!}=c_k.$$

Or, multiply the $\exp\!\wedge(tM)$ equation by $\tilde{\mathcal E}$ to get

$$\big(\exp\!\wedge(tM)\big)\exp\!\wedge K=\Big(\sum_kc_kt^k\Big)\exp\!\wedge K.$$

This can be wedged with $\exp\!\wedge(-K)$ to isolate the polynomial, because $(\exp\!\wedge A)\wedge(\exp\!\wedge B)=\exp\!\wedge(A+B)$ if $A$ or $B$ has even grade.

We also have the adjugate, which can be used to calculate the matrix inverse:

$$\frac{1(\wedge M)^{n-1}}{(n-1)!}\bullet\frac{1(\wedge K)^n}{n!}=\text{adj}\,M.$$
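
As a matrix-level reminder of what the right-hand side is (this checks the familiar adjugate property, not the GA formula itself): $M\,(\text{adj}\,M)=(\det M)I$, so $M^{-1}=\text{adj}\,M/\det M$ whenever $\det M\ne0$. A quick NumPy sketch:

```python
import numpy as np

# adj(M) satisfies M @ adj(M) = det(M) * I; for invertible M it is det(M) * inv(M).
rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3))
adjM = np.linalg.det(M) * np.linalg.inv(M)      # assumes M is invertible
assert np.allclose(M @ adjM, np.linalg.det(M) * np.eye(3))
```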





                                The geometric product of two transformation bivectors, $M$ and $N$, has three parts (with grades $0,2,4$); each one is significant.




$$MN=M\bullet N+M\times N+M\wedge N$$

The first part is the trace of the matrix product:

$$M\bullet N=\sum_{i,j,k,l}M^i{}_jN^k{}_l(e_i\wedge\varepsilon^j)\bullet(e_k\wedge\varepsilon^l)$$

$$=\sum_{i,j,k,l}M^i{}_jN^k{}_l(\delta^j_k\delta^l_i)$$

$$=\sum_{i,j}M^i{}_jN^j{}_i=\text{tr}(M\boxdot N).$$
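
Here is a small self-contained Python sketch (my own check, using a hand-rolled $\mathbb R^{n,n}$ multivector type rather than any GA library) that numerically verifies two of the claims above for $n=2$: the grade-$1$ part of $Ma$ reproduces the matrix-vector product $\sum_{i,j}M^i{}_ja^je_i$, and the scalar part of $MN$ equals $\text{tr}(M\boxdot N)$.

```python
import math
import numpy as np

# A tiny hand-rolled model of the geometric algebra over V + V* (i.e. Cl(n, n)),
# just to check the claims above for n = 2.  Basis vectors 0..n-1 are the
# sigma_i (square +1) and n..2n-1 are the tau_i (square -1).
n = 2
SQ = [1.0] * n + [-1.0] * n

def gp_blades(a, b):
    """Geometric product of two basis blades (tuples of indices) -> (sign, blade)."""
    lst = list(a) + list(b)
    sign = 1.0
    for i in range(len(lst)):                    # bubble sort; each swap of
        for j in range(len(lst) - 1 - i):        # distinct indices flips the sign
            if lst[j] > lst[j + 1]:
                lst[j], lst[j + 1] = lst[j + 1], lst[j]
                sign = -sign
    out, k = [], 0
    while k < len(lst):                          # contract equal neighbours
        if k + 1 < len(lst) and lst[k] == lst[k + 1]:
            sign *= SQ[lst[k]]
            k += 2
        else:
            out.append(lst[k])
            k += 1
    return sign, tuple(out)

def gp(A, B):
    """Geometric product of multivectors stored as {blade tuple: coefficient}."""
    C = {}
    for ba, ca in A.items():
        for bb, cb in B.items():
            s, blade = gp_blades(ba, bb)
            C[blade] = C.get(blade, 0.0) + s * ca * cb
    return {k: v for k, v in C.items() if abs(v) > 1e-12}

def add(A, B, s=1.0):
    C = dict(A)
    for k, v in B.items():
        C[k] = C.get(k, 0.0) + s * v
    return {k: v for k, v in C.items() if abs(v) > 1e-12}

def grade(A, r):
    return {k: v for k, v in A.items() if len(k) == r}

# Null basis: e_i = (sigma_i + tau_i)/sqrt2, eps^i = (sigma_i - tau_i)/sqrt2.
e   = [{(i,): 1/math.sqrt(2), (n + i,):  1/math.sqrt(2)} for i in range(n)]
eps = [{(i,): 1/math.sqrt(2), (n + i,): -1/math.sqrt(2)} for i in range(n)]

def bivector_of(mat):
    """M = sum_ij M^i_j e_i ^ eps^j  (wedge of vectors = grade-2 part of product)."""
    B = {}
    for i in range(n):
        for j in range(n):
            B = add(B, grade(gp(e[i], eps[j]), 2), mat[i, j])
    return B

rng = np.random.default_rng(1)
Mmat = rng.standard_normal((n, n))
Nmat = rng.standard_normal((n, n))
avec = rng.standard_normal(n)
M, N = bivector_of(Mmat), bivector_of(Nmat)
a = {}
for i in range(n):
    a = add(a, e[i], avec[i])

# (1) grade-1 part of Ma is the matrix-vector product sum_ij M^i_j a^j e_i
Ma_expected = {}
for i in range(n):
    Ma_expected = add(Ma_expected, e[i], (Mmat @ avec)[i])
assert not add(grade(gp(M, a), 1), Ma_expected, -1.0)

# (2) scalar part of MN is the trace of the matrix product
assert abs(gp(M, N).get((), 0.0) - np.trace(Mmat @ Nmat)) < 1e-9

print("both identities check out for n = 2")
```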



The second part is the commutator of matrix products:

$$M\times N=\sum_{i,j,k,l}M^i{}_jN^k{}_l(e_i\wedge\varepsilon^j)\times(e_k\wedge\varepsilon^l)$$

$$=\sum_{i,j,k,l}M^i{}_jN^k{}_l(\delta^j_ke_i\wedge\varepsilon^l+\delta^l_i\varepsilon^j\wedge e_k)$$

$$=\sum_{i,j,l}M^i{}_jN^j{}_le_i\wedge\varepsilon^l-\sum_{j,k,l}N^k{}_lM^l{}_je_k\wedge\varepsilon^j=M\boxdot N-N\boxdot M.$$

(This can also be justified by Jacobi's identity $(M\times N)\times a=M\times(N\times a)-N\times(M\times a)$.)

The third part is similar to an outermorphism; when applied to a bivector from $V$, it produces

$$(M\wedge N)\bullet(a\wedge b)=(M\bullet a)\wedge(N\bullet b)+(N\bullet a)\wedge(M\bullet b).$$

Unfortunately, there doesn't seem to be a simple expression for the ordinary matrix product. This is the best I could find, again using $K=\sum_ie_i\wedge\varepsilon^i$:

$$M\boxdot N=\frac{M\times N+(M\bullet K)N+(N\bullet K)M-(M\wedge N)\bullet K}{2}=\sum_{i,j,k}M^i{}_jN^j{}_ke_i\wedge\varepsilon^k$$

Note that $M\bullet K=\text{tr}\,M$. And, of course, we have the defining relation $(M\boxdot N)\bullet a=M\bullet(N\bullet a)$.

(That formula is unnecessary for transformations between different spaces, say $V$ and $W$. Using the geometric algebra over $V\oplus V^*\oplus W\oplus W^*$, with basis $\{e_i,\varepsilon^i,f_i,\phi^i\}$, if $M=\sum_{i,j}M^i{}_je_i\wedge\varepsilon^j$ maps $V$ to itself, and $N=\sum_{i,j}N^i{}_je_i\wedge\phi^j$ maps $W$ to $V$, then the matrix product is simply $M\boxdot N=M\times N$.)





                                Method 2: Rotors



Any general linear transformation on $V$ can be represented by a rotor $R=r_{2k}r_{2k-1}\cdots r_2r_1$, a geometric product of an even number of invertible vectors in $V\oplus V^*$. Each vector squares to a positive or negative number. If the numbers of positive and negative vectors are both even, then the transformation's determinant is positive; if they're both odd, then the determinant is negative. The transformation is done by the "sandwich product"

$$a\mapsto RaR^{-1}=r_{2k}\cdots r_2r_1ar_1^{-1}r_2^{-1}\cdots r_{2k}^{-1}.$$

Any such transformation respects the geometric product: $(RAR^{-1})(RBR^{-1})=R(AB)R^{-1}$; in particular, for vectors, $(RaR^{-1})\cdot(RbR^{-1})=R(a\cdot b)R^{-1}=a\cdot b$, and $(RaR^{-1})\wedge(RbR^{-1})=R(a\wedge b)R^{-1}$. So the outermorphism uses the same formula for an arbitrary multivector: $A\mapsto RAR^{-1}$.

The composition of two transformations, with rotors $R$ and $S$, is represented by the geometric product $RS$:

$$a\mapsto R(SaS^{-1})R^{-1}=(RS)a(RS)^{-1}.$$
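
For example, the composition rule is easy to check on the reflection rotor below: composing the reflection along $e_1$ with itself gives

$$R^2=\tau_1\sigma_1\,\tau_1\sigma_1=-\tau_1\tau_1\,\sigma_1\sigma_1=-(-1)(1)=1,$$

so applying the reflection twice composes to the identity map, as it should.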





Here are some examples, using $\sigma_i=(e_i+\varepsilon^i)/\sqrt2,\;\tau_i=(e_i-\varepsilon^i)/\sqrt2$, and

$$a=\sum_ia^ie_i=a^1\frac{\sigma_1+\tau_1}{\sqrt2}+a^2\frac{\sigma_2+\tau_2}{\sqrt2}+\cdots+a^n\frac{\sigma_n+\tau_n}{\sqrt2}.$$

Reflection along $e_1$:

$$R=\tau_1\sigma_1=e_1\wedge\varepsilon^1$$

$$RaR^{-1}=a^1\frac{-\sigma_1-\tau_1}{\sqrt2}+a^2\frac{\sigma_2+\tau_2}{\sqrt2}+\cdots+a^n\frac{\sigma_n+\tau_n}{\sqrt2}$$

$$=-a^1e_1+a^2e_2+\cdots+a^ne_n$$

Stretching by factor $\exp\theta$ along $e_1$:

$$R=\exp\Big(\frac\theta2\tau_1\sigma_1\Big)=\cosh\frac\theta2+\tau_1\sigma_1\sinh\frac\theta2$$

$$=\Big(\sigma_1\cosh\frac\theta2+\tau_1\sinh\frac\theta2\Big)\sigma_1$$

$$RaR^{-1}=a^1\frac{(\sigma_1\cosh\theta+\tau_1\sinh\theta)+(\tau_1\cosh\theta+\sigma_1\sinh\theta)}{\sqrt2}+a^2\frac{\sigma_2+\tau_2}{\sqrt2}+\cdots+a^n\frac{\sigma_n+\tau_n}{\sqrt2}$$

$$=a^1e_1\exp\theta+a^2e_2+\cdots+a^ne_n$$

Circular rotation by $\theta$ from $e_1$ towards $e_2$ (note that $\sigma_2\sigma_1$ commutes with $\tau_2\tau_1$, and both square to $-1$, so Euler's formula applies):

$$R=\exp\Big(\frac\theta2(\sigma_2\sigma_1-\tau_2\tau_1)\Big)=\exp\Big(\frac\theta2\sigma_2\sigma_1\Big)\exp\Big(-\frac\theta2\tau_2\tau_1\Big)$$

$$=\Big(\sigma_1\cos\frac\theta2+\sigma_2\sin\frac\theta2\Big)\sigma_1\Big(-\tau_1\cos\frac\theta2-\tau_2\sin\frac\theta2\Big)\tau_1$$

$$RaR^{-1}=a^1\frac{(\sigma_1\cos\theta+\sigma_2\sin\theta)+(\tau_1\cos\theta+\tau_2\sin\theta)}{\sqrt2}+a^2\frac{(-\sigma_1\sin\theta+\sigma_2\cos\theta)+(-\tau_1\sin\theta+\tau_2\cos\theta)}{\sqrt2}+a^3\frac{\sigma_3+\tau_3}{\sqrt2}+\cdots+a^n\frac{\sigma_n+\tau_n}{\sqrt2}$$

$$=a^1(e_1\cos\theta+e_2\sin\theta)+a^2(-e_1\sin\theta+e_2\cos\theta)+a^3e_3+\cdots+a^ne_n$$

Hyperbolic rotation by $\theta$ from $e_1$ towards $e_2$:

$$R=\exp\Big(\frac\theta2(\tau_2\sigma_1-\sigma_2\tau_1)\Big)=\exp\Big(\frac\theta2\tau_2\sigma_1\Big)\exp\Big(-\frac\theta2\sigma_2\tau_1\Big)$$

$$=\Big(\sigma_1\cosh\frac\theta2+\tau_2\sinh\frac\theta2\Big)\sigma_1\Big(-\tau_1\cosh\frac\theta2-\sigma_2\sinh\frac\theta2\Big)\tau_1$$

$$RaR^{-1}=a^1\frac{(\sigma_1\cosh\theta+\tau_2\sinh\theta)+(\tau_1\cosh\theta+\sigma_2\sinh\theta)}{\sqrt2}+a^2\frac{(\tau_1\sinh\theta+\sigma_2\cosh\theta)+(\sigma_1\sinh\theta+\tau_2\cosh\theta)}{\sqrt2}+a^3\frac{\sigma_3+\tau_3}{\sqrt2}+\cdots+a^n\frac{\sigma_n+\tau_n}{\sqrt2}$$

$$=a^1(e_1\cosh\theta+e_2\sinh\theta)+a^2(e_1\sinh\theta+e_2\cosh\theta)+a^3e_3+\cdots+a^ne_n$$

Shear by $\theta$ from $e_1$ towards $e_2$:

$$R=\exp\Big(\frac\theta2e_2\wedge\varepsilon^1\Big)=1+\frac\theta2e_2\wedge\varepsilon^1$$

$$=-\frac14\Big(e_1-\varepsilon^1+\frac\theta4e_2\Big)\Big(e_1-\varepsilon^1-\frac\theta4e_2\Big)\Big(e_1+\varepsilon^1+\frac\theta4e_2\Big)\Big(e_1+\varepsilon^1-\frac\theta4e_2\Big)$$

$$RaR^{-1}=a^1(e_1+\theta e_2)+a^2e_2+a^3e_3+\cdots+a^ne_n$$

This post is too long...

Some of this is described in Doran, Hestenes, Sommen, & Van Acker's "Lie Groups as Spin Groups": http://geocalc.clas.asu.edu/html/GeoAlg.html. (Beware that $E,e$ have different meanings from mine, though $K$ is the same.)






                                share|cite|improve this answer




























                                  0














                                  There are actually two ways to do this (in addition to the other answers'). They both use the same background, as follows.



                                  Given an $n$-dimensional real vector space $V$, we can construct a $2n$-dimensional space $Voplus V^*$, using the dual space $V^*$ (the set of all linear functions from $V$ to $mathbb R$). Define a dot product on $Voplus V^*$ by



                                  $$(a+alpha)cdot(b+beta)=acdotbeta+alphacdot b=beta(a)+alpha(b)$$



                                  where $ain V,alphain V^*,bin V,betain V^*$. Thus the dot product of any two vectors in $V$ is $0$ (so we don't have an "inner product" or "metric tensor" on $V$.)



                                  Take a basis ${e_i}={e_1,e_2,cdots,e_n}$ for $V$, and the dual basis ${varepsilon^i}$ for $V^*$, satisfying $varepsilon^icdot e_i=1$ and otherwise $varepsilon^icdot e_j=0$. These together form a basis for $Voplus V^*$. We can make a different basis ${sigma_i,tau_i}$, defined by



                                  $$sigma_i=frac{e_i+varepsilon^i}{sqrt2},qquadtau_i=frac{e_i-varepsilon^i}{sqrt2}.$$



                                  (If you want to avoid $sqrt2$ for some reason (like using $mathbb Q$ as the scalar field), then define $sigma_i=frac12e_i+varepsilon^i,;tau_i=frac12e_i-varepsilon^i$. The result is the same.)



                                  It can be seen that $sigma_icdottau_j=0$, and $sigma_icdotsigma_i=1=-tau_icdottau_i$ and otherwise $sigma_icdotsigma_j=0=tau_icdottau_j$. So we have an orthonormal basis of $n$ vectors $sigma_i$ squaring to ${^+}1$ and $n$ vectors $tau_i$ squaring to ${^-}1$, showing that $Voplus V^*$ is isomorphic to the pseudo-Euclidean space $mathbb R^{n,n}$.





                                  Method 1: Bivectors



                                  Any $ntimes n$ matrix (or linear transformation on $V$) can be represented by a bivector in the geometric algebra over $Voplus V^*$. Given the scalar components $M^i!_j$ of a matrix, the corresponding bivector is



                                  $$M=sum_{i,j}M^i!_j,e_iwedgevarepsilon^j.$$



                                  For example, with $n=2$, we would have



                                  $$M=begin{pmatrix}M^1!_1e_1wedgevarepsilon^1+M^1!_2e_1wedgevarepsilon^2 \ +M^2!_1e_2wedgevarepsilon^1+M^2!_2e_2wedgevarepsilon^2 end{pmatrix}congbegin{bmatrix}M^1!_1 & M^1!_2 \ M^2!_1 & M^2!_2end{bmatrix}.$$



                                  The transformation applying to a vector $a=sum_ia^ie_i$ is



                                  $$amapsto Mbullet a=M,llcorner,a=Mtimes a=-abullet M$$



                                  $$=sum_{i,j,k}M^i!_ja^k(e_iwedgevarepsilon^j)bullet e_k$$



                                  $$=sum_{i,j,k}M^i!_ja^kbig(e_i(varepsilon^jcdot e_k)-(e_icdot e_k)varepsilon^jbig)$$



                                  $$=sum_{i,j,k}M^i!_ja^kbig(e_i(delta^j_k)-0big)$$



                                  $$=sum_{i,j}M^i!_ja^je_i.$$



                                  There I used the bac-cab identity $(awedge b)bullet c=a(bcdot c)-(acdot c)b$, and the products $bullet,llcornertimes$ defined here.



                                  (Now, much of the remainder of this post is about a single bivector. For the product of two bivectors, you may skip to the highlighted equation.)



                                  The pullback/adjoint transformation on $V^*$ is $alphamapstoalphabullet M=-Mbulletalpha=sum_{i,j}alpha_iM^i!_jvarepsilon^j$. This relates to ordinary matrix multiplication, in that row vectors go on the left, vs column vectors on the right. Also relevant is the multivector identity $(A,lrcorner,B),llcorner,C=A,lrcorner,(B,llcorner,C)$, which implies $(alphabullet M)cdot b=alphacdot(Mbullet b)$. This relates to the associativity of matrix multiplication, or the definition of the adjoint.





                                  The outermorphism can be calculated using the exterior powers of $M$ :



                                  $$(Mbullet a)wedge(Mbullet b)=frac{Mwedge M}{2}bullet(awedge b)$$



                                  $$(Mbullet a)wedge(Mbullet b)wedge(Mbullet c)=frac{Mwedge Mwedge M}{6}bullet(awedge bwedge c)$$



                                  $$(Mbullet a_1)wedge(Mbullet a_2)wedgecdotswedge(Mbullet a_n)=frac{1(wedge M)^n}{n!}bullet(a_1wedge a_2wedgecdotswedge a_n)$$



                                  $$=frac{Mwedge Mwedgecdotswedge M}{1;cdot;2;cdot;cdots;cdot;n}bullet(a_1wedge a_2wedgecdotswedge a_n)$$



                                  (This notation, $1(wedge M)^n$, is sometimes replaced with $wedge^nM$ or $M^{wedge n}$, but those don't look right to me.)



                                  I'll prove the trivector case; the others are similar. I'll use the identities $A,llcorner,(Bwedge C)=(A,llcorner,B),llcorner,C$, and $a,lrcorner,(Bwedge C)=(a,lrcorner,B)wedge C+(-1)^kBwedge(a,lrcorner,C)$ when $a$ has grade $1$ and $B$ has grade $k$.



                                  $$frac{Mwedge Mwedge M}{6}bullet(awedge bwedge c)$$



                                  $$=bigg(frac{Mwedge Mwedge M}{6}bullet abigg)bullet(bwedge c)$$



                                  $$=bigg(frac{Mwedge Mwedge(Mbullet a)+Mwedge(Mbullet a)wedge M+(Mbullet a)wedge Mwedge M}{6}bigg)bullet(bwedge c)$$



                                  (bivector $wedge$ is commutative, so these are all the same)



                                  $$=bigg(frac{(Mbullet a)wedge Mwedge M}{2}bigg)bullet(bwedge c)$$



                                  $$=bigg(frac{(Mbullet a)wedge Mwedge M}{2}bullet bbigg)bullet c$$



                                  $$=bigg(frac{(Mbullet a)wedge Mwedge(Mbullet b)+(Mbullet a)wedge(Mbullet b)wedge M+big((Mbullet a)cdot bbig)wedge Mwedge M}{2}bigg)bullet c$$



                                  (remember, all vectors in $V$ are orthogonal, so $(Mbullet a)cdot b=0$ )



                                  $$=Big((Mbullet a)wedge(Mbullet b)wedge MBig)bullet c$$



                                  $$=(Mbullet a)wedge(Mbullet b)wedge(Mbullet c)+(Mbullet a)wedgebig((Mbullet b)cdot cbig)wedge M+big((Mbullet a)cdot cbig)wedge(Mbullet b)wedge M$$



                                  $$=(Mbullet a)wedge(Mbullet b)wedge(Mbullet c).$$



                                  This provides a formula for the determinant. Take the $n$-blade $E=e_1wedge e_2wedgecdotswedge e_n=e_1e_2cdots e_n$. (This is basis-dependent, though unique up to a scalar.) Then



                                  $$frac{1(wedge M)^n}{n!}bullet E=(det M)E.$$



                                  And, using the commutator identity $Atimes(BC)=(Atimes B)C+B(Atimes C)$, we find the trace:



                                  $$ME=M,lrcorner,E+Mtimes E+Mwedge E=0+Mtimes E+0$$



                                  $$=(Mtimes e_1)e_2cdots e_n+e_1(Mtimes e_2)cdots e_n+cdots+e_1e_2cdots(Mtimes e_n)$$



                                  $$=Big(sum_iM^i!_1e_iBig)e_2cdots e_n+e_1Big(sum_iM^i!_2e_iBig)cdots e_n+cdots+e_1e_2cdotsBig(sum_iM^i!_ne_iBig)$$



                                  (most of the terms disappear because $e_ie_i=0$ )



                                  $$=(M^1!_1e_1)e_2cdots e_n+e_1(M^2!_2e_2)cdots e_n+cdots+e_1e_2cdots(M^n!_ne_n)$$



                                  $$=(M^1!_1+M^2!_2+cdots+M^n!_n)e_1e_2cdots e_n=(text{tr},M)E.$$



                                  More generally, the characteristic polynomial coefficients are determined by the geometric product



                                  $$frac{1(wedge M)^k}{k!}E=c_kE.$$



                                  These can be combined into (a variant of) the polynomial itself. With the exterior exponential defined by



                                  $$exp!wedge(A)=sum_kfrac{1(wedge A)^k}{k!}=1+A+frac{Awedge A}2+frac{Awedge Awedge A}{6}+cdots,$$



                                  we have



                                  $$big(exp!wedge(tM)big)E=Big(sum_kc_kt^kBig)E=big(1+(text{tr},M)t+c_2t^2+cdots+(det M)t^nbig)E$$



                                  $$=t^nbigg(frac{1}{t^n}+frac{text{tr},M}{t^{n-1}}+frac{c_2}{t^{n-2}}+cdots+frac{det M}{1}bigg)E.$$





                                  The reverse of a multivector is $tilde A=sum_k(-1)^{k(k-1)/2}langle Arangle_k$; the reverse of a product is $(AB)^sim=tilde Btilde A$. It can be shown that the scalar product of two blades, with one reversed, is the determinant of the matrix of dot products of the blades' component vectors. For example, $(a_2wedge a_1)bullet(b_1wedge b_2)=(a_1cdot b_1)(a_2cdot b_2)-(a_1cdot b_2)(a_2cdot b_1)$.



                                  Given the above, and the blades $E=e_1cdots e_n$ and $cal E=varepsilon^1cdotsvarepsilon^n$, it follows that $Ebullettilde{cal E}=1$. The full geometric product happens to be the exterior exponential $Etilde{cal E}=exp!wedge K$, where $K=sum_ie_iwedgevarepsilon^i$ represents the identity transformation. So we can multiply this equation



                                  $$frac{1(wedge M)^k}{k!}E=c_kE$$



                                  by $tilde{cal E}$ to get



                                  $$frac{1(wedge M)^k}{k!}exp!wedge K=c_kexp!wedge K$$



                                  and take the scalar part, to isolate the polynomial coefficients



                                  $$frac{1(wedge M)^k}{k!}bulletfrac{1(wedge K)^k}{k!}=c_k.$$



                                  Or, multiply the $exp!wedge(tM)$ equation by $tilde{cal E}$ to get



                                  $$big(exp!wedge(tM)big)exp!wedge K=Big(sum_kc_kt^kBig)exp!wedge K.$$



                                  This can be wedged with $exp!wedge(-K)$ to isolate the polynomial, because $(exp!wedge A)wedge(exp!wedge B)=exp!wedge(A+B)$ if $A$ or $B$ has even grade.



                                  We also have the adjugate, which can be used to calculate the matrix inverse:



                                  $$frac{1(wedge M)^{n-1}}{(n-1)!}bulletfrac{1(wedge K)^n}{n!}=text{adj},M.$$





                                  The geometric product of two transformation bivectors, $M$ and $N$, has three parts (with grades $0,2,4$); each one is significant.




                                  $$MN=Mbullet N+Mtimes N+Mwedge N$$




                                  The first part is the trace of the matrix product:



                                  $$Mbullet N=sum_{i,j,k,l}M^i!_jN^k!_l(e_iwedgevarepsilon^j)bullet(e_kwedgevarepsilon^l)$$



                                  $$=sum_{i,j,k,l}M^i!_jN^k!_l(delta^j_kdelta^l_i)$$



                                  $$=sum_{i,j}M^i!_jN^j!_i=text{tr}(Mboxdot N).$$



                                  The second part is the commutator of matrix products:



                                  $$Mtimes N=sum_{i,j,k,l}M^i!_jN^k!_l(e_iwedgevarepsilon^j)times(e_kwedgevarepsilon^l)$$



                                  $$=sum_{i,j,k,l}M^i!_jN^k!_l(delta^j_ke_iwedgevarepsilon^l+delta^l_ivarepsilon^jwedge e_k)$$



                                  $$=sum_{i,j,l}M^i!_jN^j!_le_iwedgevarepsilon^l-sum_{j,k,l}N^k!_lM^l!_je_kwedgevarepsilon^j=Mboxdot N-Nboxdot M.$$



                                  (This can also be justified by Jacobi's identity $(Mtimes N)times a=Mtimes(Ntimes a)-Ntimes(Mtimes a)$.)



                                  The third part is similar to an outermorphism; when applied to a bivector from $V$, it produces



                                  $$(Mwedge N)bullet(awedge b)=(Mbullet a)wedge(Nbullet b)+(Nbullet a)wedge(Mbullet b).$$



                                  Unfortunately, there doesn't seem to be a simple expression for the ordinary matrix product. This is the best I could find, again using $K=sum_ie_iwedgevarepsilon^i$:



                                  $$Mboxdot N=frac{Mtimes N+(Mbullet K)N+(Nbullet K)M-(Mwedge N)bullet K}{2}=sum_{i,j,k}M^i!_jN^j!_ke_iwedgevarepsilon^k$$



                                  Note that $Mbullet K=text{tr},M$. And, of course, we have the defining relation $(Mboxdot N)bullet a=Mbullet(Nbullet a)$.



                                  (That formula is unnecessary for transformations between different spaces, say $V$ and $W$. Using the geometric algebra over $Voplus V^*oplus Woplus W^*$, with basis ${e_i,varepsilon^i,f_i,phi^i}$, if $M=sum_{i,j}M^i!_je_iwedgevarepsilon^j$ maps $V$ to itself, and $N=sum_{i,j}N^i!_je_iwedgephi^j$ maps $W$ to $V$, then the matrix product is simply $Mboxdot N=Mtimes N$.)





                                  Method 2: Rotors



                                  Any general linear transformation on $V$ can be represented by a rotor $R=r_{2k}r_{2k-1}cdots r_2r_1$, a geometric product of an even number of invertible vectors in $Voplus V^*$. Each vector squares to a positive or negative number. If the numbers of positive and negative vectors are both even, then the transformation's determinant is positive; if they're both odd, then the determinant is negative. The transformation is done by the "sandwich product"



                                  $$amapsto RaR^{-1}=r_{2k}cdots r_2r_1ar_1^{-1}r_2^{-1}cdots r_{2k}^{-1}.$$



                                  Any such transformation respects the geometric product: $(RAR^{-1})(RBR^{-1})=R(AB)R^{-1}$; in particular, for vectors, $(RaR^{-1})cdot(RbR^{-1})=R(acdot b)R^{-1}=acdot b$, and $(RaR^{-1})wedge(RbR^{-1})=R(awedge b)R^{-1}$. So the outermorphism uses the same formula for an arbitrary multivector: $Amapsto RAR^{-1}$.



                                  The composition of two transformations, with rotors $R$ and $S$, is represented by the geometric product $RS$:



                                  $$amapsto R(SaS^{-1})R^{-1}=(RS)a(RS)^{-1}.$$





                                  Here are some examples, using $sigma_i=(e_i+varepsilon^i)/sqrt2,;tau_i=(e_i-varepsilon^i)/sqrt2$, and



                                  $$a=sum_ia^ie_i=a^1frac{sigma_1+tau_1}{sqrt2}+a^2frac{sigma_2+tau_2}{sqrt2}+cdots+a^nfrac{sigma_n+tau_n}{sqrt2}.$$



                                  Reflection along $e_1$:



                                  $$R=tau_1sigma_1=e_1wedgevarepsilon^1$$



                                  $$RaR^{-1}=a^1frac{-sigma_1-tau_1}{sqrt2}+a^2frac{sigma_2+tau_2}{sqrt2}+cdots+a^nfrac{sigma_n+tau_n}{sqrt2}$$



                                  $$=-a^1e_1+a^2e_2+cdots+a^ne_n$$



                                  Stretching by factor $exptheta$ along $e_1$:



                                  $$R=expBig(fractheta2tau_1sigma_1Big)=coshfractheta2+tau_1sigma_1sinhfractheta2$$



                                  $$=Big(sigma_1coshfractheta2+tau_1sinhfractheta2Big)sigma_1$$



                                  $$RaR^{-1}=a^1frac{(sigma_1coshtheta+tau_1sinhtheta)+(tau_1coshtheta+sigma_1sinhtheta)}{sqrt2}+a^2frac{sigma_2+tau_2}{sqrt2}+cdots+a^nfrac{sigma_n+tau_n}{sqrt2}$$



                                  $$=a^1e_1exptheta+a^2e_2+cdots+a^ne_n$$



                                  Circular rotation by $theta$ from $e_1$ towards $e_2$ (note that $sigma_2sigma_1$ commutes with $tau_2tau_1$, and both square to $-1$ so Euler's formula applies) :



                                  $$R=expBig(fractheta2(sigma_2sigma_1-tau_2tau_1)Big)=expBig(fractheta2sigma_2sigma_1Big)expBig(-fractheta2tau_2tau_1Big)$$



                                  $$=Big(sigma_1cosfractheta2+sigma_2sinfractheta2Big)sigma_1Big(-tau_1cosfractheta2-tau_2sinfractheta2Big)tau_1$$



                                  $$RaR^{-1}=a^1frac{(sigma_1costheta+sigma_2sintheta)+(tau_1costheta+tau_2sintheta)}{sqrt2}+a^2frac{(-sigma_1sintheta+sigma_2costheta)+(-tau_1sintheta+tau_2costheta)}{sqrt2}+a^3frac{sigma_3+tau_3}{sqrt2}+cdots+a^nfrac{sigma_n+tau_n}{sqrt2}$$



                                  $$=a^1(e_1costheta+e_2sintheta)+a^2(-e_1sintheta+e_2costheta)+a^3e_3+cdots+a^ne_n$$



                                  Hyperbolic rotation by $theta$ from $e_1$ towards $e_2$:



                                  $$R=expBig(fractheta2(tau_2sigma_1-sigma_2tau_1)Big)=expBig(fractheta2tau_2sigma_1Big)expBig(-fractheta2sigma_2tau_1Big)$$



                                  $$=Big(sigma_1coshfractheta2+tau_2sinhfractheta2Big)sigma_1Big(-tau_1coshfractheta2-sigma_2sinhfractheta2Big)tau_1$$



                                  $$RaR^{-1}=a^1frac{(sigma_1coshtheta+tau_2sinhtheta)+(tau_1coshtheta+sigma_2sinhtheta)}{sqrt2}+a^2frac{(tau_1sinhtheta+sigma_2coshtheta)+(sigma_1sinhtheta+tau_2coshtheta)}{sqrt2}+a^3frac{sigma_3+tau_3}{sqrt2}+cdots+a^nfrac{sigma_n+tau_n}{sqrt2}$$



                                  $$=a^1(e_1coshtheta+e_2sinhtheta)+a^2(e_1sinhtheta+e_2coshtheta)+a^3e_3+cdots+a^ne_n$$



                                  Shear by $theta$ from $e_1$ towards $e_2$:



                                  $$R=expBig(fractheta2e_2wedgevarepsilon^1Big)=1+fractheta2e_2wedgevarepsilon^1$$



                                  $$=-frac14Big(e_1-varepsilon^1+fractheta4e_2Big)Big(e_1-varepsilon^1-fractheta4e_2Big)Big(e_1+varepsilon^1+fractheta4e_2Big)Big(e_1+varepsilon^1-fractheta4e_2Big)$$



                                  $$RaR^{-1}=a^1(e_1+theta e_2)+a^2e_2+a^3e_3+cdots+a^ne_n$$





                                  This post is too long...



                                  Some of this is described in Doran, Hestenes, Sommen, & Van Acker's "Lie Groups as Spin Groups": http://geocalc.clas.asu.edu/html/GeoAlg.html . (Beware that $E,e$ have different meanings from mine, though $K$ is the same.)






                                  share|cite|improve this answer


























                                    0












                                    0








                                    0






                                    There are actually two ways to do this (in addition to the other answers'). They both use the same background, as follows.



                                    Given an $n$-dimensional real vector space $V$, we can construct a $2n$-dimensional space $Voplus V^*$, using the dual space $V^*$ (the set of all linear functions from $V$ to $mathbb R$). Define a dot product on $Voplus V^*$ by



                                    $$(a+alpha)cdot(b+beta)=acdotbeta+alphacdot b=beta(a)+alpha(b)$$



                                    where $ain V,alphain V^*,bin V,betain V^*$. Thus the dot product of any two vectors in $V$ is $0$ (so we don't have an "inner product" or "metric tensor" on $V$.)



                                    Take a basis ${e_i}={e_1,e_2,cdots,e_n}$ for $V$, and the dual basis ${varepsilon^i}$ for $V^*$, satisfying $varepsilon^icdot e_i=1$ and otherwise $varepsilon^icdot e_j=0$. These together form a basis for $Voplus V^*$. We can make a different basis ${sigma_i,tau_i}$, defined by



                                    $$sigma_i=frac{e_i+varepsilon^i}{sqrt2},qquadtau_i=frac{e_i-varepsilon^i}{sqrt2}.$$



                                    (If you want to avoid $sqrt2$ for some reason (like using $mathbb Q$ as the scalar field), then define $sigma_i=frac12e_i+varepsilon^i,;tau_i=frac12e_i-varepsilon^i$. The result is the same.)



                                    It can be seen that $sigma_icdottau_j=0$, and $sigma_icdotsigma_i=1=-tau_icdottau_i$ and otherwise $sigma_icdotsigma_j=0=tau_icdottau_j$. So we have an orthonormal basis of $n$ vectors $sigma_i$ squaring to ${^+}1$ and $n$ vectors $tau_i$ squaring to ${^-}1$, showing that $Voplus V^*$ is isomorphic to the pseudo-Euclidean space $mathbb R^{n,n}$.





                                    Method 1: Bivectors



                                    Any $ntimes n$ matrix (or linear transformation on $V$) can be represented by a bivector in the geometric algebra over $Voplus V^*$. Given the scalar components $M^i!_j$ of a matrix, the corresponding bivector is



                                    $$M=sum_{i,j}M^i!_j,e_iwedgevarepsilon^j.$$



                                    For example, with $n=2$, we would have



                                    $$M=begin{pmatrix}M^1!_1e_1wedgevarepsilon^1+M^1!_2e_1wedgevarepsilon^2 \ +M^2!_1e_2wedgevarepsilon^1+M^2!_2e_2wedgevarepsilon^2 end{pmatrix}congbegin{bmatrix}M^1!_1 & M^1!_2 \ M^2!_1 & M^2!_2end{bmatrix}.$$



                                    The transformation applying to a vector $a=sum_ia^ie_i$ is



                                    $$amapsto Mbullet a=M,llcorner,a=Mtimes a=-abullet M$$



                                    $$=sum_{i,j,k}M^i!_ja^k(e_iwedgevarepsilon^j)bullet e_k$$



                                    $$=sum_{i,j,k}M^i!_ja^kbig(e_i(varepsilon^jcdot e_k)-(e_icdot e_k)varepsilon^jbig)$$



                                    $$=sum_{i,j,k}M^i!_ja^kbig(e_i(delta^j_k)-0big)$$



                                    $$=sum_{i,j}M^i!_ja^je_i.$$



                                    There I used the bac-cab identity $(awedge b)bullet c=a(bcdot c)-(acdot c)b$, and the products $bullet,llcornertimes$ defined here.



                                    (Now, much of the remainder of this post is about a single bivector. For the product of two bivectors, you may skip to the highlighted equation.)



                                    The pullback/adjoint transformation on $V^*$ is $alphamapstoalphabullet M=-Mbulletalpha=sum_{i,j}alpha_iM^i!_jvarepsilon^j$. This relates to ordinary matrix multiplication, in that row vectors go on the left, vs column vectors on the right. Also relevant is the multivector identity $(A,lrcorner,B),llcorner,C=A,lrcorner,(B,llcorner,C)$, which implies $(alphabullet M)cdot b=alphacdot(Mbullet b)$. This relates to the associativity of matrix multiplication, or the definition of the adjoint.





                                    The outermorphism can be calculated using the exterior powers of $M$ :



                                    $$(Mbullet a)wedge(Mbullet b)=frac{Mwedge M}{2}bullet(awedge b)$$



                                    $$(Mbullet a)wedge(Mbullet b)wedge(Mbullet c)=frac{Mwedge Mwedge M}{6}bullet(awedge bwedge c)$$



                                    $$(Mbullet a_1)wedge(Mbullet a_2)wedgecdotswedge(Mbullet a_n)=frac{1(wedge M)^n}{n!}bullet(a_1wedge a_2wedgecdotswedge a_n)$$



                                    $$=frac{Mwedge Mwedgecdotswedge M}{1;cdot;2;cdot;cdots;cdot;n}bullet(a_1wedge a_2wedgecdotswedge a_n)$$



                                    (This notation, $1(wedge M)^n$, is sometimes replaced with $wedge^nM$ or $M^{wedge n}$, but those don't look right to me.)



                                    I'll prove the trivector case; the others are similar. I'll use the identities $A,llcorner,(Bwedge C)=(A,llcorner,B),llcorner,C$, and $a,lrcorner,(Bwedge C)=(a,lrcorner,B)wedge C+(-1)^kBwedge(a,lrcorner,C)$ when $a$ has grade $1$ and $B$ has grade $k$.



                                    $$frac{Mwedge Mwedge M}{6}bullet(awedge bwedge c)$$



                                    $$=bigg(frac{Mwedge Mwedge M}{6}bullet abigg)bullet(bwedge c)$$



                                    $$=bigg(frac{Mwedge Mwedge(Mbullet a)+Mwedge(Mbullet a)wedge M+(Mbullet a)wedge Mwedge M}{6}bigg)bullet(bwedge c)$$



                                    (bivector $wedge$ is commutative, so these are all the same)



                                    $$=bigg(frac{(Mbullet a)wedge Mwedge M}{2}bigg)bullet(bwedge c)$$



                                    $$=bigg(frac{(Mbullet a)wedge Mwedge M}{2}bullet bbigg)bullet c$$



                                    $$=bigg(frac{(Mbullet a)wedge Mwedge(Mbullet b)+(Mbullet a)wedge(Mbullet b)wedge M+big((Mbullet a)cdot bbig)wedge Mwedge M}{2}bigg)bullet c$$



                                    (remember, all vectors in $V$ are orthogonal, so $(Mbullet a)cdot b=0$ )



                                    $$=Big((Mbullet a)wedge(Mbullet b)wedge MBig)bullet c$$



                                    $$=(Mbullet a)wedge(Mbullet b)wedge(Mbullet c)+(Mbullet a)wedgebig((Mbullet b)cdot cbig)wedge M+big((Mbullet a)cdot cbig)wedge(Mbullet b)wedge M$$



                                    $$=(Mbullet a)wedge(Mbullet b)wedge(Mbullet c).$$



                                    This provides a formula for the determinant. Take the $n$-blade $E=e_1wedge e_2wedgecdotswedge e_n=e_1e_2cdots e_n$. (This is basis-dependent, though unique up to a scalar.) Then



                                    $$frac{1(wedge M)^n}{n!}bullet E=(det M)E.$$



                                    And, using the commutator identity $Atimes(BC)=(Atimes B)C+B(Atimes C)$, we find the trace:



                                    $$ME=M,lrcorner,E+Mtimes E+Mwedge E=0+Mtimes E+0$$



                                    $$=(Mtimes e_1)e_2cdots e_n+e_1(Mtimes e_2)cdots e_n+cdots+e_1e_2cdots(Mtimes e_n)$$



                                    $$=Big(sum_iM^i!_1e_iBig)e_2cdots e_n+e_1Big(sum_iM^i!_2e_iBig)cdots e_n+cdots+e_1e_2cdotsBig(sum_iM^i!_ne_iBig)$$



                                    (most of the terms disappear because $e_ie_i=0$ )



                                    $$=(M^1!_1e_1)e_2cdots e_n+e_1(M^2!_2e_2)cdots e_n+cdots+e_1e_2cdots(M^n!_ne_n)$$



                                    $$=(M^1!_1+M^2!_2+cdots+M^n!_n)e_1e_2cdots e_n=(text{tr},M)E.$$



                                    More generally, the characteristic polynomial coefficients are determined by the geometric product



                                    $$frac{1(wedge M)^k}{k!}E=c_kE.$$



                                    These can be combined into (a variant of) the polynomial itself. With the exterior exponential defined by



                                    $$exp!wedge(A)=sum_kfrac{1(wedge A)^k}{k!}=1+A+frac{Awedge A}2+frac{Awedge Awedge A}{6}+cdots,$$



                                    we have



                                    $$big(exp!wedge(tM)big)E=Big(sum_kc_kt^kBig)E=big(1+(text{tr},M)t+c_2t^2+cdots+(det M)t^nbig)E$$



                                    $$=t^nbigg(frac{1}{t^n}+frac{text{tr},M}{t^{n-1}}+frac{c_2}{t^{n-2}}+cdots+frac{det M}{1}bigg)E.$$





                                    The reverse of a multivector is $tilde A=sum_k(-1)^{k(k-1)/2}langle Arangle_k$; the reverse of a product is $(AB)^sim=tilde Btilde A$. It can be shown that the scalar product of two blades, with one reversed, is the determinant of the matrix of dot products of the blades' component vectors. For example, $(a_2wedge a_1)bullet(b_1wedge b_2)=(a_1cdot b_1)(a_2cdot b_2)-(a_1cdot b_2)(a_2cdot b_1)$.



                                    Given the above, and the blades $E=e_1cdots e_n$ and $cal E=varepsilon^1cdotsvarepsilon^n$, it follows that $Ebullettilde{cal E}=1$. The full geometric product happens to be the exterior exponential $Etilde{cal E}=exp!wedge K$, where $K=sum_ie_iwedgevarepsilon^i$ represents the identity transformation. So we can multiply this equation



                                    $$frac{1(wedge M)^k}{k!}E=c_kE$$



                                    by $tilde{cal E}$ to get



                                    $$frac{1(wedge M)^k}{k!}exp!wedge K=c_kexp!wedge K$$



                                    and take the scalar part, to isolate the polynomial coefficients



                                    $$frac{1(wedge M)^k}{k!}bulletfrac{1(wedge K)^k}{k!}=c_k.$$



                                    Or, multiply the $exp!wedge(tM)$ equation by $tilde{cal E}$ to get



                                    $$big(exp!wedge(tM)big)exp!wedge K=Big(sum_kc_kt^kBig)exp!wedge K.$$



                                    This can be wedged with $exp!wedge(-K)$ to isolate the polynomial, because $(exp!wedge A)wedge(exp!wedge B)=exp!wedge(A+B)$ if $A$ or $B$ has even grade.



                                    We also have the adjugate, which can be used to calculate the matrix inverse:



                                    $$frac{1(wedge M)^{n-1}}{(n-1)!}bulletfrac{1(wedge K)^n}{n!}=text{adj},M.$$





                                    The geometric product of two transformation bivectors, $M$ and $N$, has three parts (with grades $0,2,4$); each one is significant.




                                    $$MN=Mbullet N+Mtimes N+Mwedge N$$




                                    The first part is the trace of the matrix product:



                                    $$Mbullet N=sum_{i,j,k,l}M^i!_jN^k!_l(e_iwedgevarepsilon^j)bullet(e_kwedgevarepsilon^l)$$



                                    $$=sum_{i,j,k,l}M^i!_jN^k!_l(delta^j_kdelta^l_i)$$



                                    $$=sum_{i,j}M^i!_jN^j!_i=text{tr}(Mboxdot N).$$



                                    The second part is the commutator of matrix products:



                                    $$Mtimes N=sum_{i,j,k,l}M^i!_jN^k!_l(e_iwedgevarepsilon^j)times(e_kwedgevarepsilon^l)$$



                                    $$=sum_{i,j,k,l}M^i!_jN^k!_l(delta^j_ke_iwedgevarepsilon^l+delta^l_ivarepsilon^jwedge e_k)$$



                                    $$=sum_{i,j,l}M^i!_jN^j!_le_iwedgevarepsilon^l-sum_{j,k,l}N^k!_lM^l!_je_kwedgevarepsilon^j=Mboxdot N-Nboxdot M.$$



                                    (This can also be justified by Jacobi's identity $(Mtimes N)times a=Mtimes(Ntimes a)-Ntimes(Mtimes a)$.)



                                    The third part is similar to an outermorphism; when applied to a bivector from $V$, it produces



                                    $$(Mwedge N)bullet(awedge b)=(Mbullet a)wedge(Nbullet b)+(Nbullet a)wedge(Mbullet b).$$



                                    Unfortunately, there doesn't seem to be a simple expression for the ordinary matrix product. This is the best I could find, again using $K=sum_ie_iwedgevarepsilon^i$:



                                    $$Mboxdot N=frac{Mtimes N+(Mbullet K)N+(Nbullet K)M-(Mwedge N)bullet K}{2}=sum_{i,j,k}M^i!_jN^j!_ke_iwedgevarepsilon^k$$



                                    Note that $Mbullet K=text{tr},M$. And, of course, we have the defining relation $(Mboxdot N)bullet a=Mbullet(Nbullet a)$.



                                    (That formula is unnecessary for transformations between different spaces, say $V$ and $W$. Using the geometric algebra over $Voplus V^*oplus Woplus W^*$, with basis ${e_i,varepsilon^i,f_i,phi^i}$, if $M=sum_{i,j}M^i!_je_iwedgevarepsilon^j$ maps $V$ to itself, and $N=sum_{i,j}N^i!_je_iwedgephi^j$ maps $W$ to $V$, then the matrix product is simply $Mboxdot N=Mtimes N$.)





                                    Method 2: Rotors



                                    Any general linear transformation on $V$ can be represented by a rotor $R=r_{2k}r_{2k-1}cdots r_2r_1$, a geometric product of an even number of invertible vectors in $Voplus V^*$. Each vector squares to a positive or negative number. If the numbers of positive and negative vectors are both even, then the transformation's determinant is positive; if they're both odd, then the determinant is negative. The transformation is done by the "sandwich product"



                                    $$amapsto RaR^{-1}=r_{2k}cdots r_2r_1ar_1^{-1}r_2^{-1}cdots r_{2k}^{-1}.$$



                                    Any such transformation respects the geometric product: $(RAR^{-1})(RBR^{-1})=R(AB)R^{-1}$; in particular, for vectors, $(RaR^{-1})cdot(RbR^{-1})=R(acdot b)R^{-1}=acdot b$, and $(RaR^{-1})wedge(RbR^{-1})=R(awedge b)R^{-1}$. So the outermorphism uses the same formula for an arbitrary multivector: $Amapsto RAR^{-1}$.



                                    The composition of two transformations, with rotors $R$ and $S$, is represented by the geometric product $RS$:



                                    $$amapsto R(SaS^{-1})R^{-1}=(RS)a(RS)^{-1}.$$





                                    Here are some examples, using $sigma_i=(e_i+varepsilon^i)/sqrt2,;tau_i=(e_i-varepsilon^i)/sqrt2$, and



                                    $$a=sum_ia^ie_i=a^1frac{sigma_1+tau_1}{sqrt2}+a^2frac{sigma_2+tau_2}{sqrt2}+cdots+a^nfrac{sigma_n+tau_n}{sqrt2}.$$



                                    Reflection along $e_1$:



                                    $$R=tau_1sigma_1=e_1wedgevarepsilon^1$$



                                    $$RaR^{-1}=a^1frac{-sigma_1-tau_1}{sqrt2}+a^2frac{sigma_2+tau_2}{sqrt2}+cdots+a^nfrac{sigma_n+tau_n}{sqrt2}$$



                                    $$=-a^1e_1+a^2e_2+cdots+a^ne_n$$



                                    Stretching by factor $exptheta$ along $e_1$:



                                    $$R=expBig(fractheta2tau_1sigma_1Big)=coshfractheta2+tau_1sigma_1sinhfractheta2$$



                                    $$=Big(sigma_1coshfractheta2+tau_1sinhfractheta2Big)sigma_1$$



                                    $$RaR^{-1}=a^1frac{(sigma_1coshtheta+tau_1sinhtheta)+(tau_1coshtheta+sigma_1sinhtheta)}{sqrt2}+a^2frac{sigma_2+tau_2}{sqrt2}+cdots+a^nfrac{sigma_n+tau_n}{sqrt2}$$



                                    $$=a^1e_1exptheta+a^2e_2+cdots+a^ne_n$$



                                    Circular rotation by $theta$ from $e_1$ towards $e_2$ (note that $sigma_2sigma_1$ commutes with $tau_2tau_1$, and both square to $-1$ so Euler's formula applies) :



                                    $$R=expBig(fractheta2(sigma_2sigma_1-tau_2tau_1)Big)=expBig(fractheta2sigma_2sigma_1Big)expBig(-fractheta2tau_2tau_1Big)$$



                                    $$=Big(sigma_1cosfractheta2+sigma_2sinfractheta2Big)sigma_1Big(-tau_1cosfractheta2-tau_2sinfractheta2Big)tau_1$$



                                    $$RaR^{-1}=a^1frac{(sigma_1costheta+sigma_2sintheta)+(tau_1costheta+tau_2sintheta)}{sqrt2}+a^2frac{(-sigma_1sintheta+sigma_2costheta)+(-tau_1sintheta+tau_2costheta)}{sqrt2}+a^3frac{sigma_3+tau_3}{sqrt2}+cdots+a^nfrac{sigma_n+tau_n}{sqrt2}$$



                                    $$=a^1(e_1costheta+e_2sintheta)+a^2(-e_1sintheta+e_2costheta)+a^3e_3+cdots+a^ne_n$$



                                    Hyperbolic rotation by $theta$ from $e_1$ towards $e_2$:



                                    $$R=expBig(fractheta2(tau_2sigma_1-sigma_2tau_1)Big)=expBig(fractheta2tau_2sigma_1Big)expBig(-fractheta2sigma_2tau_1Big)$$



                                    $$=Big(sigma_1coshfractheta2+tau_2sinhfractheta2Big)sigma_1Big(-tau_1coshfractheta2-sigma_2sinhfractheta2Big)tau_1$$



                                    $$RaR^{-1}=a^1frac{(sigma_1coshtheta+tau_2sinhtheta)+(tau_1coshtheta+sigma_2sinhtheta)}{sqrt2}+a^2frac{(tau_1sinhtheta+sigma_2coshtheta)+(sigma_1sinhtheta+tau_2coshtheta)}{sqrt2}+a^3frac{sigma_3+tau_3}{sqrt2}+cdots+a^nfrac{sigma_n+tau_n}{sqrt2}$$



                                    $$=a^1(e_1coshtheta+e_2sinhtheta)+a^2(e_1sinhtheta+e_2coshtheta)+a^3e_3+cdots+a^ne_n$$



                                    Shear by $theta$ from $e_1$ towards $e_2$:



                                    $$R=expBig(fractheta2e_2wedgevarepsilon^1Big)=1+fractheta2e_2wedgevarepsilon^1$$



                                    $$=-frac14Big(e_1-varepsilon^1+fractheta4e_2Big)Big(e_1-varepsilon^1-fractheta4e_2Big)Big(e_1+varepsilon^1+fractheta4e_2Big)Big(e_1+varepsilon^1-fractheta4e_2Big)$$



                                    $$RaR^{-1}=a^1(e_1+theta e_2)+a^2e_2+a^3e_3+cdots+a^ne_n$$





                                    This post is too long...



                                    Some of this is described in Doran, Hestenes, Sommen, & Van Acker's "Lie Groups as Spin Groups": http://geocalc.clas.asu.edu/html/GeoAlg.html . (Beware that $E,e$ have different meanings from mine, though $K$ is the same.)






                                    share|cite|improve this answer














                                    There are actually two ways to do this (in addition to the other answers'). They both use the same background, as follows.



                                    Given an $n$-dimensional real vector space $V$, we can construct a $2n$-dimensional space $Voplus V^*$, using the dual space $V^*$ (the set of all linear functions from $V$ to $mathbb R$). Define a dot product on $Voplus V^*$ by



                                    $$(a+alpha)cdot(b+beta)=acdotbeta+alphacdot b=beta(a)+alpha(b)$$



                                    where $ain V,alphain V^*,bin V,betain V^*$. Thus the dot product of any two vectors in $V$ is $0$ (so we don't have an "inner product" or "metric tensor" on $V$.)



                                    Take a basis ${e_i}={e_1,e_2,cdots,e_n}$ for $V$, and the dual basis ${varepsilon^i}$ for $V^*$, satisfying $varepsilon^icdot e_i=1$ and otherwise $varepsilon^icdot e_j=0$. These together form a basis for $Voplus V^*$. We can make a different basis ${sigma_i,tau_i}$, defined by



                                    $$sigma_i=frac{e_i+varepsilon^i}{sqrt2},qquadtau_i=frac{e_i-varepsilon^i}{sqrt2}.$$



                                    (If you want to avoid $sqrt2$ for some reason (like using $mathbb Q$ as the scalar field), then define $sigma_i=frac12e_i+varepsilon^i,;tau_i=frac12e_i-varepsilon^i$. The result is the same.)



                                    It can be seen that $sigma_icdottau_j=0$, and $sigma_icdotsigma_i=1=-tau_icdottau_i$ and otherwise $sigma_icdotsigma_j=0=tau_icdottau_j$. So we have an orthonormal basis of $n$ vectors $sigma_i$ squaring to ${^+}1$ and $n$ vectors $tau_i$ squaring to ${^-}1$, showing that $Voplus V^*$ is isomorphic to the pseudo-Euclidean space $mathbb R^{n,n}$.





                                    Method 1: Bivectors



                                    Any $ntimes n$ matrix (or linear transformation on $V$) can be represented by a bivector in the geometric algebra over $Voplus V^*$. Given the scalar components $M^i!_j$ of a matrix, the corresponding bivector is



                                    $$M=sum_{i,j}M^i!_j,e_iwedgevarepsilon^j.$$



                                    For example, with $n=2$, we would have



                                    $$M=begin{pmatrix}M^1!_1e_1wedgevarepsilon^1+M^1!_2e_1wedgevarepsilon^2 \ +M^2!_1e_2wedgevarepsilon^1+M^2!_2e_2wedgevarepsilon^2 end{pmatrix}congbegin{bmatrix}M^1!_1 & M^1!_2 \ M^2!_1 & M^2!_2end{bmatrix}.$$



                                    The transformation applying to a vector $a=sum_ia^ie_i$ is



                                    $$amapsto Mbullet a=M,llcorner,a=Mtimes a=-abullet M$$



                                    $$=sum_{i,j,k}M^i!_ja^k(e_iwedgevarepsilon^j)bullet e_k$$



                                    $$=sum_{i,j,k}M^i!_ja^kbig(e_i(varepsilon^jcdot e_k)-(e_icdot e_k)varepsilon^jbig)$$



                                    $$=sum_{i,j,k}M^i!_ja^kbig(e_i(delta^j_k)-0big)$$



                                    $$=sum_{i,j}M^i!_ja^je_i.$$



                                    There I used the bac-cab identity $(awedge b)bullet c=a(bcdot c)-(acdot c)b$, and the products $bullet,llcornertimes$ defined here.



                                    (Now, much of the remainder of this post is about a single bivector. For the product of two bivectors, you may skip to the highlighted equation.)



                                    The pullback/adjoint transformation on $V^*$ is $alphamapstoalphabullet M=-Mbulletalpha=sum_{i,j}alpha_iM^i!_jvarepsilon^j$. This relates to ordinary matrix multiplication, in that row vectors go on the left, vs column vectors on the right. Also relevant is the multivector identity $(A,lrcorner,B),llcorner,C=A,lrcorner,(B,llcorner,C)$, which implies $(alphabullet M)cdot b=alphacdot(Mbullet b)$. This relates to the associativity of matrix multiplication, or the definition of the adjoint.





                                    The outermorphism can be calculated using the exterior powers of $M$ :



                                    $$(Mbullet a)wedge(Mbullet b)=frac{Mwedge M}{2}bullet(awedge b)$$



                                    $$(Mbullet a)wedge(Mbullet b)wedge(Mbullet c)=frac{Mwedge Mwedge M}{6}bullet(awedge bwedge c)$$



                                    $$(Mbullet a_1)wedge(Mbullet a_2)wedgecdotswedge(Mbullet a_n)=frac{1(wedge M)^n}{n!}bullet(a_1wedge a_2wedgecdotswedge a_n)$$



                                    $$=frac{Mwedge Mwedgecdotswedge M}{1;cdot;2;cdot;cdots;cdot;n}bullet(a_1wedge a_2wedgecdotswedge a_n)$$



                                    (This notation, $1(wedge M)^n$, is sometimes replaced with $wedge^nM$ or $M^{wedge n}$, but those don't look right to me.)



                                    I'll prove the trivector case; the others are similar. I'll use the identities $A,llcorner,(Bwedge C)=(A,llcorner,B),llcorner,C$, and $a,lrcorner,(Bwedge C)=(a,lrcorner,B)wedge C+(-1)^kBwedge(a,lrcorner,C)$ when $a$ has grade $1$ and $B$ has grade $k$.



                                    $$frac{Mwedge Mwedge M}{6}bullet(awedge bwedge c)$$



                                    $$=bigg(frac{Mwedge Mwedge M}{6}bullet abigg)bullet(bwedge c)$$



                                    $$=bigg(frac{Mwedge Mwedge(Mbullet a)+Mwedge(Mbullet a)wedge M+(Mbullet a)wedge Mwedge M}{6}bigg)bullet(bwedge c)$$



                                    (bivector $wedge$ is commutative, so these are all the same)



                                    $$=bigg(frac{(Mbullet a)wedge Mwedge M}{2}bigg)bullet(bwedge c)$$



                                    $$=bigg(frac{(Mbullet a)wedge Mwedge M}{2}bullet bbigg)bullet c$$



                                    $$=bigg(frac{(Mbullet a)wedge Mwedge(Mbullet b)+(Mbullet a)wedge(Mbullet b)wedge M+big((Mbullet a)cdot bbig)wedge Mwedge M}{2}bigg)bullet c$$



                                    (remember, all vectors in $V$ are orthogonal, so $(Mbullet a)cdot b=0$ )



                                    $$=Big((Mbullet a)wedge(Mbullet b)wedge MBig)bullet c$$



                                    $$=(Mbullet a)wedge(Mbullet b)wedge(Mbullet c)+(Mbullet a)wedgebig((Mbullet b)cdot cbig)wedge M+big((Mbullet a)cdot cbig)wedge(Mbullet b)wedge M$$



                                    $$=(Mbullet a)wedge(Mbullet b)wedge(Mbullet c).$$



                                    This provides a formula for the determinant. Take the $n$-blade $E=e_1wedge e_2wedgecdotswedge e_n=e_1e_2cdots e_n$. (This is basis-dependent, though unique up to a scalar.) Then



                                    $$frac{1(wedge M)^n}{n!}bullet E=(det M)E.$$



                                    And, using the commutator identity $Atimes(BC)=(Atimes B)C+B(Atimes C)$, we find the trace:



                                    $$ME=M,lrcorner,E+Mtimes E+Mwedge E=0+Mtimes E+0$$



                                    $$=(Mtimes e_1)e_2cdots e_n+e_1(Mtimes e_2)cdots e_n+cdots+e_1e_2cdots(Mtimes e_n)$$



                                    $$=Big(sum_iM^i!_1e_iBig)e_2cdots e_n+e_1Big(sum_iM^i!_2e_iBig)cdots e_n+cdots+e_1e_2cdotsBig(sum_iM^i!_ne_iBig)$$



                                    (most of the terms disappear because $e_ie_i=0$ )



                                    $$=(M^1!_1e_1)e_2cdots e_n+e_1(M^2!_2e_2)cdots e_n+cdots+e_1e_2cdots(M^n!_ne_n)$$



                                    $$=(M^1!_1+M^2!_2+cdots+M^n!_n)e_1e_2cdots e_n=(text{tr},M)E.$$



                                    More generally, the characteristic polynomial coefficients are determined by the geometric product



                                    $$frac{1(wedge M)^k}{k!}E=c_kE.$$



                                    These can be combined into (a variant of) the polynomial itself. With the exterior exponential defined by



                                    $$exp!wedge(A)=sum_kfrac{1(wedge A)^k}{k!}=1+A+frac{Awedge A}2+frac{Awedge Awedge A}{6}+cdots,$$



                                    we have



                                    $$big(exp!wedge(tM)big)E=Big(sum_kc_kt^kBig)E=big(1+(text{tr},M)t+c_2t^2+cdots+(det M)t^nbig)E$$



                                    $$=t^nbigg(frac{1}{t^n}+frac{text{tr},M}{t^{n-1}}+frac{c_2}{t^{n-2}}+cdots+frac{det M}{1}bigg)E.$$





                                    The reverse of a multivector is $tilde A=sum_k(-1)^{k(k-1)/2}langle Arangle_k$; the reverse of a product is $(AB)^sim=tilde Btilde A$. It can be shown that the scalar product of two blades, with one reversed, is the determinant of the matrix of dot products of the blades' component vectors. For example, $(a_2wedge a_1)bullet(b_1wedge b_2)=(a_1cdot b_1)(a_2cdot b_2)-(a_1cdot b_2)(a_2cdot b_1)$.



                                    Given the above, and the blades $E=e_1cdots e_n$ and $cal E=varepsilon^1cdotsvarepsilon^n$, it follows that $Ebullettilde{cal E}=1$. The full geometric product happens to be the exterior exponential $Etilde{cal E}=exp!wedge K$, where $K=sum_ie_iwedgevarepsilon^i$ represents the identity transformation. So we can multiply this equation



                                    $$frac{1(wedge M)^k}{k!}E=c_kE$$



                                    by $tilde{cal E}$ to get



                                    $$frac{1(wedge M)^k}{k!}exp!wedge K=c_kexp!wedge K$$



                                    and take the scalar part, to isolate the polynomial coefficients



                                    $$frac{1(wedge M)^k}{k!}bulletfrac{1(wedge K)^k}{k!}=c_k.$$



                                    Or, multiply the $exp!wedge(tM)$ equation by $tilde{cal E}$ to get



                                    $$big(exp!wedge(tM)big)exp!wedge K=Big(sum_kc_kt^kBig)exp!wedge K.$$



                                    This can be wedged with $exp!wedge(-K)$ to isolate the polynomial, because $(exp!wedge A)wedge(exp!wedge B)=exp!wedge(A+B)$ if $A$ or $B$ has even grade.



                                    We also have the adjugate, which can be used to calculate the matrix inverse:



                                    $$frac{1(wedge M)^{n-1}}{(n-1)!}bulletfrac{1(wedge K)^n}{n!}=text{adj},M.$$





                                    The geometric product of two transformation bivectors, $M$ and $N$, has three parts (with grades $0,2,4$); each one is significant.




$$MN=M\bullet N+M\times N+M\wedge N$$




                                    The first part is the trace of the matrix product:



$$M\bullet N=\sum_{i,j,k,l}M^i\!_jN^k\!_l(e_i\wedge\varepsilon^j)\bullet(e_k\wedge\varepsilon^l)$$

$$=\sum_{i,j,k,l}M^i\!_jN^k\!_l(\delta^j_k\delta^l_i)$$

$$=\sum_{i,j}M^i\!_jN^j\!_i=\text{tr}(M\boxdot N).$$



                                    The second part is the commutator of matrix products:



$$M\times N=\sum_{i,j,k,l}M^i\!_jN^k\!_l(e_i\wedge\varepsilon^j)\times(e_k\wedge\varepsilon^l)$$

$$=\sum_{i,j,k,l}M^i\!_jN^k\!_l(\delta^j_ke_i\wedge\varepsilon^l+\delta^l_i\varepsilon^j\wedge e_k)$$

$$=\sum_{i,j,l}M^i\!_jN^j\!_le_i\wedge\varepsilon^l-\sum_{j,k,l}N^k\!_lM^l\!_je_k\wedge\varepsilon^j=M\boxdot N-N\boxdot M.$$



(This can also be justified by Jacobi's identity $(M\times N)\times a=M\times(N\times a)-N\times(M\times a)$.)



                                    The third part is similar to an outermorphism; when applied to a bivector from $V$, it produces



$$(M\wedge N)\bullet(a\wedge b)=(M\bullet a)\wedge(N\bullet b)+(N\bullet a)\wedge(M\bullet b).$$



Unfortunately, there doesn't seem to be a simple expression for the ordinary matrix product. This is the best I could find, again using $K=\sum_ie_i\wedge\varepsilon^i$:



$$M\boxdot N=\frac{M\times N+(M\bullet K)N+(N\bullet K)M-(M\wedge N)\bullet K}{2}=\sum_{i,j,k}M^i\!_jN^j\!_ke_i\wedge\varepsilon^k$$



Note that $M\bullet K=\text{tr}\,M$. And, of course, we have the defining relation $(M\boxdot N)\bullet a=M\bullet(N\bullet a)$.



(That formula is unnecessary for transformations between different spaces, say $V$ and $W$. Using the geometric algebra over $V\oplus V^*\oplus W\oplus W^*$, with basis $\{e_i,\varepsilon^i,f_i,\phi^i\}$, if $M=\sum_{i,j}M^i\!_je_i\wedge\varepsilon^j$ maps $V$ to itself, and $N=\sum_{i,j}N^i\!_je_i\wedge\phi^j$ maps $W$ to $V$, then the matrix product is simply $M\boxdot N=M\times N$.)
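For what it's worth, the matrix-level content of these parts is easy to check numerically: the scalar part is $\text{tr}(M\boxdot N)$, the commutator part corresponds to $M\boxdot N-N\boxdot M$, and the defining relation is just composition of linear maps. A short numpy sanity check (plain matrix arithmetic, not a geometric-algebra computation):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
M, N = rng.normal(size=(n, n)), rng.normal(size=(n, n))
a = rng.normal(size=n)

# Scalar part <-> trace of the matrix product, written out in components
assert np.isclose(np.trace(M @ N),
                  sum(M[i, j] * N[j, i] for i in range(n) for j in range(n)))

# Commutator part <-> matrix commutator (always traceless)
commutator = M @ N - N @ M
assert np.isclose(np.trace(commutator), 0.0)

# Defining relation (M ⊡ N) • a = M • (N • a): composition is the matrix product
assert np.allclose((M @ N) @ a, M @ (N @ a))
```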





                                    Method 2: Rotors



Any general linear transformation on $V$ can be represented by a rotor $R=r_{2k}r_{2k-1}\cdots r_2r_1$, a geometric product of an even number of invertible vectors in $V\oplus V^*$. Each vector squares to a positive or negative number. If the numbers of positive and negative vectors are both even, then the transformation's determinant is positive; if they're both odd, then the determinant is negative. The transformation is done by the "sandwich product"



$$a\mapsto RaR^{-1}=r_{2k}\cdots r_2r_1ar_1^{-1}r_2^{-1}\cdots r_{2k}^{-1}.$$



Any such transformation respects the geometric product: $(RAR^{-1})(RBR^{-1})=R(AB)R^{-1}$; in particular, for vectors, $(RaR^{-1})\cdot(RbR^{-1})=R(a\cdot b)R^{-1}=a\cdot b$, and $(RaR^{-1})\wedge(RbR^{-1})=R(a\wedge b)R^{-1}$. So the outermorphism uses the same formula for an arbitrary multivector: $A\mapsto RAR^{-1}$.
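The step $(RAR^{-1})(RBR^{-1})=R(AB)R^{-1}$ is pure associativity (the inner $R^{-1}R$ cancels), so it can be sanity-checked in any associative algebra; here is a tiny numpy sketch using invertible matrices as a stand-in (the setup is mine, not a rotor computation):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
# Any associative algebra will do for the conjugation identity; use n x n matrices as a stand-in
A, B = rng.normal(size=(n, n)), rng.normal(size=(n, n))
R = rng.normal(size=(n, n)) + n * np.eye(n)   # shifted towards the identity to keep it invertible
assert abs(np.linalg.det(R)) > 1e-9
Rinv = np.linalg.inv(R)

sandwich = lambda X: R @ X @ Rinv

# (R A R^-1)(R B R^-1) = R (A B) R^-1
assert np.allclose(sandwich(A) @ sandwich(B), sandwich(A @ B))
```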



                                    The composition of two transformations, with rotors $R$ and $S$, is represented by the geometric product $RS$:



$$a\mapsto R(SaS^{-1})R^{-1}=(RS)a(RS)^{-1}.$$





Here are some examples, using $\sigma_i=(e_i+\varepsilon^i)/\sqrt2,\;\tau_i=(e_i-\varepsilon^i)/\sqrt2$, and



$$a=\sum_ia^ie_i=a^1\frac{\sigma_1+\tau_1}{\sqrt2}+a^2\frac{\sigma_2+\tau_2}{\sqrt2}+\cdots+a^n\frac{\sigma_n+\tau_n}{\sqrt2}.$$



                                    Reflection along $e_1$:



$$R=\tau_1\sigma_1=e_1\wedge\varepsilon^1$$



$$RaR^{-1}=a^1\frac{-\sigma_1-\tau_1}{\sqrt2}+a^2\frac{\sigma_2+\tau_2}{\sqrt2}+\cdots+a^n\frac{\sigma_n+\tau_n}{\sqrt2}$$

$$=-a^1e_1+a^2e_2+\cdots+a^ne_n$$



Stretching by factor $\exp\theta$ along $e_1$:



$$R=\exp\Big(\frac\theta2\tau_1\sigma_1\Big)=\cosh\frac\theta2+\tau_1\sigma_1\sinh\frac\theta2$$

$$=\Big(\sigma_1\cosh\frac\theta2+\tau_1\sinh\frac\theta2\Big)\sigma_1$$

$$RaR^{-1}=a^1\frac{(\sigma_1\cosh\theta+\tau_1\sinh\theta)+(\tau_1\cosh\theta+\sigma_1\sinh\theta)}{\sqrt2}+a^2\frac{\sigma_2+\tau_2}{\sqrt2}+\cdots+a^n\frac{\sigma_n+\tau_n}{\sqrt2}$$

$$=a^1e_1\exp\theta+a^2e_2+\cdots+a^ne_n$$



Circular rotation by $\theta$ from $e_1$ towards $e_2$ (note that $\sigma_2\sigma_1$ commutes with $\tau_2\tau_1$, and both square to $-1$, so Euler's formula applies):



$$R=\exp\Big(\frac\theta2(\sigma_2\sigma_1-\tau_2\tau_1)\Big)=\exp\Big(\frac\theta2\sigma_2\sigma_1\Big)\exp\Big(-\frac\theta2\tau_2\tau_1\Big)$$

$$=\Big(\sigma_1\cos\frac\theta2+\sigma_2\sin\frac\theta2\Big)\sigma_1\Big(-\tau_1\cos\frac\theta2-\tau_2\sin\frac\theta2\Big)\tau_1$$

$$RaR^{-1}=a^1\frac{(\sigma_1\cos\theta+\sigma_2\sin\theta)+(\tau_1\cos\theta+\tau_2\sin\theta)}{\sqrt2}+a^2\frac{(-\sigma_1\sin\theta+\sigma_2\cos\theta)+(-\tau_1\sin\theta+\tau_2\cos\theta)}{\sqrt2}+a^3\frac{\sigma_3+\tau_3}{\sqrt2}+\cdots+a^n\frac{\sigma_n+\tau_n}{\sqrt2}$$

$$=a^1(e_1\cos\theta+e_2\sin\theta)+a^2(-e_1\sin\theta+e_2\cos\theta)+a^3e_3+\cdots+a^ne_n$$



Hyperbolic rotation by $\theta$ from $e_1$ towards $e_2$:



$$R=\exp\Big(\frac\theta2(\tau_2\sigma_1-\sigma_2\tau_1)\Big)=\exp\Big(\frac\theta2\tau_2\sigma_1\Big)\exp\Big(-\frac\theta2\sigma_2\tau_1\Big)$$

$$=\Big(\sigma_1\cosh\frac\theta2+\tau_2\sinh\frac\theta2\Big)\sigma_1\Big(-\tau_1\cosh\frac\theta2-\sigma_2\sinh\frac\theta2\Big)\tau_1$$

$$RaR^{-1}=a^1\frac{(\sigma_1\cosh\theta+\tau_2\sinh\theta)+(\tau_1\cosh\theta+\sigma_2\sinh\theta)}{\sqrt2}+a^2\frac{(\tau_1\sinh\theta+\sigma_2\cosh\theta)+(\sigma_1\sinh\theta+\tau_2\cosh\theta)}{\sqrt2}+a^3\frac{\sigma_3+\tau_3}{\sqrt2}+\cdots+a^n\frac{\sigma_n+\tau_n}{\sqrt2}$$

$$=a^1(e_1\cosh\theta+e_2\sinh\theta)+a^2(e_1\sinh\theta+e_2\cosh\theta)+a^3e_3+\cdots+a^ne_n$$



Shear by $\theta$ from $e_1$ towards $e_2$:



$$R=\exp\Big(\frac\theta2e_2\wedge\varepsilon^1\Big)=1+\frac\theta2e_2\wedge\varepsilon^1$$

$$=-\frac14\Big(e_1-\varepsilon^1+\frac\theta4e_2\Big)\Big(e_1-\varepsilon^1-\frac\theta4e_2\Big)\Big(e_1+\varepsilon^1+\frac\theta4e_2\Big)\Big(e_1+\varepsilon^1-\frac\theta4e_2\Big)$$

$$RaR^{-1}=a^1(e_1+\theta e_2)+a^2e_2+a^3e_3+\cdots+a^ne_n$$
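Transcribing the $RaR^{-1}$ results above into ordinary matrices acting on the $e_1,e_2$ plane gives a quick numpy check of the determinant signs and of the claim that composing transformations multiplies the representatives (mirroring $R,S\mapsto RS$ for the rotors). This is a matrix-level sketch, not a rotor computation, and the names are mine:

```python
import numpy as np

theta, phi = 0.7, -0.3

# 2x2 matrices for the e1-e2 plane, transcribed from the RaR^{-1} results above
reflection = np.array([[-1.0, 0.0], [0.0, 1.0]])                # a^1 -> -a^1
stretch    = np.array([[np.exp(theta), 0.0], [0.0, 1.0]])       # a^1 -> a^1 exp(theta)
rotation   = lambda t: np.array([[np.cos(t), -np.sin(t)],
                                 [np.sin(t),  np.cos(t)]])      # circular rotation
boost      = np.array([[np.cosh(theta), np.sinh(theta)],
                       [np.sinh(theta), np.cosh(theta)]])       # hyperbolic rotation
shear      = np.array([[1.0, 0.0], [theta, 1.0]])               # e1 picks up theta*e2

# Determinants: -1 for the reflection (odd case), exp(theta) for the stretch, +1 for the rest
assert np.isclose(np.linalg.det(reflection), -1)
assert np.isclose(np.linalg.det(stretch), np.exp(theta))
for T in (rotation(theta), boost, shear):
    assert np.isclose(np.linalg.det(T), 1)

# Composition of transformations <-> product of the matrix representatives
assert np.allclose(rotation(theta) @ rotation(phi), rotation(theta + phi))
```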





                                    This post is too long...



Some of this is described in Doran, Hestenes, Sommen, & Van Acker's "Lie Groups as Spin Groups": http://geocalc.clas.asu.edu/html/GeoAlg.html (beware that $E,e$ have different meanings from mine, though $K$ is the same).







edited Dec 9 '18 at 0:49 · answered Nov 21 '18 at 6:36 · mr_e_man


-1

Linear algebra with its vectors and matrices is made entirely obsolete by Clifford algebra, which provides a better way. Good riddance!

Gone is the awkward distinction between "row vectors" and "column vectors"; in Clifford there is no distinction. And many weird abstract concepts become concrete spatial concepts that are easy to visualize. The determinant becomes (in three dimensions) the volume spanned by three vectors, and that volume goes to zero as the vectors become parallel.

In Clifford algebra a matrix is just an array of vectors that span a space. The geometric product has two effects, rotation and scaling, so multiplying by those vectors tends to rotate and scale the geometric shape.

I find that the most useful and interesting aspect of Clifford algebra is trying to picture all algebraic relationships as spatial structures, or as operations by spatial structures on other structures.






answered Aug 16 '13 at 14:15 · slehar

                                        • 6




                                          "Linear algebra with its vectors and matrices is made entirely obsolete by Clifford algebra which provides a better way. Good riddance!" I find this overstated to the point of being somewhat irresponsible. Vectors and matrices are fundamental mathematical tools which are certainly not obsolete and not in danger of becoming so anytime soon. If e.g. you want to solve a linear system of equations -- a ubiquitous problem in pure and applied mathematics -- then you will want to use matrices and Gaussian reduction. How would you do this using Clifford algebras??
                                          – Pete L. Clark
                                          Aug 16 '13 at 15:27






                                        • 4




                                          Also most mathematicians would regard, e.g., the exterior algebra as a core part of the theory of linear algebra, so e.g. the name of the course in which one would learn your description of determinants is "linear algebra".
                                          – Pete L. Clark
                                          Aug 16 '13 at 15:29






                                        • 2




                                          This is the sort of fanatical praise that seems to discolor whatever small reputation geometric algebra has attained. It is not likely to replace any part of linear algebra at all, but it is probably going to give rise to some interesting new explanations and illustrations for students.
                                          – rschwieb
                                          Aug 16 '13 at 17:22

















