Non-integral powers of a matrix












Question



Given a square complex matrix $A$, what ways are there to define and compute $A^p$ for non-integral scalar exponents $p\in\mathbb R$, and for what matrices do they work?



My thoughts



Integral exponents



Defining $A^k$ for $k\in\mathbb N$ is easy in terms of repeated multiplication, and works for every matrix. This includes $A^0=I$. Using $A^{-1}$ as the inverse, $A^{-k}=\left(A^{-1}\right)^k$ is easy to define, but requires the matrix to be invertible. So much for integral exponents.
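
This integral case is exactly what numpy's `np.linalg.matrix_power` implements, so a quick sketch may be useful (it raises `LinAlgError` for negative exponents when the matrix is singular, matching the restriction above):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])            # invertible: det(A) = -2

print(np.linalg.matrix_power(A, 3))   # repeated multiplication A @ A @ A
print(np.linalg.matrix_power(A, 0))   # the identity I
print(np.linalg.matrix_power(A, -2))  # (A^{-1})^2 -- needs invertibility
```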



Rational definition



I guess for a rational exponent, one could define



$$A^{\frac pq}=B\quad:\Leftrightarrow\quad A^p=B^q$$



This will allow for more than one solution, and I'm not sure whether the computations I'll describe below find all solutions satisfying the above equation. So I'm not sure whether that's a reasonable definition. For non-rational exponents, a limit along a convergent sequence of rational exponents might work.



Diagonalizable computation



If $A$ is diagonalizable, then one has $A=W\,D\,W^{-1}$ for some diagonal matrix $D$. One can simply raise all the diagonal elements to the $p$-th power, obtaining a matrix which will satisfy the above equation. For each diagonal element, I'd define $\lambda^p=e^{p\ln\lambda}$, and since $\ln\lambda$ is only defined up to $2\pi i\mathbb Z$, this allows for multiple possible solutions. If one requires $-\pi<\operatorname{Im}(\ln\lambda)\le\pi$, then the solution should be well defined, and I guess this definition even has a name, although I don't know it.
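
A minimal numpy sketch of this diagonalization approach (the helper name `matrix_power_diag` is my own invention; numpy's complex `log` takes the principal branch, which matches the choice above):

```python
import numpy as np

def matrix_power_diag(A, p):
    """A^p = W D^p W^{-1}, assuming A is diagonalizable.

    np.log on complex input takes the principal branch
    -pi < Im(log z) <= pi, matching the choice above.
    """
    evals, W = np.linalg.eig(A)
    Dp = np.diag(np.exp(p * np.log(evals.astype(complex))))
    return W @ Dp @ np.linalg.inv(W)

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])           # rotation by -90 deg, eigenvalues +/- i
B = matrix_power_diag(A, 0.5)         # principal square root: rotation by -45 deg
print(np.allclose(B @ B, A))          # True
```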



Non-diagonalizable computation



If $A$ is not diagonalizable, then there is still a Jordan normal form, so instead of raising diagonal elements to a fractional power, one could attempt to do the same with Jordan blocks. Unless I made a mistake, this appears to be possible. At least for my example of a $3\times3$ Jordan block, I was able to obtain a $k$-th root.



$$
\begin{pmatrix}
\lambda^{\frac1k} & \tfrac1k\lambda^{\frac1k-1} & \tfrac{1-k}{2k^2}\lambda^{\frac1k-2} \\
0 & \lambda^{\frac1k} & \tfrac1k\lambda^{\frac1k-1} \\
0 & 0 & \lambda^{\frac1k}
\end{pmatrix}^k
=
\begin{pmatrix}
\lambda & 1 & 0 \\
0 & \lambda & 1 \\
0 & 0 & \lambda
\end{pmatrix}
$$



If the eigenvalue $\lambda$ of this block is zero, then the root as computed above would be the zero matrix, which doesn't reproduce the Jordan block. But otherwise it should work.
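
A quick numerical check of the formula above, with arbitrary sample values $\lambda=2$ and $k=5$ (my own verification, not a general proof):

```python
import numpy as np

lam, k = 2.0, 5   # sample nonzero eigenvalue and root degree

r = lam ** (1 / k)                      # lambda^{1/k}
R = np.array([[r, r / (k * lam), (1 - k) / (2 * k**2) * r / lam**2],
              [0, r,             r / (k * lam)],
              [0, 0,             r]])   # the candidate k-th root above

J = np.array([[lam, 1, 0],
              [0, lam, 1],
              [0, 0, lam]])             # the 3x3 Jordan block

print(np.allclose(np.linalg.matrix_power(R, k), J))   # True
```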



Conclusion



Edited since this question was first asked.



So it seems that every invertible matrix can be raised to every rational power, as long as uniqueness is not a strong requirement. A non-invertible matrix apparently can be raised to non-negative powers as long as all Jordan blocks for eigenvalue zero have size one.



Is this true? If not, where is my mistake? If it is, is there a good reference for this?










linear-algebra complex-numbers eigenvalues-eigenvectors exponentiation jordan-normal-form

asked Dec 22 '13 at 14:40 by MvG, edited Dec 23 '13 at 6:20
  • As with real numbers, you can define the power via the logarithm. Check the wiki page for the matrix logarithm: en.wikipedia.org/wiki/Logarithm_of_a_matrix
    – tom, Dec 22 '13 at 15:09












  • @tom: That link states that the logarithm exists iff the matrix is invertible. Now suppose a diagonalizable matrix has one eigenvalue being zero. Then my diagonalization approach should still work, since $0^p=0$ for all $p\neq 0$. So it seems that definition doesn't exactly match my thoughts, right? Nevertheless, it looks like a very reasonable definition, and therefore a good answer to my original question. I would like to see this posted as a full answer.
    – MvG, Dec 22 '13 at 15:23










  • Yes, this is similar to the real number case. You don't define $\log 0$, but you define $0^a=0$ for $a\neq 0$.
    – tom, Dec 22 '13 at 15:35










  • Since already $x^y$ is not uniquely defined when $y$ is non-integer and $x$ is not a non-negative real (and forcing an extension to these cases necessarily makes the usual laws for exponentiation fail), there seems little to be gained (for $y$ non-integer) by allowing $x$ to be a matrix: the same definitional problems reappear. Sometimes they are worse; for instance the matrix square root may take infinitely many values. Note however that when $x$ is a positive real you can define $x^A$ without any problem, as $\exp(\ln(x)A)$.
    – Marc van Leeuwen, Dec 23 '13 at 7:16


















3 Answers






As @tom pointed out in a comment, the power of a matrix can be defined in terms of the matrix logarithm and the matrix exponential, using



$$A^p:=\exp\left(p\ln A\right)$$



Using the principal logarithm (this is the name for the branch choice described in the question), the above even yields unique results.



The matrix exponential is defined for every matrix, the matrix logarithm only for invertible matrices.
The case of singular matrices mentioned in the question is therefore not covered by this definition.
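
For reference, SciPy ships both building blocks, so a sketch of this definition is short (`scipy.linalg.logm` and `expm` are the actual routines; `scipy.linalg.fractional_matrix_power` packages the same idea):

```python
import numpy as np
from scipy.linalg import expm, logm, fractional_matrix_power

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])     # invertible, so logm is defined

p = 0.5
Ap = expm(p * logm(A))         # A^p := exp(p ln A)

print(np.allclose(Ap @ Ap, A))                         # True
print(np.allclose(Ap, fractional_matrix_power(A, p)))  # same matrix
```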






– MvG (community wiki, 2 revs), edited Apr 13 '17 at 12:20

You are correct that your proposed definition for rational exponents can run into issues of uniqueness. Consider just the problem of trying to find the square root of a matrix. If $I$ is the $2\times2$ identity, then any matrix of the form



$$\begin{pmatrix}
\pm1 & a \\
0 & \mp1
\end{pmatrix}$$



satisfies $A^2=I$. Now, there is a case where you can define a unique square root: if your matrix is positive definite, there is a unique positive definite square root [1].



For a more general discussion, see this.
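
A short numerical illustration of both points (a sketch; `scipy.linalg.sqrtm` computes the principal square root):

```python
import numpy as np
from scipy.linalg import sqrtm

I = np.eye(2)

# The one-parameter family above: a square root of I for every a.
for a in (0.0, 1.0, -3.7):
    A = np.array([[1.0, a],
                  [0.0, -1.0]])
    print(np.allclose(A @ A, I))   # True each time

# For a positive definite matrix the positive definite root is unique,
# and sqrtm returns it (the principal square root).
P = np.array([[2.0, 1.0],
              [1.0, 2.0]])         # symmetric positive definite
S = sqrtm(P)
print(np.allclose(S @ S, P))       # True
```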






– Philip Hoskins, answered Dec 22 '13 at 15:03

    • Since talking about positivity in the presence of complex numbers is problematic, so is positive definiteness for complex matrices. So while these considerations are valuable for real matrices, they don't exactly match the setup I've been asking about. The non-uniqueness observation of course still stands, so this is a valuable answer in any case.
      – MvG, Dec 23 '13 at 6:36










    • I'm not sure what you mean by positive definiteness being problematic for complex matrices, but I won't belabor the point since uniqueness isn't a concern for the moment.
      – Philip Hoskins, Dec 23 '13 at 7:41



















I came to the need/wish to extend $A^n$ to $A^z$ from a geometrical perspective (where $n$ is an integer and $z$ a real number), so I hope this can help, by providing a geometrical meaning for "real powers of a matrix".



(image: the geometrical setup)



The hyperboloid shown in the image comes from Pell's equation:



    $$x^2 - 2y^2 = 1$$



    The "vertical planes" shown there (only 3 shown, for $n=-1,0,1$) are identified by their $z$ coordinate and placed at $n$ (integer) values only.



That was the starting point: extending the integer values $n$ to all real values $z$ means considering a continuous set of "vertical planes". That will involve powers of matrices, as $n$ shows up as a matrix exponent in the following.



I assume each $z=n$ vertical plane "hosts" the coordinate system $(x_n, y_n)$, as if these coordinate systems "lived" on those planes.



    With this setup, the matrix



$$\begin{pmatrix}
3 & 4 \\
2 & 3
\end{pmatrix}$$



    is a representation for this coordinate transform:



$$(x_n, y_n) = \begin{pmatrix}
3 & 4 \\
2 & 3
\end{pmatrix}^n (x_0, y_0)$$



    which also includes "jumping" from plane $z=0$ to plane $z=n$.



BTW: Pell's equation above is invariant under this coordinate transform.



Notably, the same matrix was used by the ancient Greeks "to generate the next integer solution" to Pell's equation, since $(x, y)_{(0)}=(1,0)$ was known to be "a trivial solution":



$$(x, y)_{(n+1)} = \begin{pmatrix}
3 & 4 \\
2 & 3
\end{pmatrix} (x, y)_{(n)}$$



    or



$$(x, y)_{(n)} = \begin{pmatrix}
3 & 4 \\
2 & 3
\end{pmatrix}^n (x, y)_{(0)} = \begin{pmatrix}
3 & 4 \\
2 & 3
\end{pmatrix}^n (1, 0)$$



    where



$$\begin{pmatrix}
3 & 4 \\
2 & 3
\end{pmatrix}^n$$



    means "apply transform $n$ times" and



$$\begin{pmatrix}
3 & 4 \\
2 & 3
\end{pmatrix}^{-1}$$



    means "apply inverse transform once".



    But, considering that matrix as a coordinate transform, we see point $(1,0)$ on plane $n=0$ "jumping FWD" to plane $n=1$ and "rotating CW" to point $(3,2)$, when applying the "forward transform", while we see that same point $(1,0)$ "jumping BWD" to plane $n=-1$ and "rotating CCW" to point $(3,-2)$, when applying the "inverse transform".



Due to the recursive nature of the "next solution generator":



$$(x, y)_{(n+1)} = \begin{pmatrix}
3 & 4 \\
2 & 3
\end{pmatrix} (x, y)_{(n)}$$



that kind of "jump and rotate" applies to all "integer solutions" to Pell's equation: indeed, as a coordinate transform, all the points in the plane "rotate to" other points.



That would also suggest that "all points on the hyperboloid 'rotate' in 3D":




    • $(1,0,0)$ gets transformed to $(3,2,1)$


    • $(1,0,0)$ gets inverse-transformed to $(3,-2,-1)$



    Having put the "vertical planes" in between "helps visualizing the motion": it's a discrete motion ($n$ steps by $1$, not by a $dz$): could it be "extended" so it's continuous?



    With the geometrical interpretation above, "extending" this "motion" to be "continuous" means "extending integer powers of matrices to real powers".
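
Numerically, that extension is available off the shelf; here is a sketch using SciPy's `fractional_matrix_power` (since the eigenvalues $3\pm2\sqrt2$ are positive, the real powers stay real, and `.real` below only strips numerical noise):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

M = np.array([[3.0, 4.0],
              [2.0, 3.0]])              # eigenvalues 3 +/- 2*sqrt(2), both > 0

p0 = np.array([1.0, 0.0])               # the trivial solution (1, 0)
for z in (0.0, 0.25, 0.5, 0.75, 1.0):   # a continuous sweep of "planes"
    x, y = (fractional_matrix_power(M, z) @ p0).real
    print(f"z={z:.2f}  (x, y)=({x:+.4f}, {y:+.4f})  x^2-2y^2={x*x - 2*y*y:+.4f}")
# x^2 - 2y^2 stays 1 along the entire path: the discrete "jump and rotate"
# becomes a continuous motion along the hyperbola.
```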



    Would you agree?



    Cheers,



    .k.






– ccampisano, answered Jan 9 at 11:19