Inverting a power series matrix

Let's assume I want to invert a matrix function $M = M(x)$, which is expressed as a power series in the small parameter $\epsilon$:



$$M = M_0(x) + M_1(x)\epsilon + M_2(x)\epsilon^2 + \mathcal{O}(\epsilon^3)$$



I would like to invert this and obtain $M^{-1}(x)$ as a series expansion itself.



Option 1 - The temptation is to invert a truncated version of $M$, but this could introduce distortions, because the neglected $\mathcal{O}(\epsilon^3)$ terms may still play a role.



Option 2 - I could also truncate $M$ and add to it a placeholder matrix $O$ to symbolize all the $\mathcal{O}(\epsilon^3)$ terms. The inverse would then be expressed in terms of $x$ and $O$; I could then substitute $O \rightarrow \epsilon^3$ and expand in $\epsilon$ again.



Neither of these strikes me as a good method. What would be the best way to proceed?
Tags: matrices power-series inverse perturbation-theory
edited Jan 11 at 14:46 by usumdelphini
asked Jan 11 at 12:43 by usumdelphini
          1 Answer

          Well, $A = \sum_{i=0}^\infty a_i\varepsilon^i$ is invertible as a formal power series in $\varepsilon$ as long as $a_0$ is invertible. In that case, if the inverse is $B = \sum_{i=0}^\infty b_i\varepsilon^i$, then by writing $\sum_{i=0}^\infty c_i\varepsilon^i = AB$, solving for the coefficients $c_i$, and setting $c_i = \delta_{i,0}$, you get the coefficients. The first few terms of the inverse are:



          $$b_0 = a_0^{-1}$$
          $$b_1 = -a_0^{-1}a_1a_0^{-1}$$
          $$b_2 = a_0^{-1}a_1a_0^{-1}a_1a_0^{-1} -a_0^{-1}a_2a_0^{-1}$$
          $$b_3 = - a_0^{-1}a_1a_0^{-1}a_1a_0^{-1}a_1a_0^{-1} + a_0^{-1}a_1a_0^{-1}a_2a_0^{-1} + a_0^{-1}a_2a_0^{-1}a_1a_0^{-1} - a_0^{-1}a_3a_0^{-1}$$



          and I imagine you see the pattern by now (the formula is a lot easier to read if $a_0 = 1$).
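As a quick numerical sanity check of these closed-form coefficients (my own sketch, not part of the original answer; the matrix size, seed, and value of $\varepsilon$ are arbitrary choices), one can compare the truncated inverse series against a direct matrix inverse:

```python
import numpy as np

rng = np.random.default_rng(1)
a0, a1, a2 = (rng.standard_normal((2, 2)) for _ in range(3))
a0 += 3.0 * np.eye(2)  # shift the constant term so a0 is safely invertible
inv = np.linalg.inv

# Coefficients b0, b1, b2 from the closed-form expressions above
b0 = inv(a0)
b1 = -b0 @ a1 @ b0
b2 = b0 @ a1 @ b0 @ a1 @ b0 - b0 @ a2 @ b0

eps = 1e-3
A = a0 + a1 * eps + a2 * eps**2
B_trunc = b0 + b1 * eps + b2 * eps**2

# The truncation error of the inverse should be O(eps^3)
err = np.max(np.abs(inv(A) - B_trunc))
assert err < 1e-6
```

Since the first neglected term is $b_3\varepsilon^3$, the residual shrinks like $\varepsilon^3$ as $\varepsilon$ is reduced.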



          However, this won't work for your function as stated because it appears the constant term is zero. Did you mean to write:



          $$M(x+\varepsilon) = M_0(x) + M_1(x)\varepsilon + M_2(x)\varepsilon^2 + O(\varepsilon^3)$$



          or do you really assume $M_0(x) = 0$?



          If $M_0 = 0$ then $M(x+\varepsilon)$ is not invertible at $\varepsilon = 0$, but you can find an inverse in the punctured neighborhood around $x$, i.e. with a Laurent series. If you assume $M_1$ is invertible, then you can do the same thing as above to get an inverse for $\varepsilon^{-1}M(x+\varepsilon)$, which gives:
          $$M_1(x)^{-1}\varepsilon^{-1} - M_1(x)^{-1}M_2(x)M_1(x)^{-1} + (\cdots)\varepsilon + (\cdots)\varepsilon^2 + O(\varepsilon^3)$$
          where the linear and quadratic terms are as in $b_2, b_3$ above.
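The singular case can be sanity-checked numerically as well; this sketch (again my own illustration, with arbitrary sizes and seed) keeps just the two displayed Laurent terms:

```python
import numpy as np

rng = np.random.default_rng(2)
m1, m2, m3 = (rng.standard_normal((2, 2)) for _ in range(3))
m1 += 3.0 * np.eye(2)  # assume M_1 is invertible; the constant term M_0 is zero
inv = np.linalg.inv

eps = 1e-4
M = m1 * eps + m2 * eps**2 + m3 * eps**3  # M_0 = 0, so M is singular at eps = 0

# First two terms of the Laurent series for M^{-1}
approx = inv(m1) / eps - inv(m1) @ m2 @ inv(m1)

# The remaining error is O(eps), even though M^{-1} itself blows up like 1/eps
err = np.max(np.abs(inv(M) - approx))
assert err < 1e-2
```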



          ADDED:



          There is no proof for the coefficients written up anywhere that I know of, but here is how you get them.



          \begin{align}
          1 &= (a_0+a_1\varepsilon + a_2\varepsilon^2 + \cdots)(b_0+b_1\varepsilon + b_2\varepsilon^2 + \cdots)\\
          &= a_0b_0 + (a_1b_0 + a_0b_1)\varepsilon + (a_2b_0 + a_1b_1 + a_0b_2)\varepsilon^2 + (a_3b_0 + a_2b_1 + a_1b_2 + a_0b_3)\varepsilon^3 + \cdots
          \end{align}



          Now:




          • Set $a_0b_0 = 1$ to get $b_0 = a_0^{-1}$.


          • All the higher terms have to be zero, so set $a_1b_0 + a_0b_1 = 0$ and substitute the value of $b_0$ to get $a_1a_0^{-1} + a_0b_1 = 0$. Now solve for $b_1$.


          • Plug $b_0,b_1$ into the next equation $a_2b_0 + a_1b_1 + a_0b_2 = 0$ and solve.


          • Keep going with this as long as you want.
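The bullet-point recipe above is easy to automate. Here is a sketch (my own, with arbitrary order and matrix size) that solves for the $b_n$ order by order from $a_0 b_n = -(a_1 b_{n-1} + \cdots + a_n b_0)$:

```python
import numpy as np

def invert_series(a):
    """Given coefficients a[0..N] of A = sum_n a[n] eps^n with a[0]
    invertible, return coefficients b[0..N] of the inverse series."""
    a0_inv = np.linalg.inv(a[0])
    b = [a0_inv]
    for n in range(1, len(a)):
        # Solve a[0] b[n] = -(a[1] b[n-1] + ... + a[n] b[0])
        s = sum(a[i] @ b[n - i] for i in range(1, n + 1))
        b.append(-a0_inv @ s)
    return b

rng = np.random.default_rng(0)
a = [rng.standard_normal((3, 3)) for _ in range(4)]
a[0] += 4.0 * np.eye(3)  # make a[0] comfortably invertible
b = invert_series(a)

# Check: the coefficients of A*B come out as I, 0, 0, 0 up to order 3
for n in range(4):
    c = sum(a[i] @ b[n - i] for i in range(n + 1))
    assert np.allclose(c, np.eye(3) if n == 0 else np.zeros((3, 3)))
```

Each order costs one multiplication by the precomputed $a_0^{-1}$, so going to high order is cheap once $a_0^{-1}$ is known.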



          Or you can prove that the formula looks a certain way:



          $$b_n = \sum_{\substack{n = i_1+\cdots+i_r \\ 1\leq i_1,\ldots,i_r}} (-1)^r a_0^{-1}a_{i_1}a_0^{-1}a_{i_2}a_0^{-1}\cdots a_0^{-1}a_{i_r}a_0^{-1}$$



          You have to see that this solution satisfies $\sum_{i=0}^n a_ib_{n-i} = 0$ for $n \geq 1$. It seems more or less clear, but I guess it would go something like this:



          \begin{align}
          a_0b_n &= a_0\bigg(\sum_{\substack{n = i_1+\cdots+i_r \\ 1\leq i_1,\ldots,i_r}} (-1)^r a_0^{-1}a_{i_1}a_0^{-1}a_{i_2}a_0^{-1}\cdots a_0^{-1}a_{i_r}a_0^{-1}\bigg)\\
          &= \sum_{1\leq i_1 \leq n} a_{i_1}\bigg(\sum_{\substack{n - i_1 = i_2+\cdots+i_r \\ 1\leq i_2,\ldots,i_r}} (-1)^r a_0^{-1}a_{i_2}a_0^{-1}a_{i_3}a_0^{-1}\cdots a_0^{-1}a_{i_r}a_0^{-1}\bigg)\\
          &= \sum_{j=1}^n a_{j}\bigg(\sum_{\substack{n - j = j_1+\cdots+j_k \\ 1\leq j_1,\ldots,j_k}} (-1)^{k-1}a_0^{-1}a_{j_1}a_0^{-1}a_{j_2}a_0^{-1}\cdots a_0^{-1}a_{j_k}a_0^{-1}\bigg)\\
          &= \sum_{j=1}^n(-1)a_{j}\bigg(\sum_{\substack{n - j = j_1+\cdots+j_k \\ 1\leq j_1,\ldots,j_k}} (-1)^{k}a_0^{-1}a_{j_1}a_0^{-1}a_{j_2}a_0^{-1}\cdots a_0^{-1}a_{j_k}a_0^{-1}\bigg)\\
          &= -\sum_{j=1}^n a_jb_{n-j}
          \end{align}



          Done. Moving to the second line, I factored out $a_0^{-1}a_{i_1}$; to the next line, I reindexed all the sums; to the next, I factored out $(-1)$; and in the last line, I substituted the $b$ coefficients.
          • Could you please add a link to a proof for the coefficients' inverse formula? – usumdelphini, Jan 11 at 14:48










          • @usumdelphini Sure, added. – Ben, Jan 11 at 15:49










          • Thanks, can you please have a look at this as well? math.stackexchange.com/questions/3075846/… – usumdelphini, Jan 16 at 15:15












          • @usumdelphini Since the collection of doubly infinite series no longer forms a ring (you can't multiply), this is not a formal algebraic problem in the same way. There may not be an inverse anymore; you actually have to think about convergence conditions on sums of coefficients, and this makes the problem much harder. I don't have anything to add there. – Ben, Jan 16 at 15:36
          edited Jan 11 at 16:07 by darij grinberg
          answered Jan 11 at 14:35 by Ben
          – Ben
          Jan 16 at 15:36


















          draft saved

          draft discarded




















































          Thanks for contributing an answer to Mathematics Stack Exchange!


          • Please be sure to answer the question. Provide details and share your research!

          But avoid



          • Asking for help, clarification, or responding to other answers.

          • Making statements based on opinion; back them up with references or personal experience.


          Use MathJax to format equations. MathJax reference.


          To learn more, see our tips on writing great answers.




          draft saved


          draft discarded














          StackExchange.ready(
          function () {
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3069794%2finverting-a-power-series-matrix%23new-answer', 'question_page');
          }
          );

          Post as a guest















          Required, but never shown





















































          Required, but never shown














          Required, but never shown












          Required, but never shown







          Required, but never shown

































          Required, but never shown














          Required, but never shown












          Required, but never shown







          Required, but never shown







          Popular posts from this blog

          'app-layout' is not a known element: how to share Component with different Modules

          android studio warns about leanback feature tag usage required on manifest while using Unity exported app?

          WPF add header to Image with URL pettitions [duplicate]