Efficient algorithm for solving the equation $\sum_n a_n/(x - x_n) = 1$














Here the $x_n$ are real and the $a_n$ are positive, and the summation is finite.

The picture is clear: on each interval $(x_n, x_{n+1})$ the left-hand side decreases from $+\infty$ to $-\infty$, so it crosses the level $1$ exactly once, and there is one more root to the right of the largest $x_n$.

But what numerical algorithm is stable and efficient? Suppose the $x_n$ are ordered increasingly; then between $x_n$ and $x_{n+1}$ there must be a root. One can use the bisection method, but it is slow. Or one can try Newton's method, but it is not necessarily stable.










numerical-methods
















asked Jan 18 at 12:15 – S. Kohn
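A minimal sketch of the standard remedy, assuming the $x_n$ are distinct: a bracketed, superlinearly convergent method such as Brent's keeps bisection's guarantee while converging nearly as fast as Newton. This uses SciPy's `brentq`; the helper name and the pole offsets are illustrative choices, not canonical ones.

```python
import numpy as np
from scipy.optimize import brentq

def all_roots(a, x, eps=1e-12):
    """All N roots of f(t) = sum_n a_n/(t - x_n) - 1 = 0 (a_n > 0).

    f decreases from +inf to -inf on each (x_n, x_{n+1}), giving one
    root per gap; the last root lies in (x_N, x_N + sum a_n) since
    f(x_N + sum a_n) <= 0.  Brent's method is safe once bracketed.
    """
    a = np.asarray(a, dtype=float)
    x = np.asarray(x, dtype=float)
    order = np.argsort(x)                  # keep (a_n, x_n) pairs together
    a, x = a[order], x[order]
    f = lambda t: np.sum(a / (t - x)) - 1.0

    roots = []
    for lo, hi in zip(x[:-1], x[1:]):      # one interior root per gap
        h = eps * max(hi - lo, 1.0)        # step slightly off the poles
        roots.append(brentq(f, lo + h, hi - h))
    s = a.sum()                            # bracket for the rightmost root
    roots.append(brentq(f, x[-1] + eps * max(s, 1.0),
                        x[-1] + s * (1.0 + 1e-9)))
    return np.array(roots)
```

Brent's method needs only $f$, not $f'$, and can never leave its bracket, which is exactly the stability that bisection offers, at superlinear speed.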












  • Claude Leibovici is one of the greatest experts on this, let us try to summon him: @Claude Leibovici – Jack D'Aurizio, Jan 18 at 12:36










  • It is now dinner time. Be sure I shall write tomorrow morning. Cheers :-) – Claude Leibovici, Jan 18 at 17:23










  • A trick I suggested once is to multiply $-1+\sum\frac{a_n}{x-x_n}$, when locating the root between $x_k$ and $x_{k+1}$, by $(x-x_k)$ or $(x-x_k)(x-x_{k+1})$, in order to ensure convexity/concavity (hence stability of a suitably arranged Newton's method) in a neighbourhood of $\frac{x_k+x_{k+1}}{2}$, or $x_k^+$, or $x_{k+1}^-$. – Jack D'Aurizio, Jan 18 at 18:02












  • @ClaudeLeibovici Before answering... Just tell us what you’ve had for dinner Claude??? Friday dinner in Pau should be good! – mathcounterexamples.net, Jan 18 at 22:39






  • @JackD'Aurizio. Answer posted as promised. – Claude Leibovici, Jan 19 at 3:51


















3 Answers


















This looks to me like the secular equation, which is used in the divide-and-conquer algorithm for the symmetric eigenvalue problem, so it is widely studied and efficient and stable implementations should be available. Here is an overview paper: "A numerical comparison of methods for solving secular equations".

Here are some slides from Golub on the secular equation.
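The link can be made concrete: expanding $\det\big(\operatorname{diag}(x)+zz^{T}-tI\big)$ shows that the roots of $\sum_n a_n/(t-x_n)=1$ are precisely the eigenvalues of the diagonal-plus-rank-one matrix $\operatorname{diag}(x_n)+zz^{T}$ with $z_n=\sqrt{a_n}$, i.e. the secular equation read backwards. A minimal sketch that exploits this as a stable, if $O(N^3)$, reference solver (the helper name is illustrative):

```python
import numpy as np

def roots_via_eigvalsh(a, x):
    """Roots of sum_n a_n/(t - x_n) = 1 (a_n > 0) as the eigenvalues of
    diag(x) + z z^T with z_n = sqrt(a_n): expanding the characteristic
    determinant gives prod(x_n - t) * (1 + sum a_n/(x_n - t)) = 0,
    which is the same equation.  O(N^3), but useful as a cross-check
    for a fast custom secular-equation solver.
    """
    a = np.asarray(a, dtype=float)
    x = np.asarray(x, dtype=float)
    z = np.sqrt(a)
    return np.linalg.eigvalsh(np.diag(x) + np.outer(z, z))
```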






answered Jan 19 at 4:33 – zimbra314





















I do not know the context of your problem, but we worked for years on the problem of solving the so-called Underwood equation, which appears in shortcut distillation problems. It is usually written
$$\sum_{i=1}^n \frac{\alpha_i\, z_i}{\alpha_i- \theta}=1-q$$
where the $\alpha_i > 0$ and $z_i > 0$, $n$ can be very large (potentially up to thousands), and $q$ is given.

In chemical process design, this equation has to be solved zillions of times (optimization problems with hundreds of variables). Because of that, we needed very fast, stable and robust solutions.

Our most recent work was published in $2014$ in this paper (you can also find it here), where we proposed rapid and robust solution methods using convex transformations. Besides, and this is a key point, for any root we proposed simple and efficient starting guesses which typically mean that very few iterations are required (this is illustrated in the first figure, showing that the starting guess is almost the solution).

I consider the paper sufficiently clear and detailed (with several examples) to help you. In case you have any problem, feel free to contact me (my e-mail address is in my profile).

Edit

If you want something simpler, consider for example (easy to generalize)
$$f(x)=\sum_{i=1}^6 \frac{a_i}{x- b_i}-1.$$ For the root between $b_1$ and $b_2$, consider instead
$$g_{(1,2)}(x)=(x-b_1)(x-b_2)\, f(x)$$ which is
$$g_{(1,2)}(x)=a_1(x-b_2)+a_2 (x-b_1)-(x-b_1) (x-b_2)+(x-b_1) (x-b_2)\left(\frac{a_3}{x-b_3}+\frac{a_4}{x-b_4}+\frac{a_5}{x-b_5}+\frac{a_6}{x-b_6}\right)$$
and then
$$g_{(1,2)}(b_1)=a_1 (b_1-b_2)\qquad \text{and} \qquad g_{(1,2)}(b_2)=-a_2 (b_1-b_2),$$ and now use, for example, subroutine rtsafe from "Numerical Recipes", which utilizes a combination of bisection and Newton-Raphson steps (this is required since, between the bounds $b_1$ and $b_2$, the function $g_{(1,2)}(x)$ goes through an extremum); the code is here. This works quite well without any convergence problem (but it is much less efficient than what was proposed in our paper). As you can see, the simple idea is just to remove the asymptotes (this is the so-called Leibovici & Neoschil method, which has been widely used for this class of problems during the last $26$ years).

You could even restrict the search to the interval $(b_1,x_*)$ or $(x_*,b_2)$, where $x_*=\frac{a_1 b_2+a_2 b_1}{a_1+a_2}$ is obtained by linear interpolation (you just need to check the sign of $g_{(1,2)}(x_*)$).
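A minimal Python sketch of this rtsafe-style safeguarded Newton applied to $g$, written for the root in a generic interval $(b_k, b_{k+1})$ with the $b_i$ sorted and the $a_i$ positive. It is a rendering of the idea above, not the paper's code, and the helper name is illustrative:

```python
import numpy as np

def root_between(a, b, k, tol=1e-14, maxit=100):
    """Root of f(t) = sum_i a_i/(t - b_i) - 1 in (b_k, b_{k+1}),
    with b sorted ascending and all a_i > 0.

    Multiply f by (t - b_k)(t - b_{k+1}) to remove the two bounding
    asymptotes, then apply a safeguarded Newton (Newton step with a
    bisection fallback, as in rtsafe) to
        g(t) = a_k (t - b_{k+1}) + a_{k+1} (t - b_k)
               + (t - b_k)(t - b_{k+1}) s(t),
    where s(t) = sum_{i != k, k+1} a_i/(t - b_i) - 1.  Note that
    g(b_k) < 0 < g(b_{k+1}) and g changes sign only at the root.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    mask = np.ones(b.size, dtype=bool)
    mask[[k, k + 1]] = False
    ao, bo = a[mask], b[mask]          # the terms whose poles stay

    def g_dg(t):
        p, q = t - b[k], t - b[k + 1]
        s = np.sum(ao / (t - bo)) - 1.0
        ds = -np.sum(ao / (t - bo) ** 2)
        g = a[k] * q + a[k + 1] * p + p * q * s
        dg = a[k] + a[k + 1] + (p + q) * s + p * q * ds
        return g, dg

    lo, hi = b[k], b[k + 1]
    t = 0.5 * (lo + hi)
    for _ in range(maxit):
        g, dg = g_dg(t)
        if g < 0.0:
            lo = t                     # root lies to the right of t
        else:
            hi = t                     # root lies to the left of t
        t_new = t - g / dg if dg != 0.0 else 0.5 * (lo + hi)
        if not (lo < t_new < hi):      # Newton escaped the bracket:
            t_new = 0.5 * (lo + hi)    # fall back to bisection
        if abs(t_new - t) <= tol * max(1.0, abs(t)):
            return t_new
        t = t_new
    return t
```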






edited Jan 19 at 5:40, answered Jan 19 at 3:49 – Claude Leibovici













    • (+1) as deserved. – Jack D'Aurizio, Jan 19 at 16:52










    • @JackD'Aurizio. If you have time to waste, would you accept to read the paper and be critical? Thanks. – Claude Leibovici, Jan 19 at 17:09










    • I will do it, sure. I was just wondering: has anyone ever proposed a hybrid between your method and methods for simultaneous root finding through companion matrices? – Jack D'Aurizio, Jan 19 at 17:17










    • Also: do you recall the first time we discussed this problem here on MSE? I am not managing to find the relevant thread. – Jack D'Aurizio, Jan 19 at 17:21










    • @JackD'Aurizio. You are totally wrong! Big difference: rhs = 0. It is math.stackexchange.com/questions/543020/… . For sure, I remember. I spoke at noon about this answer from you to my coworker and coauthor of the paper. Cheers! – Claude Leibovici, Jan 19 at 18:29



















Extended comment: This problem looks close to the central relation for the Durand-Kerner method. For a polynomial $p$ of degree $n$ and $n$ root approximations $z_1,\dots,z_n$, consider the partial fraction decomposition
$$\frac{p(x)}{\prod_{j=1}^n(x-z_j)}=1-\sum\frac{w_j}{x-z_j}.$$
Then, multiplying by $x-z_m$ and setting $x=z_m$, one finds
$$w_m=-\frac{p(z_m)}{\prod_{j\ne m}(z_m-z_j)},$$ and the next root approximations are $z_j'=z_j+w_j$.

V. Pan published several papers/tech reports on fast computation of this iteration, acceleration of convergence beyond the Durand-Kerner and Aberth-Ehrlich methods, multi-pole expansion, ... that make extensive use of the first equation.
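For concreteness, a minimal sketch of the plain Durand-Kerner iteration built on exactly that update (none of Pan's accelerations; the helper name and starting values are the usual textbook choices):

```python
import numpy as np

def durand_kerner(coeffs, iters=100, tol=1e-12):
    """All complex roots of a polynomial via the Durand-Kerner update
    z_m' = z_m + w_m,  w_m = -p(z_m) / prod_{j != m} (z_m - z_j),
    for monic p.  `coeffs` are highest-degree-first (numpy.polyval order).
    """
    coeffs = np.asarray(coeffs, dtype=complex)
    coeffs = coeffs / coeffs[0]            # normalize: p must be monic
    n = len(coeffs) - 1
    z = (0.4 + 0.9j) ** np.arange(n)       # classic non-symmetric start
    for _ in range(iters):
        w = np.empty(n, dtype=complex)
        for m in range(n):
            w[m] = -np.polyval(coeffs, z[m]) / np.prod(z[m] - np.delete(z, m))
        z = z + w
        if np.max(np.abs(w)) < tol:        # all corrections negligible
            break
    return z
```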






answered Jan 19 at 8:31 – LutzL












