What's the motivation of Runge-Kutta method?












I'm currently taking an ODE course and learning the Runge-Kutta method for solving ordinary differential equations, specifically the 4th-order Runge-Kutta method for initial value problems.



My instructor and the textbook gave me the formula but said nothing about the ideas behind the method. I wrote some code and found that the Runge-Kutta method really does perform better than the Euler method, but I can't understand why.



Would anyone be willing to show me how the Runge-Kutta formula is derived? Thanks!










ordinary-differential-equations






asked Nov 19 '17 at 10:41









Chongxu Ren

  • It 'does perform better than Euler method' because the RK4 method uses 4 stages to get an approximation, as opposed to the Euler method, which uses 1 stage. See here for a good description of the method. You can find the derivation in most numerical analysis books.
    – Mattos, Nov 19 '17 at 10:47

  • math.stackexchange.com/questions/528856/… This question is a duplicate; see that question.
    – Chongxu Ren, Nov 19 '17 at 10:59


















1 Answer

On the history



See Butcher: A History of the Runge-Kutta method



In summary, people (Nystroem, Runge, Heun, Kutta, ...) at the end of the 19th century experimented, with success, in generalizing the methods of numerical integration of functions in one variable, $$\int_a^b f(x)\,dx,$$ like the Gauss, trapezoidal, midpoint and Simpson methods, to the solution of differential equations, which have an integral form $$y(x)=y_0+\int_{x_0}^x f(s,y(s))\,ds.$$
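To make the quadrature connection concrete, here is a minimal sketch (my own illustration, not from Butcher's history): applying the midpoint quadrature rule to the integral form, with the unknown midpoint value $y(x+\tfrac12Δx)$ predicted by a half-size Euler step, yields the explicit midpoint method, the simplest 2nd-order Runge-Kutta method.

```python
def midpoint_step(f, x, y, dx):
    """One step of the explicit midpoint (RK2) method.

    Midpoint quadrature of the integral form gives
    y(x+dx) ≈ y + dx*f(x+dx/2, y(x+dx/2)); the unknown midpoint
    value is predicted by a half-size Euler step.
    """
    y_mid = y + 0.5 * dx * f(x, y)
    return y + dx * f(x + 0.5 * dx, y_mid)

# For y' = y one step reproduces the Taylor series 1 + h + h^2/2
y1 = midpoint_step(lambda x, y: y, 0.0, 1.0, 0.1)  # → 1.105
```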





Carl Runge in 1895 [1] came up with ("by some curious inductive process" – "auf einem eigentümlich induktiven Wege", as Heun wrote 5 years later) the 4-stage 3rd-order method
\begin{align}
k_1&=f(x,y)Δx,\\
k_2&=f(x+\tfrac12Δx,y+\tfrac12k_1)Δx\\
k_3&=f(x+Δx,y+k_1)Δx\\
k_4&=f(x+Δx,y+k_3)Δx\\
y_{+1}&=y+\tfrac16(k_1+4k_2+k_4)
\end{align}



[1] "Über die numerische Auflösung von Differentialgleichungen", Math. Ann. 46, p. 167-178





Inspired by this, Karl Heun in 1900 [2] explored methods of the type
$$
\left.\begin{aligned}k^i_m &= f(x+ε^i_m,y+ε^i_mk^{i+1}_m)Δx,~~ i=1,..,s,\\ k^{s+1}_m&=f(x,y)Δx\end{aligned}\right\},~~ m=1,..,n\\
y_{+1}=y+\sum_{m=1}^n\alpha_m f(x+ε^0_mΔx,y+ε^0_mk^1_m)Δx
$$

He computed the order conditions by Taylor expansion and constructed methods of this type up to order 4; however, the only methods among them recognizable today as Runge-Kutta methods are the order-2 Heun trapezoidal method and the order-3 Heun method.



[2] "Neue Methode zur approximativen Integration der Differentialgleichungen einer unabhängigen Veränderlichen", Z. f. Math. u. Phys. 45, p. 23-38





Wilhelm Kutta, in his publication one year later in 1901 [3], considered Heun's scheme wasteful in the number of function evaluations and introduced what is today known as explicit Runge-Kutta methods, where each new function evaluation can use all previous stage values in the $y$ update:
\begin{align}
k_1&=f(x,y)Δx,\\
k_m&=f(x+c_mΔx,\; y+a_{m,1}k_1+...+a_{m,m-1}k_{m-1})Δx,&& m=2,...,s\\[0.5em]
y_{+1}&=y+b_1k_1+...+b_sk_s
\end{align}

He computed order conditions and presented methods up to order $5$, in parametrizations and examples. He especially noted the 3/8 method for its symmetry and small error term, and the "classical" RK4 method for its simplicity in always using only the last function value in the $y$ updates.



[3] "Beitrag zur näherungsweisen Lösung totaler Differentialgleichungen", Z. f. Math. u. Phys. 46, p. 435-453
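Kutta's scheme translates directly into code. The sketch below (function and variable names are my own, not from the paper) performs one explicit Runge-Kutta step from a Butcher tableau $(c, A, b)$; plugging in the coefficients of the classical method recovers RK4.

```python
def explicit_rk_step(f, x, y, dx, c, A, b):
    """One explicit Runge-Kutta step: stage m uses only stages 1..m-1."""
    k = []
    for m in range(len(b)):
        # y-argument of stage m, built from the previous stages
        y_stage = y + sum(A[m][j] * k[j] for j in range(m))
        k.append(f(x + c[m] * dx, y_stage) * dx)
    return y + sum(b[m] * k[m] for m in range(len(b)))

# Classical RK4 tableau
c = [0, 0.5, 0.5, 1]
A = [[], [0.5], [0, 0.5], [0, 0, 1]]
b = [1/6, 1/3, 1/3, 1/6]

# y' = y, y(0) = 1, one step of size 0.1; exact value is e^0.1
y1 = explicit_rk_step(lambda x, y: y, 0.0, 1.0, 0.1, c, A, b)
```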





On the order dependence of the performance



The Euler method has global error order 1, which means that to reach an error level of $10^{-8}$ (on well-behaved example problems) you need a step size of about $h=10^{-8}$. Over the interval $[0,1]$ this requires $10^8$ steps with $10^8$ function evaluations.



The classical RK4 method has error order 4. To get an error level of $10^{-8}$ you will thus need a step size of $h=0.01$. Over the interval $[0,1]$ this requires $100$ steps with $400$ function evaluations.



If you decrease the step by a factor of $10$ to $h=0.001$, the RK4 method will need $1000$ steps with $4000$ function evaluations to get an error level of $10^{-12}$. This is still much less effort than used in the Euler example above with a much better result.
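The step-count arithmetic above is easy to check numerically. Here is a small sketch (my own illustration) integrating $y'=y$ on $[0,1]$ with fixed steps; reducing $h$ by a factor of 10 cuts the Euler error by about 10 and the RK4 error by about $10^4$.

```python
import math

def euler(f, y, h, n):
    # n Euler steps of size h starting at x = 0
    x = 0.0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

def rk4(f, y, h, n):
    # n classical RK4 steps of size h starting at x = 0
    x = 0.0
    for _ in range(n):
        k1 = h * f(x, y)
        k2 = h * f(x + h/2, y + k1/2)
        k3 = h * f(x + h/2, y + k2/2)
        k4 = h * f(x + h, y + k3)
        y += (k1 + 2*k2 + 2*k3 + k4) / 6
        x += h
    return y

f = lambda x, y: y  # y' = y, y(0) = 1, exact y(1) = e
for n in (10, 100):
    h = 1.0 / n
    print(f"h={h:g}  Euler err={abs(euler(f, 1.0, h, n) - math.e):.1e}"
          f"  RK4 err={abs(rk4(f, 1.0, h, n) - math.e):.1e}")
```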



Using double precision floating point numbers you will not get a much better result with any method using a fixed step size, as smaller step sizes lead to accumulating floating point noise that dominates the error of the method.



















        edited Jan 3 at 16:29

























        answered Nov 19 '17 at 10:52









LutzL
