Empirical error proof for a Runge-Kutta algorithm when the exact solution is unknown














I'm implementing an RK solver for calculating the solution to the Lorenz system:

\begin{equation}
\begin{cases}
x'(t) = \sigma(y-x) \\
y'(t) = rx-y-xz \\
z'(t) = xy-bz
\end{cases}
\end{equation}



The implemented RK method is of order 3 with some random Butcher tableau (but one that satisfies the conditions necessary for it to have a global error of order $\mathcal{O}(h^k)$ and a local truncation error of $\mathcal{O}(h^{k+1})$, where $k$ is the order of the method).
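For reference, the order conditions up to order 3 can be checked numerically for any tableau. A minimal sketch, using Heun's third-order tableau as a hypothetical stand-in for the random one (the arrays `A`, `b`, `c` below are illustrative, not the actual tableau used):

```python
import numpy as np

# Heun's third-order tableau as a stand-in example
A = np.array([[0,   0,   0],
              [1/3, 0,   0],
              [0,   2/3, 0]])
b = np.array([1/4, 0, 3/4])
c = A.sum(axis=1)  # usual row-sum convention c_i = sum_j a_ij

cond1 = np.isclose(b.sum(), 1)        # order 1: sum(b_i) = 1
cond2 = np.isclose(b @ c, 1/2)        # order 2: sum(b_i c_i) = 1/2
cond3 = np.isclose(b @ c**2, 1/3)     # order 3: sum(b_i c_i^2) = 1/3
cond4 = np.isclose(b @ (A @ c), 1/6)  # order 3: sum(b_i a_ij c_j) = 1/6
print(cond1, cond2, cond3, cond4)  # → True True True True
```

If all four conditions hold, the tableau is at least third order (for this stage count, exactly third order).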



One of the questions I'm being asked is to prove empirically that my implemented RK achieves these orders of error.



My first attempt was to estimate the local error ($\delta_L$) as:
$$\delta_L(t_n) = x_n(t_n)-x_n(t_{n-1})$$
where $x_n$ is the numerical solution of $x$ and $t_n$ is the $n$-th time step. Following this approach, I get a very oscillatory error, ranging from $-2.5$ to $2.5$ with mean $0.0012$, suggesting that the way I calculate the error must not be appropriate.
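Since the Lorenz system has no closed-form solution, one standard empirical check is step-halving: integrate with steps $h$, $h/2$, $h/4$ and estimate the order from the ratio of successive differences, $p \approx \log_2\left(\lVert y_h - y_{h/2}\rVert / \lVert y_{h/2} - y_{h/4}\rVert\right)$. A minimal sketch under the assumption of Heun's third-order tableau (the helpers `rk3_step`, `integrate`, and `lorenz` are illustrative, not the actual implementation):

```python
import numpy as np

def rk3_step(f, t, y, h):
    # one step of Heun's third-order method (a stand-in order-3 tableau)
    k1 = f(t, y)
    k2 = f(t + h/3, y + h/3*k1)
    k3 = f(t + 2*h/3, y + 2*h/3*k2)
    return y + h/4*(k1 + 3*k3)

def integrate(f, y0, t_end, h):
    t, y = 0.0, np.array(y0, dtype=float)
    for _ in range(int(round(t_end/h))):
        y = rk3_step(f, t, y, h)
        t += h
    return y

def lorenz(t, u, sigma=10.0, r=28.0, b=8/3):
    x, y, z = u
    return np.array([sigma*(y - x), r*x - y - x*z, x*y - b*z])

# Order estimate without an exact solution: a short horizon keeps the
# chaotic error growth from leaving the asymptotic regime.
h = 0.01
y1 = integrate(lorenz, [1.0, 1.0, 1.0], 1.0, h)
y2 = integrate(lorenz, [1.0, 1.0, 1.0], 1.0, h/2)
y4 = integrate(lorenz, [1.0, 1.0, 1.0], 1.0, h/4)
p = np.log2(np.linalg.norm(y1 - y2) / np.linalg.norm(y2 - y4))
print(p)  # expected to be close to 3 for an order-3 method
```

For an order-$p$ method the error behaves like $Ch^p$, so the successive differences shrink by roughly $2^p$ per halving.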






















  • What does $x_n(t_{n-1})$ mean?
    – caverac, Jan 1 at 10:55










  • @caverac The value of $x_n$ at the previous time step.
    – Jorge, Jan 1 at 10:59










  • So you mean $\delta_L(n) = x_n - x_{n-1}$? If this is the way you are calculating it, then you are basically getting $\sim \dot{x}(t_n)\,\delta t = \sigma(y_n - x_n)\,\delta t$.
    – caverac, Jan 1 at 11:00












  • That's why the solution oscillates. To test the accuracy of your integrator, you should use a system for which you know the solution and calculate the difference between what you expect and what you calculate.
    – caverac, Jan 1 at 11:06










  • For instance, you can use the harmonic oscillator $\dot x = y$, $\dot y = -x$ with solution $x(t) = \sin t$, $y(t) = \cos t$. // Why use a random method and not some nice one like the order-3 method found by Heun in 1900 and named by Kutta as Heun's method one year later?
    – LutzL, Jan 1 at 18:30
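LutzL's suggestion above can be sketched as follows; `heun3` here is an assumed order-3 integrator (Heun's third-order tableau), standing in for the random tableau in the question:

```python
import numpy as np

def heun3(f, t, y0):
    # Heun's third-order method on the grid t (assumed stand-in integrator)
    y = np.zeros((len(t), len(y0)))
    y[0] = y0
    for i in range(len(t) - 1):
        h = t[i+1] - t[i]
        k1 = f(t[i], y[i])
        k2 = f(t[i] + h/3, y[i] + h/3*k1)
        k3 = f(t[i] + 2*h/3, y[i] + 2*h/3*k2)
        y[i+1] = y[i] + h/4*(k1 + 3*k3)
    return y

def oscillator(t, u):
    # harmonic oscillator: exact solution x(t) = sin t for x(0)=0, y(0)=1
    x, y = u
    return np.array([y, -x])

errors = []
for h in [0.1, 0.05, 0.025]:
    t = np.arange(0, 10 + h/2, h)
    y = heun3(oscillator, t, [0.0, 1.0])
    errors.append(np.max(np.abs(y[:, 0] - np.sin(t))))

# each halving of h should shrink the error by roughly 2^3 = 8
print([e1/e2 for e1, e2 in zip(errors, errors[1:])])
```

The successive error ratios hovering near $8$ are the empirical signature of third-order convergence.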
















numerical-methods runge-kutta-methods






asked Jan 1 at 10:50









Jorge




















1 Answer
If I were to test the Heun 3rd order method
$$
\begin{array}{l|lll}
0 & \\
\frac13 & \frac13 \\
\frac23 & 0 & \frac23 \\
\hline
 & \frac14 & 0 & \frac34
\end{array}
$$



with the code



    import numpy as np

    def Heun3integrate(f, t, y0):
        dim = len(y0)
        y = np.zeros([len(t), dim])
        y[0,:] = y0
        k = np.zeros([3, dim])
        for i in range(len(t) - 1):
            h = t[i+1] - t[i]
            k[0,:] = f(t[i], y[i])
            k[1,:] = f(t[i] + h/3, y[i] + h/3*k[0])
            k[2,:] = f(t[i] + 2*h/3, y[i] + 2*h/3*k[1])
            y[i+1,:] = y[i] + h/4*(k[0] + 3*k[2])
        return y


I would use the idea of manufactured solutions to generate a test problem with known exact solution (here $y(t)=\sin t$), like
$$
y'' + \sin(y) = -\sin(t) + \sin(\sin(t)), \qquad y(0)=\sin(0)=0, \quad y'(0)=\cos(0)=1
$$



    def mms_ode(t, y): return np.array([ y[1], np.sin(np.sin(t)) - np.sin(t) - np.sin(y[0]) ])


to get a plot of the error in $y$. To compare a variety of step sizes, scale the error by $h^{-3}$ so that the plot in principle shows the leading error coefficient. If the individual graphs appear to converge to a single function, this confirms the third order. If the third-order assumption were wrong, one would expect wildly differing scales due to the wrong exponent of $h$.



    import matplotlib.pyplot as plt

    for h in [1.0, 0.5, 0.1, 0.05, 0.01][::-1]:
        t = np.arange(0, 20, h)
        y = Heun3integrate(mms_ode, t, [0, 1])
        plt.plot(t, (y[:,0] - np.sin(t))/h**3, 'o', ms=2, label="h=%.4f" % h)
    plt.grid(); plt.legend(); plt.show()


Et voilà!



[Plot: the $h^{-3}$-scaled errors $(y - \sin t)/h^3$ for the different step sizes, converging to a single curve]
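As a compact numerical companion to the plot, the observed order can also be read off as the slope of $\log(\text{error})$ versus $\log h$. A self-contained sketch that repeats the definitions above (the step function and helper names are illustrative):

```python
import numpy as np

def heun3_step(f, t, y, h):
    # one step of Heun's third-order method
    k1 = f(t, y)
    k2 = f(t + h/3, y + h/3*k1)
    k3 = f(t + 2*h/3, y + 2*h/3*k2)
    return y + h/4*(k1 + 3*k3)

def mms_ode(t, y):
    # manufactured problem with exact solution y(t) = sin(t)
    return np.array([y[1], np.sin(np.sin(t)) - np.sin(t) - np.sin(y[0])])

hs = [0.1, 0.05, 0.025, 0.0125]
errs = []
for h in hs:
    t, y = 0.0, np.array([0.0, 1.0])
    err = 0.0
    for _ in range(int(round(20/h))):
        y = heun3_step(mms_ode, t, y, h)
        t += h
        err = max(err, abs(y[0] - np.sin(t)))  # track max error over [0, 20]
    errs.append(err)

# slope of the log-log fit estimates the convergence order
slope = np.polyfit(np.log(hs), np.log(errs), 1)[0]
print(slope)  # expected to be near 3
```

Using the maximum error over the whole interval (rather than the error at a single final time) avoids accidental cancellation at times where the error happens to cross zero.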



















  • I really like this approach and I will probably implement it for this project! Thanks for elaborating your answer!
    – Jorge, Jan 2 at 0:49











answered Jan 1 at 22:50









LutzL











