Numerical solution of Hamilton-Jacobi-Bellman equation with no boundary conditions

I have an HJB equation that arises from a stochastic optimization problem.



$$u + a\,\partial_{x}V + b\,\partial_{y}V + \frac{\sigma^{2}}{2}\,\partial_{xx}V - \rho V = 0$$



Here $V(x,y)$ is the unknown function, $u$, $a$, $b$ are possibly functions of $x$, $y$ and $V$, and $\sigma$ and $\rho$ are constants. Moreover, $x$ can be any real number, while $y\in[0,1]$.



I am trying to implement an upwind scheme numerically, approximating the derivative $\partial_{x}V$ with forward differences when $a$ is positive and backward differences otherwise. Similarly, I use forward differences for $\partial_{y}V$ iff $b>0$. For the second derivative, I use central differences.
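For concreteness, here is a minimal sketch of the stencil I mean on a uniform grid (NumPy; the arrays `a` and `b` stand for the drift coefficients already evaluated at the grid points, `dx`, `dy` are the grid spacings, and the outer solver iteration is omitted):

```python
import numpy as np

def upwind_stencil(V, a, b, dx, dy):
    """Upwind first derivatives and central second derivative of V on a
    uniform (x, y) grid; only interior points are filled, the boundary
    rows/columns are exactly the open question."""
    Vx = np.zeros_like(V)
    Vy = np.zeros_like(V)
    Vxx = np.zeros_like(V)

    # one-sided differences at interior points (axis 0 = x, axis 1 = y)
    fwd_x = (V[2:, 1:-1] - V[1:-1, 1:-1]) / dx
    bwd_x = (V[1:-1, 1:-1] - V[:-2, 1:-1]) / dx
    fwd_y = (V[1:-1, 2:] - V[1:-1, 1:-1]) / dy
    bwd_y = (V[1:-1, 1:-1] - V[1:-1, :-2]) / dy

    # upwind switch: forward difference where the drift is positive,
    # backward difference otherwise
    Vx[1:-1, 1:-1] = np.where(a[1:-1, 1:-1] > 0, fwd_x, bwd_x)
    Vy[1:-1, 1:-1] = np.where(b[1:-1, 1:-1] > 0, fwd_y, bwd_y)

    # central second difference in x
    Vxx[1:-1, 1:-1] = (V[2:, 1:-1] - 2.0 * V[1:-1, 1:-1] + V[:-2, 1:-1]) / dx**2

    return Vx, Vy, Vxx
```

In the interior this reproduces the scheme described above; the issue is what to use in the first and last rows and columns.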



My question is: what should I do at the extreme grid points? I tried forcing backward or forward differences there, but the solution seems unstable and wrong, with very small or very large derivatives near the boundaries of the grid.



I know that this equation probably has a unique viscosity solution (and that solution is the value function). But is there some finite difference scheme that is known to converge to the viscosity solution even when we do not have any boundary conditions?







pde numerical-methods finite-differences hamilton-jacobi-equation






edited Jan 4 at 7:09 by Dylan

asked Jan 4 at 0:29 by Pcw.












  • In short, no. You need to deduce some boundary condition at $\infty$ in order to solve numerically. Then impose this on the boundary of your computational domain. Sometimes some insight from the stochastic optimization problem can help.
    – Jeff, Jan 13 at 3:44










  • But isn't there any result that guarantees the solution will converge to the value function far away from the boundaries of the grid even if I input the wrong boundaries?
    – Pcw., Jan 13 at 12:31










  • Unfortunately, no. You can get different solutions for each different boundary condition you place on your computational grid, no matter how far out. Just consider a simple ODE $u + u' + u'' = 0$ on the whole real line. The unique bounded solution is $u=0$, but if you restrict to an interval $[-R,R]$ and set some arbitrary boundary condition at $x=\pm R$ you can get almost anything nonzero you like. Of course, here, $u'(\pm R)=0$ or $u(\pm R) = 0$ give the right solution, but it's not so clear in general what to choose.
    – Jeff, Jan 14 at 1:17
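For concreteness, the ODE in the last comment can be solved in closed form (the standard constant-coefficient computation): the characteristic equation and general solution are

$$r^{2}+r+1=0 \;\Longrightarrow\; r=\frac{-1\pm i\sqrt{3}}{2}, \qquad u(x)=e^{-x/2}\!\left(C_{1}\cos\tfrac{\sqrt{3}}{2}x+C_{2}\sin\tfrac{\sqrt{3}}{2}x\right).$$

On $[-R,R]$, generic boundary data selects nonzero $C_{1},C_{2}$, while on all of $\mathbb{R}$ every nonzero choice blows up as $x\to-\infty$, so $u\equiv 0$ is the only bounded solution.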