Calculation of variance of complicated random variable (white noise discretization)














I have been doing some state estimation, and in one part of my work it is necessary to discretize a continuous-time differential equation driven by white noise. I understand the discretization process for the differential equation (the deterministic part), but I do not understand the last equation in the derivation of the covariance of the discretized white noise.



Assume that $A$ is some matrix and that $x(t)$ is some differentiable vector-valued function mapping into $\mathbb{R}^4$. Further assume that $w(t,\omega)$ is some continuous-time wide-sense-stationary stochastic process such that $w:\mathbb{R}\times\Omega \to \mathbb{R}^4$ has mean zero and the autocorrelation of the random variables indexed by this process is a delta function (simply put, $w(t,\omega)$ is white noise). Further, the random variables indexed by this process have covariance matrix $Q$. Now let:
$$\dot{x}(t) = Ax(t) + w(t)$$



Now, using the procedure described in the derivation-of-discretization section on Wikipedia, I obtain the following equations:
$$x[k+1]=e^{AT_s}x[k] + \int_{kT_s}^{(k+1)T_s}e^{A[(k+1)T_s -l]}w(l)\,dl$$
$$u[k]=\int_{kT_s}^{(k+1)T_s}e^{A[(k+1)T_s -l]}w(l)\,dl$$



where $T_s$ is the sampling period and $$x[k]=x(kT_s)$$
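As a quick numerical sanity check of the deterministic part, the transition matrix $e^{AT_s}$ can be computed directly. The matrix $A$ below (a double integrator) and the truncated-series exponential are illustrative assumptions, not part of the question:

```python
import numpy as np

def expm_series(M, terms=30):
    """Matrix exponential via truncated Taylor series (fine for small ||M||)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Hypothetical double-integrator plant, chosen only for illustration.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
Ts = 0.1

Ad = expm_series(A * Ts)  # e^{A T_s}; for this nilpotent A it is exactly [[1, Ts], [0, 1]]
```

For this particular $A$ the series terminates after two terms, so the result is exact; for a general $A$ a library routine such as a Padé-based matrix exponential would be the usual choice.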
The covariance of the random variables indexed by $u[k]$ should then be:
$$\mathbb{E}[u[k]\,u^{T}[k]]=\mathbb{E}\left[\int_{kT_s}^{(k+1)T_s}e^{A[(k+1)T_s -l]}w(l)\,dl \cdot \int_{kT_s}^{(k+1)T_s}w^{T}(t)e^{A^{T}[(k+1)T_s -t]}\,dt\right]$$
I am unable to go any further from here. Wherever I look, the following identity seems to be used; I would like to be able to justify it but can't:



$$\mathbb{E}\left[\int_{kT_s}^{(k+1)T_s}e^{A[(k+1)T_s -l]}w(l)\,dl \cdot \int_{kT_s}^{(k+1)T_s}w^{T}(t)e^{A^{T}[(k+1)T_s -t]}\,dt\right]=\mathbb{E}\left[\int_{kT_s}^{(k+1)T_s}\int_{kT_s}^{(k+1)T_s}e^{A[(k+1)T_s -l]}w(l)w^{T}(t)e^{A^{T}[(k+1)T_s -t]}\,dl\,dt\right]$$



Could someone justify this for me, or are engineers playing sneaky mathematics again?



EDIT: I understand that in its current presentation this is not rigorous and that the differential equation above has no mathematical meaning. Sadly, I am not knowledgeable in the area of Itô integration and stochastic differential equations, so answers avoiding references to such would be more useful. References to reading material are welcome.
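For readers who mainly need the end result numerically: once the identity above is granted and the white-noise autocorrelation $\mathbb{E}[w(l)w^{T}(t)] = Q\,\delta(l-t)$ collapses one of the two integrals, the discrete noise covariance reduces to $Q_d=\int_0^{T_s} e^{A\tau} Q\, e^{A^{T}\tau}\,d\tau$. A minimal quadrature sketch follows; the Riemann-sum integration, the series-based exponential, and the test matrices are all assumptions chosen for illustration:

```python
import numpy as np

def expm_series(M, terms=30):
    """Matrix exponential via truncated Taylor series (fine for small ||M||)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def discrete_noise_cov(A, Q, Ts, n=400):
    """Approximate Qd = integral_0^Ts of e^{A t} Q e^{A t}^T dt by a left Riemann sum."""
    dt = Ts / n
    Qd = np.zeros_like(Q)
    for k in range(n):
        E = expm_series(A * (k * dt))
        Qd += E @ Q @ E.T * dt
    return Qd

# With A = 0 the integrand is constant, so Qd must equal Q * Ts exactly.
A = np.zeros((2, 2))
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])
Qd = discrete_noise_cov(A, Q, Ts=0.1)
```

The $A=0$ case is only a correctness check; for a nonzero $A$ the same routine gives the usual first-order approximation $Q_d \approx Q\,T_s$ plus higher-order corrections.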


















  • If you have some integral $\int_a^b u(s)\,ds$ and some other function $v$, then $v(t)\int_a^b u(s)\,ds=\int_a^b v(t)u(s)\,ds$, as the $t$ is unrelated to the integrand and thus $v(t)$ acts as a constant. Nothing more happens at this step. The interesting part is in the next step, where you exchange the integral over the probability space of the expectation value with the two time integrations. There you need some Fubini-type result, using compactness of the time intervals and the finiteness and positivity of the probability measure.
    – LutzL, Jan 14 at 10:10












  • Sometimes I wonder whether it is the eyes or the brain that does not see. Thank you a million. It would be great if you could develop your comment into a full answer so that I can accept it.
    – TheCoolDrop, Jan 14 at 10:16


















probability stochastic-processes covariance expected-value sde






asked Jan 14 at 9:33 by TheCoolDrop
edited Jan 14 at 10:14










1 Answer



















At this point you have the product of two definite integrals that are constants relative to each other, so you can compute
\begin{align}
\int_a^b u(s)\,ds\cdot \int_a^b v(t)\,dt
&=\int_a^b u(s)\cdot \left[\int_a^b v(t)\,dt\right]ds
\end{align}

Now $u(s)$ is a constant relative to the integral of $v$, so one can move this constant into the inner integral:
\begin{align}
\ldots&=\int_a^b\left[\int_a^b u(s)\cdot v(t)\,dt\right]ds
\end{align}

The interesting part is the next step, where you exchange the integral over the probability space of the expectation value with the two time integrations. The probability argument is usually omitted; putting it back in gives
\begin{align}
\mathbb{E}\left[\int_a^b\int_a^b u(s)\cdot v(t)\,dt\,ds\right]
&=\int_\Omega\left[\int_a^b\int_a^b u(\omega,s)\cdot v(\omega,t)\,dt\,ds\right]dP(\omega)
\end{align}

There you need some Fubini-type result, using compactness of the time intervals and continuity in time of the integrands to obtain boundedness for each $\omega\in\Omega$, and the finiteness and positivity of the probability measure to get absolute boundedness of the joint integration over $[a,b]^2\times\Omega$. Then
\begin{align}
\ldots
&=\int_a^b\int_a^b \int_\Omega\left[u(\omega,s)\cdot v(\omega,t)\right]dP(\omega)\,dt\,ds
=\int_a^b\int_a^b \mathbb{E}\left[u(s)\cdot v(t)\right]\,dt\,ds.
\end{align}
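The exchange of expectation and time integration can also be checked empirically on a toy process. The choices below, $u(\omega,s)=Z(\omega)\,s$ and $v(\omega,t)=Z(\omega)\,t$ on $[a,b]=[0,1]$ with $Z$ standard normal, are hypothetical and picked only so that both sides can be evaluated by hand; each side should come out to $1/4$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
Z = rng.standard_normal(N)  # one shared random variable per sample path

# Left side: E[ (int_0^1 u(s) ds) * (int_0^1 v(t) dt) ].
# For u = Z*s and v = Z*t the inner integrals are Z/2 each, so we estimate E[Z^2/4].
lhs = np.mean((Z / 2) * (Z / 2))

# Right side: int_0^1 int_0^1 E[u(s) v(t)] dt ds = int int s*t*E[Z^2] dt ds = 1/4.
rhs = 0.25
```

A Monte Carlo estimate of the left side converges to the analytic right side, which is exactly the interchange that the Fubini-type argument above justifies in general.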






  • Sadly, I am not at that level of measure theory. Besides solving my problem, I hope that in a year or two I will understand completely what you meant, even though I am using the last line of your result intuitively, arguing that it somehow comes from the linearity of the expected-value operator.
    – TheCoolDrop, Jan 14 at 10:48










  • Yes, linearity is the basis. The problem is that you change the order of limits; Fubini is an extension of Riemann's reordering theorem, which allows, for instance, the formation of the Cauchy product of series.
    – LutzL, Jan 14 at 11:03











answered Jan 14 at 10:37 by LutzL











