Convergence result for approximation error - stationary AR(1) and finite order MA
I am currently struggling with a result concerning the finite-order MA approximation of a simple AR$(1)$ process defined on the double-sided time-index set $T=\mathbb{Z}$. I would be very grateful if someone could help me understand the requirements on the sample size and the approximation order needed to establish the convergence result stated in $(*)$ below.
In particular, let a stationary AR$(1)$ process be written as
$$x_t=\sum_{s=0}^{\infty}\phi^{\,s}\varepsilon_{t-s}\qquad\text{with }\ |\phi|<1$$
and approximated by an $m$-th order MA
$$x_t^m=\sum_{s=0}^{m-1}\phi^{\,s}\varepsilon_{t-s}$$
for some given $m\in\mathbb{N}$. Moreover, I consider a Lipschitz function $g$ such that
$$|\,g(x)-g(y)\,|\leq K\,|x-y|$$
for some Lipschitz constant $K$.
Ultimately, I would like to show (and understand) that for a sample $\big(x_t : t=1,\dots,n\big)$ and a corresponding sample $\big(x_t^m : t=1,\dots,n\big)$,
\begin{equation}\left|\,\frac{1}{n}\sum_{t=1}^n g(x_t)-\frac{1}{n}\sum_{t=1}^n g(x_t^m)\,\right| = o_p(1).\tag{$*$}\end{equation}
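For intuition (not as part of the formal argument), here is a minimal simulation sketch of the quantity in $(*)$; the Gaussian innovations, the choices $\phi=0.8$ and $g=\tanh$, and the finite burn-in standing in for the infinite past are all illustrative assumptions of mine.

```python
import numpy as np

rng = np.random.default_rng(0)

def average_gap(n, m, phi=0.8, burn=2000, g=np.tanh):
    """|(1/n) sum_t g(x_t) - (1/n) sum_t g(x_t^m)|, with the infinite MA
    representation of x_t truncated at `burn` lags as a stand-in for the
    infinite past, and x_t^m keeping only the first m lags."""
    eps = rng.standard_normal(n + burn)        # innovations eps_{t-s}
    w_full = phi ** np.arange(burn)            # phi^s for s = 0, ..., burn-1
    x, xm = np.empty(n), np.empty(n)
    for t in range(n):
        past = eps[t + burn - 1::-1][:burn]    # eps_t, eps_{t-1}, ..., most recent first
        x[t] = w_full @ past                   # x_t   = sum_{s < burn} phi^s eps_{t-s}
        xm[t] = w_full[:m] @ past[:m]          # x_t^m = sum_{s < m}    phi^s eps_{t-s}
    return abs(g(x).mean() - g(xm).mean())

for m in (1, 2, 5, 10, 20):
    print(m, average_gap(n=5000, m=m))
```

In runs like this the gap appears to shrink roughly geometrically in $m$, essentially uniformly over $n$, but I would like to see the formal argument.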
I suppose that, by Lipschitz continuity, I can write
$$\left|\,\frac{1}{n}\sum_{t=1}^n g(x_t)-\frac{1}{n}\sum_{t=1}^n g(x_t^m)\,\right|\leq\frac{1}{n}\sum_{t=1}^n K\,\big|x_t-x_t^m\big|.$$
Then, seeing as
$$x_t-x_t^m=\sum_{s=0}^{\infty}\phi^{\,s}\varepsilon_{t-s}-\sum_{s=0}^{m-1}\phi^{\,s}\varepsilon_{t-s}=\sum_{s=m}^{\infty}\phi^{\,s}\varepsilon_{t-s}=\phi^{\,m}\sum_{s=m}^{\infty}\phi^{\,s-m}\varepsilon_{t-s},$$
it would seem to follow that
$$\left|\,\frac{1}{n}\sum_{t=1}^n g(x_t)-\frac{1}{n}\sum_{t=1}^n g(x_t^m)\,\right|\leq\frac{K}{n}\sum_{t=1}^n\left|\,\phi^{\,m}\sum_{s=m}^{\infty}\phi^{\,s-m}\varepsilon_{t-s}\,\right|.$$
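For what it is worth, if one is willing to assume in addition that the innovations have a finite first absolute moment, say $\mathbb{E}|\varepsilon_t|=\mu<\infty$ (an extra assumption on my part), then taking expectations of the right-hand side and using the triangle inequality gives
$$\mathbb{E}\left[\frac{K}{n}\sum_{t=1}^n\left|\,\phi^{\,m}\sum_{s=m}^{\infty}\phi^{\,s-m}\varepsilon_{t-s}\,\right|\right]\leq\frac{K}{n}\sum_{t=1}^n|\phi|^{\,m}\sum_{s=m}^{\infty}|\phi|^{\,s-m}\,\mathbb{E}|\varepsilon_{t-s}|=\frac{K\mu\,|\phi|^{\,m}}{1-|\phi|},$$
so Markov's inequality would seem to make the bound $O_p\big(|\phi|^{\,m}\big)$ uniformly in $n$. I am not sure, however, whether this sketch is sound or whether it already settles how $m$ and $n$ must be related.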
If I were to argue in a recklessly cavalier manner, I would say that it is sort of obvious that $n^{-1}K\to0$ as $n\to\infty$, while $|\phi|<1$ and $\varepsilon_t=O_p(1)$ ensure that, as $m\to\infty$,
$$\left|\,\phi^{\,m}\sum_{s=m}^{\infty}\phi^{\,s-m}\varepsilon_{t-s}\,\right|\to0,$$
which would supposedly remain true when summing $n\to\infty$ such terms.
While I am well aware that I made a couple of big mistakes in the above line of reasoning, I have a hard time pinpointing them exactly. For example, I see that with $s$ running up to $\infty$ and $m\to\infty$ I would end up with a term involving $\phi^{\,\infty-\infty}$, which is not defined. My supposition at this point is that $m$ needs to tend to $\infty$ at a slower rate than $n$.
I would very much appreciate it if someone could walk through the reasoning required to establish that the absolute difference of sample averages in $(*)$ is $o_p(1)$, especially the right way (in terms of order and rate) to let $n\to\infty$ and $m\to\infty$. I suspect there is a case to be made for taking $m$ to be a function of $n$, thus ensuring that $m$ tends to $\infty$ sufficiently slowly.
Thank you so very much.
Best wishes,
Jon
Tags: probability, limits, convergence, time-series
asked Jan 30 at 21:18 by J.Beck