How to numerically test a limsup? (Example : numerical simulation of the law of iterated logarithm)
I have a random walk $S_n$ (the increments are Bernoulli $\pm 1$ with probability $1/2$ each). I'd like to test numerically the law of the iterated logarithm:
$$\limsup_{n \rightarrow \infty} \underbrace{\frac{S_n}{\sqrt{2 n \log \log n}}}_{Y_n} = 1, \qquad \mathrm{a.s.}$$
My attempts have failed (see this question) since, in a numerical simulation, you can never evaluate the quantity that the $\limsup$ would require (computer memory not being infinite...):
$$Z_k=\sup_{\ell \geq k} Y_{\ell}$$
but only:
$$Y_{k,n}=\max_{k\leq \ell \leq n} Y_\ell$$
Question: how can you do a simulation that showcases that the $\limsup$ is $1$, with a plot showing convergence to $1$ (contrary to this failed attempt)?
Sidenote: in my case the increments are not exactly independent, but close to it. I'd like to test numerically whether a law-of-the-iterated-logarithm-like result holds. But for now I would already be more than happy to get numerical evidence of the standard law in the standard case of independent increments.
Sidenote 2: code for the failed attempt:
import numpy as np
import matplotlib.pyplot as plt

N = 10 * 1000 * 1000
B = 2 * np.random.binomial(1, 0.5, N) - 1  # N independent +1/-1 steps, each with probability 1/2
B = np.cumsum(B)                           # random walk S_n (B[0] is S_1)
plt.plot(B); plt.show()

n = np.arange(1, N + 1)
C = B[2:] / np.sqrt(2 * n[2:] * np.log(np.log(n[2:])))  # Y_n, defined for n >= 3 where log(log(n)) > 0
M = np.maximum.accumulate(C[::-1])[::-1]   # suffix max Y_{k,N}, see http://stackoverflow.com/questions/35149843/running-max-limsup-in-numpy-what-optimization
plt.plot(M); plt.show()
Tags: probability, probability-theory, random-variables, random-walk, simulation
asked Feb 2 '16 at 22:00 by Basj, edited Jan 29 at 11:46
You can "test" the theorem by reproducing the illustrations with iterated logarithms in the linked Wikipedia article. If we had $limsup > 1$, we would exepct curves exceeding the green bounds more frequently on the right side of the graph. If we had $limsup < 1$, we would expect sparse areas inside the green curves, with curves entering those areas most rarely on the right side of the graph. Of course these tests are not conclusive, but they illustrate the theorem well.
– Matt F., Feb 1 at 5:24
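A minimal sketch of the test this comment describes, plotting several independent walks against the $\pm\sqrt{2n \log\log n}$ envelope (the number of walks, the horizon N, and the log-scaled axis are arbitrary illustrative choices):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
N = 1_000_000                       # steps per walk
n = np.arange(3, N + 1)             # start at n = 3 so that log(log(n)) > 0
bound = np.sqrt(2 * n * np.log(np.log(n)))

for _ in range(20):                 # 20 independent sample paths
    S = np.cumsum(rng.choice([-1, 1], size=N))
    plt.plot(n, S[2:], lw=0.5, alpha=0.6)

plt.plot(n, bound, 'g', label=r'$\pm\sqrt{2n\,\log\log n}$')
plt.plot(n, -bound, 'g')
plt.xscale('log')
plt.legend(); plt.show()

If the $\limsup$ is $1$, the paths should keep grazing the green curves all the way to the right, without systematically escaping them or leaving a widening gap inside them.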
1 Answer
The good news is that, for a given value of $n$, the walk value at $n$ only affects the lower-indexed values of our sequence, which we don't care about: our interest is in the limiting behaviour. So there should be no need to store the sequence.
The problem is that $f(n) = \frac{S_n}{\sqrt{2 n \log\log n}}$ doesn't converge, so a graphical proof of a line tending to a value is going to require some property of $S_n$ other than a smooth function. As you point out, the sup cannot be this property: it can't be calculated.
That doesn't mean we can't have a stab at a numerical demonstration, and if we have infinite time, maybe even something stronger.
For example, it would suffice to show
$\forall \epsilon > 0,\ \forall k:\ \exists n > k \text{ s.t. } f(n) > 1 - \epsilon$
and
$\forall \epsilon > 0,\ \exists k \text{ s.t. } \forall n > k:\ f(n) < 1 + \epsilon$
This is not easily directly testable as is.
For example, I have run this out to around a billion steps with just any old PRNG. Even with $\epsilon = 0.5$ there was no pattern that made this clear.
However, if we knew the relation between $k$ and $\epsilon$, we could try to support it by counting the hits in the above two relations.
If we didn't know this relation, we could try something like successively halving $\epsilon$, starting at $0.5$. For each value of $\epsilon$ we can 'show' that the first relation holds by counting the instances where $f(n)$ exceeds $1 - \epsilon$ and checking that, over the next 1000 such instances, the gap between them is not increasing on average. We can 'show' the second even less well, but the ratio of those 'hits' to the hits of the first relation should tend to zero.
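A rough streaming sketch of this counting scheme, for a single fixed $\epsilon$ and counting hits only (gap tracking omitted; the chunk size, the number of chunks, and the clamp at $n = 3$ are arbitrary illustrative choices, and a real run would halve $\epsilon$ in an outer loop):

import numpy as np

rng = np.random.default_rng(0)   # PCG64, period ~ 2**128 (see the PRNG caveat below)
eps = 0.5                        # a real run would successively halve this
chunk = 1_000_000
S, n0 = 0, 0                     # running walk value and step count
lower_hits = upper_hits = 0      # counts of f(n) > 1 - eps and f(n) > 1 + eps

for _ in range(100):             # 10**8 steps total, one chunk in memory at a time
    steps = rng.choice([-1, 1], size=chunk)
    walk = S + np.cumsum(steps)
    n = n0 + np.arange(1, chunk + 1)
    f = walk / np.sqrt(2 * n * np.log(np.log(np.maximum(n, 3))))  # clamp: log(log(n)) is undefined or negative for n < 3
    lower_hits += int(np.sum(f > 1 - eps))
    upper_hits += int(np.sum(f > 1 + eps))
    S, n0 = walk[-1], n[-1]

print(lower_hits, upper_hits)    # the ratio upper/lower should shrink as the run grows

Memory stays O(1) in the number of steps, which is the point of the first paragraph above: nothing forces us to store the sequence.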
There are some practical issues though:
- These numbers are going to get huge. In practice there is no way to do this with finite memory, as the size of the numbers in the floating-point arithmetic will grow without bound.
- The PRNG would have to be perfect for this to work, which in practice is not possible: eventually you will reach its period, at which point all the theory (and practice) disappears and gets replaced by something a lot more linear...
answered Feb 5 at 11:56 by drjpizzle