Race of the wealthy gamblers: How do I get this closed form?
UPDATE:
If you can prove the following identity:
$$\sum\limits_{t=0}^p \left(\frac{\binom{2t+1}{t}}{2^{2t+1}}\right)^2 \frac{1}{t+2} = -1 + \frac{(2+p)}{2^{4p+4}}\binom{2p+3}{p+1}^2$$
Then this is good enough to solve this question and get my gratitude as well as the 50 point bounty. I got this identity from Mathematica. See my answer below for details.
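Before attempting a proof, the identity can be checked in exact rational arithmetic for small $p$ (a sanity check, not a proof; the helper names `lhs`/`rhs` are mine, not from the original post):

```python
from fractions import Fraction
from math import comb

def lhs(p):
    # left-hand side: sum_{t=0}^p (C(2t+1,t)/2^(2t+1))^2 / (t+2), exactly
    return sum(Fraction(comb(2*t+1, t)**2, 4**(2*t+1) * (t+2)) for t in range(p+1))

def rhs(p):
    # right-hand side: -1 + (p+2) C(2p+3,p+1)^2 / 2^(4p+4)
    return Fraction((p+2) * comb(2*p+3, p+1)**2, 2**(4*p+4)) - 1

assert all(lhs(p) == rhs(p) for p in range(50))
```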
My question relates to an infinite summation and its very elegant closed form. The expression is the solution to a nice problem, which I'll get into as well. Here is the summation:
Let:
$$a_t = \left(\binom{2t+1}{t} - \binom{2t+1}{t-1}\right)\frac{1}{2^{2t+2}}$$
and,
$$b_t = \left(\binom{2t+2}{t} - \binom{2t+2}{t-1}\right)\frac{1}{2^{2t+3}}$$
For the very first terms of these sequences, we take $\binom{n}{-1} = 0$ for all $n$.
And the summation:
$$S = \sum_{t=0}^{\infty} \left(1-\sum_{l=0}^{t-1}b_l\right) a_t = 1- \sum_{t=0}^{\infty} \left(\sum_{l=0}^{t-1}b_l\right) a_t = 7-\frac{20}{\pi} \tag{1}$$
(The middle equality uses the fact that $\sum_{t=0}^\infty a_t = 1$.)
I know the expression above is correct (verified with a Python program), but I have no idea how to prove this and would like at least to see how I might approach it.
Now, why do I care about this summation? It is the solution to the following problem:
Imagine two wealthy gamblers start tossing their own separate fair coins, winning 1$ on heads and losing 1$ on tails. Both start at 0$ and have infinite bank balances. The first one wants to get to 2$ and the second one wants to get to 3$. What is the probability that the first one will reach his goal before the second one?
One way to solve this is to consider the probability that a gambler reaches exactly $k$ dollars for the first time on toss $n$. If he has $t$ tails, then he needs $t+k$ heads, so $n=2t+k$ (note that if $k=2$, he can reach the target only on even tosses, and if $k=3$, only on odd tosses). This probability turns out to be:
$$\left(\binom{k+2t-1}{t} - \binom{k+2t-1}{t-1}\right) \frac{1}{2^{k+t}} \cdot \frac{1}{2^t}$$
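As a sketch, this first-passage formula can be cross-checked against brute-force enumeration of all $2^n$ coin sequences (the helper names here are illustrative, not from the original post):

```python
from itertools import product
from math import comb

def first_passage_prob(k, t):
    # ballot-style formula: P(first reach +k on toss n = 2t + k)
    return (comb(k + 2*t - 1, t) - (comb(k + 2*t - 1, t - 1) if t >= 1 else 0)) / 2**(k + 2*t)

def first_passage_brute(k, n):
    # fraction of length-n +/-1 sequences whose running sum first hits k at step n
    hits = 0
    for seq in product((1, -1), repeat=n):
        path, s = [], 0
        for x in seq:
            s += x
            path.append(s)
        if path[-1] == k and all(v < k for v in path[:-1]):
            hits += 1
    return hits / 2**n

for k in (1, 2, 3):
    for t in range(3):
        assert abs(first_passage_prob(k, t) - first_passage_brute(k, 2*t + k)) < 1e-12
```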
Now, let $A_n$ be the event that the 2$ targeting gambler wins on toss $n$, and let $A$ be the event that he wins at all. Then $A = \bigcup\limits_{n=1}^{\infty}A_n$, and since the $A_n$ are disjoint, $P(A) = \sum\limits_{n=1}^\infty P(A_n)$. Now, for the 2$ targeting gambler to win on the $n$th toss, two things should happen:
- He should reach his target on the $n$th toss for some even $n$.
- His competitor, the 3$ targeting gambler, should not reach his target on any toss up to $n-1$ (he can reach his target only on odd tosses).
Putting all of this together, you can see that the probability that the 2$ gambler wins is given by equation (1) above. I have put together some Python code that approximates $S$ by summing up to a large number of tosses. A Reddit user pointed out the closed form, which he obtained using a slightly different but related approach and Mathematica. Now, how do I prove that the summation above has the closed form mentioned, $7-\frac{20}{\pi}$?
EDIT:
Here is a short Python snippet that demonstrates the summation in equation (1) above.
import numpy as np
from scipy.special import comb  # comb(n, k) returns 0 for k < 0

a_t = np.array([(comb(2*t+1, t) - comb(2*t+1, t-1))/2**(2*t+2) for t in range(500)])
b_t = np.array([(comb(2*t+2, t) - comb(2*t+2, t-1))/2**(2*t+3) for t in range(500)])
b_sum = 1 - np.concatenate(([0], np.cumsum(b_t)))  # b_sum[t] = 1 - sum_{l=0}^{t-1} b_l
s = sum(a_t * b_sum[:500])
print(7 - 20/np.pi - s)  # should be close to 0
Also, a Mathematica snippet confirms the result (thanks to @SteveKass for helping with that).
Tags: probability, sequences-and-series, summation, markov-chains
Comments:

– Steve Kass (Dec 22 '18 at 23:36): This game can be considered a random walk in the plane beginning at $(-2,-3)$, with diagonal steps $(\pm1,\pm1)$. It ends when the walk hits a coordinate axis, and you want the probability of escape from the first quadrant at $(0,k)$ with $k>0$. There is a fair bit of literature on escape probabilities for random walks (not my specialty!), and maybe this framing is helpful. I found one paper mentioning the diagonal-steps walk at digitalcommons.wku.edu/cgi/… Maybe it or some references in it will be useful.

– TonyK (Dec 22 '18 at 23:43): Where did that strange expression $7-\dfrac{20}{\pi}$ come from? Is there a theoretical basis for it, or is it just numerically close to your computed value?

– Steve Kass (Dec 22 '18 at 23:43): Sorry - too late to edit the comment, but the starting point should be $(2,3)$.

– jmerry (Dec 23 '18 at 0:17): I had posted an answer, but it was to a misreading of the question. Deleted now. To clarify so that others don't make the same mistake: the two gamblers are not gambling against each other, but instead independently gambling against the house. It's a two-dimensional problem.

– Steve Kass (Dec 23 '18 at 2:42): Actually, it might not help so much, because it has $\pi$ in the numerator, not the denominator, but hopefully you'll get there.
asked Dec 22 '18 at 22:02 by Rohit Pandey; edited Dec 27 '18 at 8:36
6 Answers
Let $a_t \triangleq \left( \frac{\binom{2t+1}{t}}{2^{2t+1}} \right)^2 \frac{1}{t+2}$. We need $S_p \triangleq \sum\limits_{t=0}^p a_t$ in closed form.
First, note that
$$\frac{a_{t+1}}{a_t} = \frac{t+2}{t+3} \cdot \frac{1}{2^4} \cdot \left( \frac{\binom{2t+3}{t+1}}{\binom{2t+1}{t}} \right)^2 = \frac{t+2}{t+3} \cdot \frac{1}{2^4} \cdot \left( \frac{(2t+3)(2t+2)}{(t+1)(t+2)} \right)^2 = \frac{(t+3/2)^2}{(t+2)(t+3)} \tag 1$$
Using $(1)$ repeatedly starting with $a_0 = 1/8$, we get:
$$a_t = \frac{1}{8} \cdot \frac{(3/2)^2}{2\cdot 3} \cdot \frac{(5/2)^2}{3\cdot 4} \cdots \frac{(t+1/2)^2}{(t+1)(t+2)} \text{ for } t\geq 1 \tag 2$$
Using $(2)$, we have
$$S_0 = a_0 = \frac{1}{8} = \frac{9}{8} - 1 = \bbox[yellow]{\frac{1}{8}\cdot 3^2 - 1}$$
$$S_1 = a_1 + a_0 = a_1 + S_0 = \frac{1}{8}\cdot\frac{(3/2)^2}{2\cdot 3} + \frac{3^2}{8} - 1 = \bbox[yellow]{\frac{1}{8}\cdot\frac{(3/2)^2}{2\cdot 3}\cdot 5^2 - 1}$$
$$S_2 = a_2 + S_1 = \frac{1}{8}\cdot\frac{(3/2)^2}{2\cdot 3}\cdot\frac{(5/2)^2}{3\cdot 4} + \frac{5^2}{8}\cdot\frac{(3/2)^2}{2\cdot 3} - 1 = \bbox[yellow]{\frac{1}{8}\cdot\frac{(3/2)^2}{2\cdot 3}\cdot\frac{(5/2)^2}{3\cdot 4}\cdot 7^2 - 1}$$
The general pattern we observe (it can be proved formally via induction) is
$$
\begin{align}
S_p
&= \frac{1}{8}\cdot\frac{(3/2)^2}{2\cdot 3}\cdot\frac{(5/2)^2}{3\cdot 4}\cdots\frac{((2p+1)/2)^2}{(p+1)(p+2)}\cdot(2p+3)^2 - 1 \\
&= \frac{1}{16}\cdot\frac{(3/2)^2}{3^2}\cdot\frac{(5/2)^2}{4^2}\cdots\frac{((2p+1)/2)^2}{(p+2)^2}\cdot(2p+3)^2\cdot(p+2) - 1 \\
&= \frac{1}{2^{2p+4}}\cdot\frac{3^2}{3^2}\cdot\frac{5^2}{4^2}\cdots\frac{(2p+1)^2}{(p+2)^2}\cdot(2p+3)^2\cdot(p+2) - 1 \\
&= \frac{p+2}{2^{2p+4}}\left[\frac{3\cdot 5\cdot 7\cdots(2p+1)(2p+3)}{2\cdot 3\cdot 4\cdots(p+1)(p+2)}\cdot 2\right]^2 - 1 \\
&= \frac{p+2}{2^{2p+4}}\left[\frac{3\cdot 5\cdot 7\cdots(2p+1)(2p+3)}{(p+2)!}\cdot\frac{(p+1)!\,2^{p+1}}{(p+1)!\,2^{p+1}}\cdot 2\right]^2 - 1 \\
&= \frac{p+2}{2^{2p+4}}\left[\frac{(2p+3)!}{(p+2)!\,(p+1)!}\cdot\frac{1}{2^{p}}\right]^2 - 1 \\
&= \frac{p+2}{2^{4p+4}}\binom{2p+3}{p+1}^2 - 1
\end{align}
$$
which is what we want.
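The ratio identity $(1)$ and the final closed form can both be spot-checked in exact arithmetic (a sanity check on the induction; the helper name `a` is mine):

```python
from fractions import Fraction
from math import comb

def a(t):
    # a_t = (C(2t+1,t)/2^(2t+1))^2 / (t+2)
    return Fraction(comb(2*t+1, t)**2, 2**(4*t+2) * (t+2))

# ratio identity (1): a_{t+1}/a_t = (t+3/2)^2 / ((t+2)(t+3))
for t in range(30):
    assert a(t+1) / a(t) == Fraction((2*t+3)**2, 4 * (t+2) * (t+3))

# closed form: S_p = (p+2)/2^(4p+4) * C(2p+3,p+1)^2 - 1
for p in range(30):
    assert sum(a(t) for t in range(p+1)) == Fraction((p+2) * comb(2*p+3, p+1)**2, 2**(4*p+4)) - 1
```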
– Markus Scheuer (Dec 27 '18 at 23:26): Nice solution. (+1)
This answer is based upon the Gosper algorithm. It can also be used to solve structurally similar identities (see (4) below). We follow closely Summa Summarum by M. E. Larsen.
We set
\begin{align*}
\color{blue}{A(p):=\sum_{t=0}^p\frac{1}{t+2}\binom{2t+1}{t}^2\frac{1}{2^{4t+2}}}\tag{1}
\end{align*}
We rewrite $A(p)$ as follows
\begin{align*}
A(p)&=\sum_{t=0}^p a_t=a_0+a_1+a_2+\cdots+a_p\\
&=a_0+a_0\frac{a_1}{a_0}+a_0\frac{a_1a_2}{a_0a_1}+\cdots+a_0\frac{a_1a_2\cdots a_p}{a_0a_1\cdots a_{p-1}}\\
&=a_0\sum_{t=0}^p\prod_{j=0}^{t-1}\frac{a_{j+1}}{a_j}\tag{2}
\end{align*}
We obtain $a_0=\frac{1}{8}$ and
\begin{align*}
\frac{a_{j+1}}{a_j}&=\frac{j+2}{j+3}\cdot\frac{\binom{2j+3}{j+1}^2}{\binom{2j+1}{j}^2}\cdot\frac{2^{4j+2}}{2^{4j+6}}=\frac{(2j+3)^2}{4(j+2)(j+3)}\\
&=\frac{\left(-\frac{3}{2}-j\right)^2}{(-2-j)(-3-j)}
\end{align*}
In the following we use the falling factorial notation $x^{\underline{t}}=x(x-1)(x-2)\cdots(x-t+1)$.
From (1) and (2) we get
\begin{align*}
A(p)&=\frac{1}{8}\sum_{t=0}^{p}\prod_{j=0}^{t-1}\frac{\left(-\frac{3}{2}-j\right)^2}{(-2-j)(-3-j)}\\
&=\frac{1}{8}\sum_{t=0}^p\frac{\left(-\frac{3}{2}\right)^{\underline{t}}\left(-\frac{3}{2}\right)^{\underline{t}}}{(-2)^{\underline{t}}(-3)^{\underline{t}}}\tag{3}
\end{align*}
We consider Corollary 6.2 of Summa Summarum, which states for $a,b,c,d\in\mathbb{C}$ with $a+b=c+d$:
\begin{align*}
\sum_{t=0}^p\frac{a^{\underline{t}}b^{\underline{t}}}{(c-1)^{\underline{t}}(d-1)^{\underline{t}}}
=\frac{1}{(a-c)(b-c)}\left(\frac{a^{\underline{p+1}}b^{\underline{p+1}}}{(c-1)^{\underline{p}}(d-1)^{\underline{p}}}-cd\right)\tag{4}
\end{align*}
We can apply Corollary 6.2, since in (3) we have $a=b=-\frac{3}{2}$, $c=-1$, $d=-2$, so that $a+b=c+d$. We get
\begin{align*}
\color{blue}{A(p)}&=\frac{1}{8}\cdot\frac{1}{\left(-\frac{1}{2}\right)\left(-\frac{1}{2}\right)}
\left(\frac{\left(-\frac{3}{2}\right)^{\underline{p+1}}\left(-\frac{3}{2}\right)^{\underline{p+1}}}{(-2)^{\underline{p}}(-3)^{\underline{p}}}-(-1)(-2)\right)\\
&=\frac{1}{2}\cdot\frac{\left(-\frac{3}{2}\right)^{\underline{p+1}}\left(-\frac{3}{2}\right)^{\underline{p+1}}}{(-2)^{\underline{p}}(-3)^{\underline{p}}}-1\tag{5}\\
&=\frac{1}{2}\left(\frac{(2p+3)!}{2^{2p+2}(p+1)!}\right)^2\cdot\frac{1}{(p+1)!\,\frac{1}{2}(p+2)!}-1\\
&\,\,\color{blue}{=\frac{p+2}{2^{4p+4}}\binom{2p+3}{p+1}^2-1}
\end{align*}
and the claim follows.
Comment:
- In (5) we use
\begin{align*}
\left(-\frac{3}{2}\right)^{\underline{p+1}}&=\left(-\frac{3}{2}\right)\left(-\frac{5}{2}\right)\cdots\left(-\frac{3}{2}-p\right)\\
&=\frac{(-1)^{p+1}}{2^{p+1}}(2p+3)!!=\frac{(-1)^{p+1}}{2^{p+1}}\cdot\frac{(2p+3)!}{(2p+2)!!}=\frac{(-1)^{p+1}(2p+3)!}{2^{p+1}2^{p+1}(p+1)!}\\
&=\frac{(-1)^{p+1}(2p+3)!}{2^{2p+2}(p+1)!}\\
(-2)^{\underline{p}}&=(-2)(-3)\cdots(-2-(p-1))=(-1)^p(p+1)!\\
(-3)^{\underline{p}}&=(-3)(-4)\cdots(-3-(p-1))=(-1)^p\,\frac{1}{2}(p+2)!
\end{align*}
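For this particular choice $a=b=-\frac{3}{2}$, $c=-1$, $d=-2$, Corollary 6.2 can be verified in exact rational arithmetic (a sketch with a hand-rolled falling factorial, not the book's derivation):

```python
from fractions import Fraction

def falling(x, t):
    # falling factorial x^(underline t) = x(x-1)...(x-t+1); empty product is 1
    out = Fraction(1)
    for j in range(t):
        out *= x - j
    return out

a = b = Fraction(-3, 2)
c, d = Fraction(-1), Fraction(-2)   # so a + b = c + d = -3

for p in range(25):
    lhs = sum(falling(a, t) * falling(b, t) / (falling(c - 1, t) * falling(d - 1, t))
              for t in range(p + 1))
    rhs = (falling(a, p + 1) * falling(b, p + 1)
           / (falling(c - 1, p) * falling(d - 1, p)) - c * d) / ((a - c) * (b - c))
    assert lhs == rhs
```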
– apsad (Dec 28 '18 at 6:12): Great to know about the Gosper algorithm and that there are several related methods for (in)definite sums of hypergeometric terms!

– Markus Scheuer (Dec 28 '18 at 7:44): @apsad: Yes, I also think this technique is valuable. You are right, the calculations in the book are in terms of indefinite sums, which I tried to hide in my post. The author also explicitly tried to avoid hypergeometric sums, contrary to some other famous books around this theme.

– Rohit Pandey (Dec 28 '18 at 9:25): Thanks, very nice proof (+1). Similar in spirit to the one by @apsad, but uses the nice identity to make things super clean. Looked up Corollary 6.2 in the book (web.math.ku.dk/~mel/larsen.pdf) and it has a $\delta k$ added. Is that because those are indefinite summations? Will need to go back several chapters to figure out what exactly it means.

– Markus Scheuer (Dec 28 '18 at 10:39): @RohitPandey: You're welcome. Yes, the corollary uses indefinite summation. The connection with definite sums is given in (1.10). Thanks for the link to the book. :-)
As Steve Kass commented, there is a problem somewhere.
Using a CAS, what I obtained after simplifications is
$$\sum_{l=1}^{t-1}b_l=\frac{7}{8}+\frac{(t+3)(3t+4)}{3(t+1)\,2^{2(t+1)}}\left(\binom{2t+2}{t-1}-\binom{2t+2}{t}\right)=\frac{7}{8}-\frac{(3t+4)\,\Gamma\!\left(t+\frac{3}{2}\right)}{\sqrt{\pi}\,\Gamma(t+3)}$$
The partial sum
$$S_p = \sum_{t=1}^{p}\left(\sum_{l=1}^{t-1}b_l\right)a_t$$
is evaluated (the result is very nasty but "almost" explicit), but, given by the same CAS,
$$S_\infty = \frac{20}{\pi}-\frac{195}{32}$$
is obtained without any problem (how? this is the question).
Edit
After your two major changes (summations starting at $0$ and the formula for $S$), the results become quite different.
$$\sum_{l=0}^{t-1}b_l=1-\frac{(3t+4)\,\Gamma\!\left(t+\frac{3}{2}\right)}{\sqrt{\pi}\,\Gamma(t+3)}$$
$$S_p = \sum_{t=0}^{p}\left(1-\sum_{l=0}^{t-1}b_l\right)a_t=7-\frac{4(5p+12)\,\Gamma\!\left(p+\frac{5}{2}\right)^2}{\pi\,\Gamma(p+3)^2}$$
Using the Stirling approximation for large values of $p$, we have
$$S_p=7-\frac{20}{\pi}\left(1+\frac{3}{20p}-\frac{59}{160p^2}+\frac{573}{640p^3}-\frac{21977}{10240p^4}+O\!\left(\frac{1}{p^5}\right)\right)$$
$$S_\infty = 7-\frac{20}{\pi}$$
– Rohit Pandey (Dec 23 '18 at 8:09): But you can see the Python code... it implements exactly the equation (1) I described above. I don't have access to your CAS, so can you possibly debug by checking the actual sequences and comparing with the ones in the Python snippet?

– Rohit Pandey (Dec 23 '18 at 8:11): And in any case, the core question is how the summation is translating to the $\frac{20}{\pi}$ term.

– Steve Kass (Dec 24 '18 at 22:35): You can get a little help towards gamma function expressions if you parametrize part of the sum. There are supposedly ways to see what Mathematica is doing with Trace or TraceInternal, but I've never tried. i.stack.imgur.com/lSL7c.png i.stack.imgur.com/FMnVW.png ($a$ wasn't the best parameter name, but it works.) A different CAS might be helpful, per Claude's comment.

– Steve Kass (Dec 25 '18 at 19:52): Within the OEIS entry for the squares of Catalan numbers (oeis.org/A001246) is this identity: $$\sum_{t=0}^{\infty}\frac{\binom{2t}{t}^2}{2^{4t}(t+1)}=\frac{4}{\pi}.$$ That identity and a bit more can be found here (scroll down just a bit from "Similar Series"): en.wikipedia.org/wiki/…. Your result might follow from this.

– Rohit Pandey (Dec 25 '18 at 23:31): Ahhah... very interesting. Will try to read through and prove it, but it should be just a matter of time now. Also interesting to note that if we change the $(t+1)$ in the denominator to $(t+1)^2$, we get $4(\frac{4}{\pi}-1)$. @SteveKass: You have been immensely helpful in this endeavor. Please add a brief answer with these resources and I'll happily award you the 50 point bounty.
The basic idea of this question was to understand how $\pi$ was coming into these expressions for competing-gambler problems. So, I decided to tackle a slightly simpler problem. Instead of finding the probability that a 2$ chasing gambler would beat a 3$ chasing gambler, I tried to find the probability that a 2$ chasing gambler would beat a 1$ chasing gambler. The answer here is $\frac{4}{\pi}-1$.
First, I wrote the PDF of the stopping time of the 1$ chasing gambler in a simplified form:
$$z_t = \frac{\binom{2t}{t}}{(t+1)2^{2t+1}}$$
Then, using Mathematica, I got the survival function (there is also an elegant way to get it from "scratch"):
$$Z_t = \sum\limits_{l=t+1}^{\infty}z_l = \frac{\binom{2t+1}{t}}{2^{2t+1}}$$
Similarly, the PDF of the stopping time of the 2$ chasing gambler becomes:
$$a_t = \frac{\binom{2t+1}{t}}{(t+2)2^{2t+1}}$$
The probability that the 2$ targeting gambler beats the 1$ targeting gambler within $2p+2$ tosses then becomes:
$$S_p = \sum\limits_{t=0}^{p} a_t Z_t = \sum\limits_{t=0}^{p} \frac{\binom{2t+1}{t}^2}{(t+2)2^{4t+2}}$$
Now, this is the part that I haven't figured out how to do without Mathematica. The expression for $S_p$ becomes:
$$S_p = -1 + (p+2)\frac{\binom{2p+3}{p+1}^2}{2^{4p+4}} \tag{2}$$
Now, using the Stirling approximation for the factorials and letting $p \to \infty$, we get:
$$S_{\infty} = \frac{4}{\pi}-1$$
Everything is fine apart from equation (2). I don't know how to prove that one, which is why I added an update to the question at the very top asking for help with proving this expression.
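Equation (2) can at least be confirmed in exact arithmetic for small $p$, along with the numerical approach to $\frac{4}{\pi}-1$ (a check, not the missing proof; `S`/`closed` are hypothetical helper names):

```python
from fractions import Fraction
from math import comb, pi

def S(p):
    # S_p = sum_{t<=p} C(2t+1,t)^2 / ((t+2) 2^(4t+2)), exactly
    return sum(Fraction(comb(2*t+1, t)**2, (t+2) * 2**(4*t+2)) for t in range(p+1))

def closed(p):
    # equation (2)
    return Fraction((p+2) * comb(2*p+3, p+1)**2, 2**(4*p+4)) - 1

for p in range(30):
    assert S(p) == closed(p)

# the partial sums approach 4/pi - 1 (slowly, at rate O(1/p))
assert abs(float(S(300)) - (4/pi - 1)) < 5e-3
```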
But I'm starting to get a rough inkling of where these $\pi$'s are coming from. Basically, all these expressions describing the PDFs and survival functions of the gamblers involve binomial terms close to the mode.
The probabilities we're after (one of the gamblers beating the other) involve two of these binomial terms close to their mode, multiplied and summed over all possible $t$ up to $p$. These summations tend to produce expressions that also involve squares of binomial terms close to the mode, this time involving $p$. And this is the part I need help with: I don't understand why this happens.
Once we accept that, however, we're left with a term like equation (2), which involves a binomial coefficient around the mode for large $p$. As $p$ becomes large, this binomial distribution starts to resemble a Gaussian (by the de Moivre-Laplace theorem). And the Gaussian's PDF around its mode carries a $\frac{1}{\sqrt{\pi}}$ term, so squaring it yields a $\frac{1}{\pi}$ term.
Since there was some interest in this question on this and other forums, I wanted this page to become a wiki for these kinds of wealthy-gambler race problems. So here are some expressions.
For a gambler targeting 1$:
- PDF of stopping time (probability he will reach 1$ on toss $2t+1$, where $t$ is the number of tails):
$$z_t = \frac{{2t \choose t}}{(t+1)2^{2t+1}}$$
- Survival function of stopping time (probability that he will reach 1$ after toss $2t+1$):
$$Z_t = \frac{{2t+1 \choose t}}{2^{2t+1}}$$
For a gambler targeting 2$:
- PDF of stopping time:
$$a_t = \frac{{2t+1 \choose t}}{(t+2)2^{2t+1}}$$
- Survival function of stopping time:
$$A_t = \frac{{2t+3 \choose t+1}}{2^{2t+2}}$$
For a gambler targeting 3$:
- PDF of stopping time:
$$b_t = \frac{3}{3+t}{2t+2 \choose t}\frac{1}{2^{2t+3}}$$
- Survival function of stopping time:
$$B_t = \frac{3t+7}{t+2} \frac{{2t+2 \choose t}}{2^{2t+2}}$$
Probability two 1$-targeting gamblers will draw by toss $2s+1$:
$$D_s(1,1) = \sum\limits_{t=0}^{s} z_t^2 = \sum\limits_{t=0}^{s} \left(\frac{{2t \choose t}}{(t+1)2^{2t+1}} \right)^2 = -1 + \frac{(4s+5) {2s+2 \choose s+1}^2}{2^{4s+4}}$$
The overall probability that the game ends in a draw:
$$D_{\infty}(1,1) = \frac{4}{\pi}-1$$
And so the probability the first gambler wins (by symmetry):
$$W_{\infty}(1,1) = \frac{1-D_{\infty}(1,1)}{2} = 1-\frac{2}{\pi}$$
The probability the 2$-targeting gambler will beat the 1$-targeting gambler by toss $2s+2$:
$$W_s(2,1) = \sum\limits_{t=0}^{s} a_t Z_t = \sum\limits_{t=0}^{s} \frac{{2t+1 \choose t}^2}{(t+2)2^{4t+2}} = -1 + (s+2)\frac{{2s+3 \choose s+1}^2}{2^{4s+4}}$$
$$W_{\infty}(2,1) = \frac{4}{\pi}-1$$
And since there is no scope for a draw (the 2$-targeting gambler can reach his target only on even tosses while the 1$-targeting gambler can reach his only on odd tosses), we can say:
$$W_{\infty}(1,2) = 1-W_{\infty}(2,1) = 2-\frac{4}{\pi}$$
Note that $W_{\infty}(1,2) = 2 W_{\infty}(1,1)$ and $D_{\infty}(1,1) = W_{\infty}(2,1)$.
Not sure why that is.
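These coincidences can at least be confirmed numerically by direct summation (a small Python sketch with exact rationals; the helper names mirror the notation above):

```python
from fractions import Fraction
from math import comb, pi

def z(t):  # PDF of the 1$-targeting gambler's stopping time
    return Fraction(comb(2*t, t), (t + 1) * 2**(2*t + 1))

def Z(t):  # survival function of the 1$-targeting gambler's stopping time
    return Fraction(comb(2*t + 1, t), 2**(2*t + 1))

def a(t):  # PDF of the 2$-targeting gambler's stopping time
    return Fraction(comb(2*t + 1, t), (t + 2) * 2**(2*t + 1))

N = 400
D11 = sum(z(t)**2 for t in range(N))      # two 1$ gamblers draw
W21 = sum(a(t) * Z(t) for t in range(N))  # 2$ gambler beats 1$ gambler
print(float(D11), float(W21), 4/pi - 1)   # all three agree to several digits
```

Both truncated sums land within about $10^{-3}$ of $\frac{4}{\pi}-1$, consistent with $D_{\infty}(1,1) = W_{\infty}(2,1)$.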
The probability that a 3$-targeting gambler will beat a 2$-targeting gambler by toss $2s+1$:
$$W_s(3,2) = \sum\limits_{t=0}^{s} b_t A_t = \sum\limits_{t=0}^{s} \frac{3}{3+t} \frac{{2t+2 \choose t}{2t+3 \choose t+1}}{2^{4t+5}}$$
Also note that:
$${2t+2 \choose t}{2t+3 \choose t+1} = \frac{2t+3}{t+1} {2t+2 \choose t}^2$$
Substituting this into the above, we get the partial summation:
$$ W_s(3,2) = -6 + \frac{(10 s^3 + 81s^2 +218s+195){2s+4 \choose s+1}^2}{(s+2)^2 2^{4s+7}}$$
So the overall probability that the 3$-targeting gambler beats the 2$-targeting gambler becomes:
$$W_{\infty}(3,2) = \frac{20}{\pi}-6$$
And this means the probability that the 2$-targeting gambler beats the 3$-targeting gambler becomes:
$$W_{\infty}(2,3) = 1-W_{\infty}(3,2) = 7- \frac{20}{\pi}$$
The probability that a 1$-targeting gambler will beat a 3$-targeting gambler by toss $2s+1$:
$$W_s(1,3) = \sum\limits_{t=0}^{s}z_t B_{t-1} = \sum\limits_{t=0}^{s}\frac{3t+4}{(t+1)^2} \frac{{2t \choose t}{2t+2 \choose t}}{2^{4t+3}} = 1 - (s^2+7s+12) \frac{{2s+2 \choose s+1}{2s+4 \choose s+1}}{3(2+s)2^{4s+5}}$$
And this makes his overall probability of winning:
$$W_{\infty}(1,3) = 1-\frac{2}{3 \pi}$$
Now let's find some more draw probabilities.
Probability that a 2$-targeting gambler will draw with another 2$-targeting gambler by toss $2s+2$:
$$D_s(2,2) = \sum\limits_{t=0}^{s} a_t^2 = \sum\limits_{t=0}^{s} \left(\frac{{2t+1 \choose t}}{(t+2)2^{2t+1}}\right)^2 = -5+(4s+9)\frac{{2s+3 \choose s+1}^2}{2^{4s+4}}$$
This means that the probability they will draw at all becomes:
$$D_{\infty}(2,2) = \frac{16}{\pi}-5$$
And the probability the first one wins becomes:
$$W_{\infty}(2,2) = \frac{1-D_{\infty}(2,2)}{2} = 3-\frac{8}{\pi}$$
The probability that a 3$-targeting gambler draws with another by toss $2s+3$:
$$D_s(3,3) = \sum\limits_{t=0}^{s} b_t^2 = \sum\limits_{t=0}^{s} \left(\frac{3}{t+3}\frac{{2t+2 \choose t}}{2^{2t+3}}\right)^2 = -25 + (236s^3 +1947 s^2+5314s+4803) \frac{{2s+4 \choose s+1}^2}{3 (s+2)^2 2^{4s+8}}$$
The probability that they will draw at all then becomes:
$$D_{\infty}(3,3) = \frac{236}{3 \pi} - 25$$
So the probability one of them wins becomes:
$$W_{\infty}(3,3) = \frac{1-D_{\infty}(3,3)}{2} = 13 - \frac{118}{3 \pi}$$
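The cubic-polynomial closed forms above are easy to get wrong by a stray factor, so here is a quick exact check of $D_s(3,3)$ (a Python sketch; it takes $b_t = \frac{3}{t+3}{2t+2 \choose t}2^{-(2t+3)}$, which gives $b_0 = 1/8$ as required):

```python
from fractions import Fraction
from math import comb, pi

def b(t):
    # PDF of the 3$-targeting gambler's stopping time: 3/(t+3) * C(2t+2, t) / 2^(2t+3)
    return Fraction(3 * comb(2*t + 2, t), (t + 3) * 2**(2*t + 3))

def D33(s):
    # quoted closed form for the partial draw probability of two 3$ gamblers
    num = 236*s**3 + 1947*s**2 + 5314*s + 4803
    return -25 + Fraction(num * comb(2*s + 4, s + 1)**2, 3 * (s + 2)**2 * 2**(4*s + 8))

assert all(sum(b(t)**2 for t in range(s + 1)) == D33(s) for s in range(30))
print(float(D33(300)), 236/(3*pi) - 25)  # both about 0.0403
```

The exact match for every tested $s$, plus the numerical limit, supports the $\frac{236}{3\pi}-25$ value.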
We have the identity,
$$
\begin{align}
\frac{k+1}{16^{k+1}}\binom{2k+2}{k+1}^2-\frac{k}{16^k}\binom{2k}{k}^2
&=\frac1{16}\frac{k+1}{16^k}\binom{2k}{k}^2\left(\frac{4k+2}{k+1}\right)^2-\frac{k}{16^k}\binom{2k}{k}^2\\
&=\frac1{16^k}\binom{2k}{k}^2\left(\frac{k+1}{16}\left(\frac{4k+2}{k+1}\right)^2-k\right)\\
&=\frac1{16^k}\binom{2k}{k}^2\frac1{4(k+1)}\tag1
\end{align}
$$
Applying $(1)$:
$$
\begin{align}
\sum_{k=0}^p\left(\frac{\binom{2k+1}{k}}{2^{2k+1}}\right)^2\frac1{k+2}
&=\sum_{k=1}^{p+1}\left(\frac{\binom{2k-1}{k-1}}{2^{2k-1}}\right)^2\frac1{k+1}\\
&=\sum_{k=1}^{p+1}\left(\frac{\binom{2k}{k}}{2^{2k}}\right)^2\frac1{k+1}\\
&=4\sum_{k=1}^{p+1}\left(\frac{k+1}{16^{k+1}}\binom{2k+2}{k+1}^2-\frac{k}{16^k}\binom{2k}{k}^2\right)\\
&=4\frac{p+2}{16^{p+2}}\binom{2p+4}{p+2}^2-1\\
&=\frac{p+2}{16^{p+1}}\binom{2p+3}{p+1}^2-1\tag2
\end{align}
$$
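Identity $(1)$ and the resulting telescoping can be sanity-checked with exact rationals (a minimal Python sketch; `u` is an illustrative name for the telescoped term $k\binom{2k}{k}^2/16^k$):

```python
from fractions import Fraction
from math import comb

def u(k):
    # u_k = k * C(2k, k)^2 / 16^k, the term being telescoped in identity (1)
    return Fraction(k * comb(2*k, k)**2, 16**k)

# identity (1): u_{k+1} - u_k = C(2k, k)^2 / (16^k * 4(k+1))
for k in range(60):
    assert u(k + 1) - u(k) == Fraction(comb(2*k, k)**2, 16**k * 4 * (k + 1))
```

Summing the right-hand side from $k=1$ to $p+1$ then collapses to $u_{p+2}-u_1$, which is exactly the step used above.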
Nice solution. +1. Is the identity you used a famous one? Is there a reference for it? How did you think about using it?
– Rohit Pandey, Dec 31 '18 at 3:51
It comes from looking at the ratio of $\frac{\binom{2k+2}{k+1}^2}{16^{k+1}}$ to $\frac{\binom{2k}{k}^2}{16^k}$, which is $\left(\frac{2k+1}{2k+2}\right)^2\sim1-\frac1{k+1}$. This means that $\frac{\binom{2k}{k}^2}{16^k}$ is decreasing. To make up for the rate of decrease, we can multiply by $k$, which applies a factor of $\frac{k+1}{k}$ to the ratio. The resulting ratio is $\left(\frac{2k+1}{2k+2}\right)^2\frac{k+1}{k}=1+\frac1{4k(k+1)}$. This allows us to sum $\frac{\binom{2k}{k}^2}{16^k}\frac1{k+1}$.
– robjohn♦, Dec 31 '18 at 13:25
If we multiply by $k+\frac14$, we apply a factor of $\frac{k+\frac54}{k+\frac14}$ to get a resultant ratio of $\left(\frac{2k+1}{2k+2}\right)^2\frac{k+\frac54}{k+\frac14}=1+\frac1{16(k+1)^2\left(k+\frac14\right)}$. This allows us to sum $\frac{\binom{2k}{k}^2}{16^k}\frac1{(k+1)^2}$.
– robjohn♦, Dec 31 '18 at 13:25
6 Answers
Let $a_t \triangleq \left( \frac{\binom{2t+1}{t}}{2^{2t+1}} \right)^2 \frac{1}{t+2}$. We need $S_p \triangleq \sum\limits_{t=0}^p a_t$ in closed form.
First, note that
$$\frac{a_{t+1}}{a_t} = \frac{t+2}{t+3} \cdot \frac{1}{2^4} \cdot \left( \frac{\binom{2t+3}{t+1}}{\binom{2t+1}{t}} \right)^2 = \frac{t+2}{t+3} \cdot \frac{1}{2^4} \cdot \left( \frac{(2t+3)(2t+2)}{(t+1)(t+2)} \right)^2 = \frac{(t+3/2)^2}{(t+2)(t+3)} \tag 1$$
Using $(1)$ repeatedly starting with $a_0 = 1/8$, we get:
$$a_t = \frac{1}{8} \cdot \frac{(3/2)^2}{2\cdot 3} \cdot \frac{(5/2)^2}{3\cdot 4} \cdots \frac{(t+1/2)^2}{(t+1)(t+2)} \quad \text{for } t\geq 1 \tag 2$$
Using $(2)$, we have
$$S_0 = a_0 = \frac{1}{8} = \frac{9}{8} - 1 = \bbox[yellow]{\frac{1}{8}\cdot 3^2 - 1}$$
$$S_1 = a_1 + a_0 = a_1 + S_0 = \frac{1}{8}\cdot\frac{(3/2)^2}{2\cdot 3} + \frac{3^2}{8} - 1 = \bbox[yellow]{\frac{1}{8}\cdot\frac{(3/2)^2}{2\cdot 3}\cdot 5^2 -1}$$
$$S_2 = a_2 + S_1 = \frac{1}{8} \cdot \frac{(3/2)^2}{2\cdot 3} \cdot \frac{(5/2)^2}{3\cdot 4} + \frac{5^2}{8}\cdot\frac{(3/2)^2}{2\cdot 3} -1 = \bbox[yellow]{\frac{1}{8} \cdot \frac{(3/2)^2}{2\cdot 3} \cdot \frac{(5/2)^2}{3\cdot 4}\cdot 7^2 - 1}$$
The general pattern we observe is (can be proved formally via induction)
$$
\begin{align}
S_p
&= \frac{1}{8}\cdot \frac{(3/2)^2}{2\cdot 3}\cdot \frac{(5/2)^2}{3\cdot 4} \cdots \frac{((2p+1)/2)^2}{(p+1)(p+2)}\cdot(2p+3)^2 -1 \\
&= \frac{1}{16}\cdot \frac{(3/2)^2}{3^2}\cdot \frac{(5/2)^2}{4^2} \cdots \frac{((2p+1)/2)^2}{(p+2)^2}\cdot(2p+3)^2\cdot(p+2) -1 \\
&= \frac{1}{2^{2p+4}}\cdot \frac{3^2}{3^2}\cdot \frac{5^2}{4^2} \cdots \frac{(2p+1)^2}{(p+2)^2}\cdot(2p+3)^2\cdot(p+2) -1 \\
&= \frac{p+2}{2^{2p+4}} \left[ \frac{3\cdot 5\cdot 7 \cdots (2p+1)(2p+3)}{2\cdot 3\cdot 4 \cdots (p+1)(p+2)}\cdot 2\right]^2 -1 \\
&= \frac{p+2}{2^{2p+4}} \left[ \frac{3\cdot 5\cdot 7 \cdots (2p+1)(2p+3)}{(p+2)!}\cdot\frac{(p+1)!\, 2^{p+1}}{(p+1)!\, 2^{p+1}}\cdot 2\right]^2 -1 \\
&= \frac{p+2}{2^{2p+4}} \left[ \frac{(2p+3)!}{(p+2)!\,(p+1)!}\cdot\frac{1}{2^{p}}\right]^2 -1 \\
&= \frac{p+2}{2^{4p+4}} \binom{2p+3}{p+1}^2 -1
\end{align}
$$
which is what we want.
Nice solution. (+1)
– Markus Scheuer, Dec 27 '18 at 23:26
answered Dec 27 '18 at 8:25 by apsad; edited Dec 30 '18 at 22:34 by Rohit Pandey
This answer is based upon the Gosper algorithm. It can also be used to solve structurally similar identities (see (4) below). We closely follow Summa Summarum by M. E. Larsen.
We set
\begin{align*}
\color{blue}{A(p):=\sum_{t=0}^p\frac{1}{t+2}\binom{2t+1}{t}^2\frac{1}{2^{4t+2}}}\tag{1}
\end{align*}
We rewrite $A(p)$ as follows
\begin{align*}
A(p)&=\sum_{t=0}^pa_t=a_0+a_1+a_2+\cdots+a_p\\
&=a_0+a_0\frac{a_1}{a_0}+a_0\frac{a_1a_2}{a_0a_1}+\cdots+a_0\frac{a_1a_2\cdots a_p}{a_0a_1\cdots a_{p-1}}\\
&=a_0\sum_{t=0}^p\prod_{j=0}^{t-1}\frac{a_{j+1}}{a_j}\tag{2}
\end{align*}
We obtain $a_0=\frac{1}{8}$ and
\begin{align*}
\frac{a_{j+1}}{a_j}&=\frac{j+2}{j+3}\cdot\frac{\binom{2j+3}{j+1}^2}{\binom{2j+1}{j}^2}\cdot\frac{2^{4j+2}}{2^{4j+6}}=\frac{(2j+3)^2}{4(j+2)(j+3)}\\
&=\frac{\left(-\frac{3}{2}-j\right)^2}{(-2-j)(-3-j)}
\end{align*}
In the following we use the falling factorial notation $x^{\underline{t}}=x(x-1)(x-2)\cdots(x-t+1)$.
From (1) and (2) we get
\begin{align*}
A(p)&=\frac{1}{8}\sum_{t=0}^{p}\prod_{j=0}^{t-1}\frac{\left(-\frac{3}{2}-j\right)^2}{(-2-j)(-3-j)}\\
&=\frac{1}{8}\sum_{t=0}^p\frac{\left(-\frac{3}{2}\right)^{\underline{t}}\left(-\frac{3}{2}\right)^{\underline{t}}}{(-2)^{\underline{t}}(-3)^{\underline{t}}}\tag{3}
\end{align*}
We consider Corollary 6.2 of Summa Summarum, which states for $a,b,c,d\in\mathbb{C}$ with $a+b=c+d$:
\begin{align*}
\sum_{t=0}^p\frac{a^{\underline{t}}b^{\underline{t}}}{(c-1)^{\underline{t}}(d-1)^{\underline{t}}}
=\frac{1}{(a-c)(b-c)}\left(\frac{a^{\underline{p+1}}b^{\underline{p+1}}}{(c-1)^{\underline{p}}(d-1)^{\underline{p}}}-cd\right)\tag{4}
\end{align*}
We can apply Corollary 6.2 since in (3) we have $a=b=-\frac{3}{2}, c=-1, d=-2$ so that $a+b=c+d$. We get
\begin{align*}
\color{blue}{A(p)}&=\frac{1}{8}\cdot\frac{1}{\left(-\frac{1}{2}\right)\left(-\frac{1}{2}\right)}
\left(\frac{\left(-\frac{3}{2}\right)^{\underline{p+1}}\left(-\frac{3}{2}\right)^{\underline{p+1}}}{(-2)^{\underline{p}}(-3)^{\underline{p}}}-(-1)(-2)\right)\\
&=\frac{1}{2}\cdot\frac{\left(-\frac{3}{2}\right)^{\underline{p+1}}\left(-\frac{3}{2}\right)^{\underline{p+1}}}{(-2)^{\underline{p}}(-3)^{\underline{p}}}-1\tag{5}\\
&=\frac{1}{2}\left(\frac{(2p+3)!}{2^{2p+2}(p+1)!}\right)^2\cdot\frac{1}{(p+1)!\,\frac{1}{2}(p+2)!}-1\\
&\,\,\color{blue}{=\frac{p+2}{2^{4p+4}}\binom{2p+3}{p+1}^2-1}
\end{align*}
and the claim follows.
Comment:
- In (5) we use
\begin{align*}
\left(-\frac{3}{2}\right)^{\underline{p+1}}&=\left(-\frac{3}{2}\right)\left(-\frac{5}{2}\right)\cdots\left(-\frac{3}{2}-p\right)\\
&=\frac{(-1)^{p+1}}{2^{p+1}}(2p+3)!!=\frac{(-1)^{p+1}}{2^{p+1}}\cdot\frac{(2p+3)!}{(2p+2)!!}=\frac{(-1)^{p+1}(2p+3)!}{2^{p+1}2^{p+1}(p+1)!}\\
&=\frac{(-1)^{p+1}(2p+3)!}{2^{2p+2}(p+1)!}\\
(-2)^{\underline{p}}&=(-2)(-3)\cdots(-2-(p-1))=(-1)^p(p+1)!\\
(-3)^{\underline{p}}&=(-3)(-4)\cdots(-3-(p-1))=(-1)^p\frac{1}{2}(p+2)!
\end{align*}
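Corollary 6.2 in the form $(4)$ can be sanity-checked numerically for the parameters used here (a Python sketch; `ff` is an illustrative helper computing the falling factorial $x^{\underline{t}}$):

```python
from fractions import Fraction

def ff(x, t):
    # falling factorial x^(underline t) = x (x-1) ... (x-t+1); ff(x, 0) = 1
    r = Fraction(1)
    for i in range(t):
        r *= x - i
    return r

a = b = Fraction(-3, 2)
c, d = Fraction(-1), Fraction(-2)
assert a + b == c + d  # hypothesis of Corollary 6.2

for p in range(30):
    lhs = sum(ff(a, t) * ff(b, t) / (ff(c - 1, t) * ff(d - 1, t))
              for t in range(p + 1))
    rhs = (ff(a, p + 1) * ff(b, p + 1) / (ff(c - 1, p) * ff(d - 1, p)) - c * d) \
          / ((a - c) * (b - c))
    assert lhs == rhs  # identity (4) holds exactly for these parameters
```

This checks only the instance $a=b=-\frac32$, $c=-1$, $d=-2$ needed above, not the corollary in general.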
Great to know about the Gosper algorithm and that there are several related methods for (in)definite sums of hypergeometric terms!
– apsad, Dec 28 '18 at 6:12

@apsad: Yes, I also think this technique is valuable. You are right, the calculations in the book are in terms of indefinite sums, which I tried to hide in my post. The author also explicitly tried to avoid hypergeometric sums, contrary to some other famous books around this theme.
– Markus Scheuer, Dec 28 '18 at 7:44

Thanks, very nice proof (+1). Similar in spirit to the one by @apsad, but uses the nice identity to make things super clean. Looked up Corollary 6.2 in the book (web.math.ku.dk/~mel/larsen.pdf) and it has a $\delta k$ added. Is that because those are indefinite summations? Will need to go back several chapters to figure out what exactly it means.
– Rohit Pandey, Dec 28 '18 at 9:25

@RohitPandey: You're welcome. Yes, the corollary uses indefinite summation. The connection with definite sums is given in (1.10). Thanks for the link to the book. :-)
– Markus Scheuer, Dec 28 '18 at 10:39
$begingroup$
This answer is based upon the Gosper algorithm. It can also be used to solve structural similar identities (see (4) below). We follow closely Summa Summarum by M.E. Larsen.
We set
begin{align*}
color{blue}{A(p):=sum_{t=0}^pfrac{1}{t+2}binom{2t+1}{t}^2frac{1}{2^{4t+2}}}tag{1}
end{align*}
We rewrite $A(p)$ as follows
begin{align*}
A(p)&=sum_{t=0}^pa_t=a_0+a_1+a_2+cdots+a_p\
&=a_0+a_0frac{a_1}{a_0}+a_0frac{a_1a_2}{a_0a_1}+cdots+a_0frac{a_1a_2cdots a_p}{a_0a_1cdots a_{p-1}}\
&=a_0sum_{t=0}^pprod_{j=0}^{t-1}frac{a_{j+1}}{a_j}tag{2}
end{align*}
We obtain $a_0=frac{1}{8}$ and
begin{align*}
frac{a_{j+1}}{a_j}&=frac{j+2}{j+3}cdotfrac{binom{2j+3}{j+1}^2}{binom{2j+1}{j}^2}cdotfrac{2^{4j+6}}{2^{4j+2}}=frac{(2j+3)^2}{4(j+2)(j+3)}\
&=frac{left(-frac{3}{2}-jright)^2}{(-2-j)(-3-j)}
end{align*}
In the following we use the falling factorial notation $x^{underline{t}}=x(x-1)(x-2)cdots(x-t+1)$.
From (1) and (2) we get
begin{align*}
A(p)&=frac{1}{8}sum_{t=0}^{p}prod_{j=0}^{t-1}frac{left(-frac{3}{2}-jright)^2}{(-2-j)(-3-j)}\
&=frac{1}{8}sum_{t=0}^pfrac{left(-frac{3}{2}right)^{underline{t}}left(-frac{3}{2}right)^{underline{t}}}{(-2)^{underline{t}}(-3)^{underline{t}}}tag{3}
end{align*}
We consider Corollary 6.2 of Summa Summarum which states for $a,b,c,dinmathbb{C}$ with $a+b=c+d$
begin{align*}
sum_{t=0}^pfrac{a^{underline{t}}b^{underline{t}}}{(c-1)^{underline{t}}(d-1)^{underline{t}}}
=frac{1}{(a-c)(b-c)}left(frac{a^{underline{p+1}}b^{underline{p+1}}}{(c-1)^{underline{p}}(d-1)^{underline{p}}}-cdright)tag{4}
end{align*}
We can apply Corollary 6.2 since in (3) we have $a=b=-frac{3}{2}, c=-1,d=-2$ so that $a+b=c+d$. We get
begin{align*}
color{blue}{A(p)}&=frac{1}{8}cdotfrac{1}{left(-frac{1}{2}right)left(-frac{1}{2}right)}
left(frac{left(-frac{3}{2}right)^{underline{p+1}}left(-frac{3}{2}right)^{underline{p+1}}}{(-2)^{underline{p}}(-3)^{underline{p}}}-(-1)(-2)right)\
&=frac{1}{2}cdotfrac{left(-frac{3}{2}right)^{underline{p+1}}left(-frac{3}{2}right)^{underline{p+1}}}{(-2)^{underline{p}}(-3)^{underline{p}}}-1tag{5}\
&=frac{1}{2}left(frac{(2p+3)!}{2^{2p+2}(p+1)!}right)^2cdotfrac{1}{(p+1)!frac{1}{2}(p+2)!}-1\
&,,color{blue}{=frac{p+2}{2^{4p+4}}binom{2p+3}{p+1}^2-1}
end{align*}
and the claim follows.
Comment:
- In (5) we use
begin{align*}
left(-frac{3}{2}right)^{underline{p+1}}&=left(-frac{3}{2}right)left(-frac{5}{2}right)cdotsleft(-frac{3}{2}-pright)\
&=frac{(-1)^{p+1}}{2^{p+1}}(2p+3)!!=frac{(-1)^{p+1}}{2^{p+1}}cdotfrac{(2p+3)!}{(2p+2)!!}=frac{(-1)^{p+1}(2p+3)!}{2^{p+1}2^{p+1}(p+1)!}\
&=frac{(-1)^{p+1}(2p+3)!}{2^{2p+2}(p+1)!}\
(-2)^{underline{p}}&=(-2)(-3)cdots(-2-(p+1))=(-1)^p(p+1)!\
(-3)^{underline{p}}&=(-3)(-4)cdots(-3-(p+1))=(-1)^pfrac{1}{2}(p+2)!
end{align*}
$endgroup$
$begingroup$
Great to know about Gosper algorithm and that there are several related methods for (in)definite sums of hypergeometric terms!
$endgroup$
– apsad
Dec 28 '18 at 6:12
$begingroup$
@apsad: Yes, I also think this technique is valuable. You are right the calculations in the book are in terms of indefinite sums, which I tried to hide in my post. The author also explicitly tried to avoid hypergeometric sums contrary to some other famous books around this theme.
$endgroup$
– Markus Scheuer
Dec 28 '18 at 7:44
$begingroup$
Thank, very nice proof (+1). Similar in spirit to the one by @apsad, but uses the nice identity to make things super clean. Looked up corollary 6.2 in the book (web.math.ku.dk/~mel/larsen.pdf) and it has a $delta k$ added. Is that because those are indefinite summations? Will need to go back several chapters to figure out what exactly it means.
$endgroup$
– Rohit Pandey
Dec 28 '18 at 9:25
$begingroup$
@RohitPandey:You're welcome. Yes, the corollary uses indefinite summation. The connection with definite sums is given in (1.10). Thanks for the link to the book. :-)
$endgroup$
– Markus Scheuer
Dec 28 '18 at 10:39
add a comment |
$begingroup$
This answer is based upon the Gosper algorithm. It can also be used to solve structural similar identities (see (4) below). We follow closely Summa Summarum by M.E. Larsen.
We set
begin{align*}
color{blue}{A(p):=sum_{t=0}^pfrac{1}{t+2}binom{2t+1}{t}^2frac{1}{2^{4t+2}}}tag{1}
end{align*}
We rewrite $A(p)$ as follows
begin{align*}
A(p)&=sum_{t=0}^pa_t=a_0+a_1+a_2+cdots+a_p\
&=a_0+a_0frac{a_1}{a_0}+a_0frac{a_1a_2}{a_0a_1}+cdots+a_0frac{a_1a_2cdots a_p}{a_0a_1cdots a_{p-1}}\
&=a_0sum_{t=0}^pprod_{j=0}^{t-1}frac{a_{j+1}}{a_j}tag{2}
end{align*}
We obtain $a_0=frac{1}{8}$ and
begin{align*}
frac{a_{j+1}}{a_j}&=frac{j+2}{j+3}cdotfrac{binom{2j+3}{j+1}^2}{binom{2j+1}{j}^2}cdotfrac{2^{4j+6}}{2^{4j+2}}=frac{(2j+3)^2}{4(j+2)(j+3)}\
&=frac{left(-frac{3}{2}-jright)^2}{(-2-j)(-3-j)}
end{align*}
In the following we use the falling factorial notation $x^{underline{t}}=x(x-1)(x-2)cdots(x-t+1)$.
From (1) and (2) we get
begin{align*}
A(p)&=frac{1}{8}sum_{t=0}^{p}prod_{j=0}^{t-1}frac{left(-frac{3}{2}-jright)^2}{(-2-j)(-3-j)}\
&=frac{1}{8}sum_{t=0}^pfrac{left(-frac{3}{2}right)^{underline{t}}left(-frac{3}{2}right)^{underline{t}}}{(-2)^{underline{t}}(-3)^{underline{t}}}tag{3}
end{align*}
We consider Corollary 6.2 of Summa Summarum which states for $a,b,c,dinmathbb{C}$ with $a+b=c+d$
begin{align*}
sum_{t=0}^pfrac{a^{underline{t}}b^{underline{t}}}{(c-1)^{underline{t}}(d-1)^{underline{t}}}
=frac{1}{(a-c)(b-c)}left(frac{a^{underline{p+1}}b^{underline{p+1}}}{(c-1)^{underline{p}}(d-1)^{underline{p}}}-cdright)tag{4}
end{align*}
We can apply Corollary 6.2 since in (3) we have $a=b=-frac{3}{2}, c=-1,d=-2$ so that $a+b=c+d$. We get
begin{align*}
color{blue}{A(p)}&=frac{1}{8}cdotfrac{1}{left(-frac{1}{2}right)left(-frac{1}{2}right)}
left(frac{left(-frac{3}{2}right)^{underline{p+1}}left(-frac{3}{2}right)^{underline{p+1}}}{(-2)^{underline{p}}(-3)^{underline{p}}}-(-1)(-2)right)\
&=frac{1}{2}cdotfrac{left(-frac{3}{2}right)^{underline{p+1}}left(-frac{3}{2}right)^{underline{p+1}}}{(-2)^{underline{p}}(-3)^{underline{p}}}-1tag{5}\
&=frac{1}{2}left(frac{(2p+3)!}{2^{2p+2}(p+1)!}right)^2cdotfrac{1}{(p+1)!frac{1}{2}(p+2)!}-1\
&,,color{blue}{=frac{p+2}{2^{4p+4}}binom{2p+3}{p+1}^2-1}
end{align*}
and the claim follows.
Comment:
- In (5) we use
begin{align*}
left(-frac{3}{2}right)^{underline{p+1}}&=left(-frac{3}{2}right)left(-frac{5}{2}right)cdotsleft(-frac{3}{2}-pright)\
&=frac{(-1)^{p+1}}{2^{p+1}}(2p+3)!!=frac{(-1)^{p+1}}{2^{p+1}}cdotfrac{(2p+3)!}{(2p+2)!!}=frac{(-1)^{p+1}(2p+3)!}{2^{p+1}2^{p+1}(p+1)!}\
&=frac{(-1)^{p+1}(2p+3)!}{2^{2p+2}(p+1)!}\
(-2)^{underline{p}}&=(-2)(-3)cdots(-2-(p+1))=(-1)^p(p+1)!\
(-3)^{underline{p}}&=(-3)(-4)cdots(-3-(p+1))=(-1)^pfrac{1}{2}(p+2)!
end{align*}
$endgroup$
This answer is based upon the Gosper algorithm. It can also be used to solve structural similar identities (see (4) below). We follow closely Summa Summarum by M.E. Larsen.
We set
begin{align*}
color{blue}{A(p):=sum_{t=0}^pfrac{1}{t+2}binom{2t+1}{t}^2frac{1}{2^{4t+2}}}tag{1}
end{align*}
We rewrite $A(p)$ as follows
begin{align*}
A(p)&=sum_{t=0}^pa_t=a_0+a_1+a_2+cdots+a_p\
&=a_0+a_0frac{a_1}{a_0}+a_0frac{a_1a_2}{a_0a_1}+cdots+a_0frac{a_1a_2cdots a_p}{a_0a_1cdots a_{p-1}}\
&=a_0sum_{t=0}^pprod_{j=0}^{t-1}frac{a_{j+1}}{a_j}tag{2}
end{align*}
We obtain $a_0=frac{1}{8}$ and
begin{align*}
frac{a_{j+1}}{a_j}&=frac{j+2}{j+3}cdotfrac{binom{2j+3}{j+1}^2}{binom{2j+1}{j}^2}cdotfrac{2^{4j+6}}{2^{4j+2}}=frac{(2j+3)^2}{4(j+2)(j+3)}\
&=frac{left(-frac{3}{2}-jright)^2}{(-2-j)(-3-j)}
end{align*}
In the following we use the falling factorial notation $x^{underline{t}}=x(x-1)(x-2)cdots(x-t+1)$.
From (1) and (2) we get
begin{align*}
A(p)&=frac{1}{8}sum_{t=0}^{p}prod_{j=0}^{t-1}frac{left(-frac{3}{2}-jright)^2}{(-2-j)(-3-j)}\
&=frac{1}{8}sum_{t=0}^pfrac{left(-frac{3}{2}right)^{underline{t}}left(-frac{3}{2}right)^{underline{t}}}{(-2)^{underline{t}}(-3)^{underline{t}}}tag{3}
end{align*}
We consider Corollary 6.2 of Summa Summarum which states for $a,b,c,dinmathbb{C}$ with $a+b=c+d$
begin{align*}
sum_{t=0}^pfrac{a^{underline{t}}b^{underline{t}}}{(c-1)^{underline{t}}(d-1)^{underline{t}}}
=frac{1}{(a-c)(b-c)}left(frac{a^{underline{p+1}}b^{underline{p+1}}}{(c-1)^{underline{p}}(d-1)^{underline{p}}}-cdright)tag{4}
end{align*}
We can apply Corollary 6.2 since in (3) we have $a=b=-frac{3}{2}, c=-1,d=-2$ so that $a+b=c+d$. We get
begin{align*}
color{blue}{A(p)}&=frac{1}{8}cdotfrac{1}{left(-frac{1}{2}right)left(-frac{1}{2}right)}
left(frac{left(-frac{3}{2}right)^{underline{p+1}}left(-frac{3}{2}right)^{underline{p+1}}}{(-2)^{underline{p}}(-3)^{underline{p}}}-(-1)(-2)right)\
&=frac{1}{2}cdotfrac{left(-frac{3}{2}right)^{underline{p+1}}left(-frac{3}{2}right)^{underline{p+1}}}{(-2)^{underline{p}}(-3)^{underline{p}}}-1tag{5}\
&=frac{1}{2}left(frac{(2p+3)!}{2^{2p+2}(p+1)!}right)^2cdotfrac{1}{(p+1)!frac{1}{2}(p+2)!}-1\
&,,color{blue}{=frac{p+2}{2^{4p+4}}binom{2p+3}{p+1}^2-1}
end{align*}
and the claim follows.
Comment:
- In (5) we use
begin{align*}
left(-frac{3}{2}right)^{underline{p+1}}&=left(-frac{3}{2}right)left(-frac{5}{2}right)cdotsleft(-frac{3}{2}-pright)\
&=frac{(-1)^{p+1}}{2^{p+1}}(2p+3)!!=frac{(-1)^{p+1}}{2^{p+1}}cdotfrac{(2p+3)!}{(2p+2)!!}=frac{(-1)^{p+1}(2p+3)!}{2^{p+1}2^{p+1}(p+1)!}\
&=frac{(-1)^{p+1}(2p+3)!}{2^{2p+2}(p+1)!}\
(-2)^{underline{p}}&=(-2)(-3)cdots(-2-(p+1))=(-1)^p(p+1)!\
(-3)^{underline{p}}&=(-3)(-4)cdots(-3-(p+1))=(-1)^pfrac{1}{2}(p+2)!
end{align*}
edited Dec 29 '18 at 22:17
answered Dec 27 '18 at 23:25


Markus ScheuerMarkus Scheuer
61.2k456145
61.2k456145
$begingroup$
Great to know about Gosper algorithm and that there are several related methods for (in)definite sums of hypergeometric terms!
$endgroup$
– apsad
Dec 28 '18 at 6:12
$begingroup$
@apsad: Yes, I also think this technique is valuable. You are right the calculations in the book are in terms of indefinite sums, which I tried to hide in my post. The author also explicitly tried to avoid hypergeometric sums contrary to some other famous books around this theme.
$endgroup$
– Markus Scheuer
Dec 28 '18 at 7:44
$begingroup$
Thank, very nice proof (+1). Similar in spirit to the one by @apsad, but uses the nice identity to make things super clean. Looked up corollary 6.2 in the book (web.math.ku.dk/~mel/larsen.pdf) and it has a $delta k$ added. Is that because those are indefinite summations? Will need to go back several chapters to figure out what exactly it means.
$endgroup$
– Rohit Pandey
Dec 28 '18 at 9:25
$begingroup$
@RohitPandey:You're welcome. Yes, the corollary uses indefinite summation. The connection with definite sums is given in (1.10). Thanks for the link to the book. :-)
$endgroup$
– Markus Scheuer
Dec 28 '18 at 10:39
$begingroup$
Great to know about Gosper algorithm and that there are several related methods for (in)definite sums of hypergeometric terms!
$endgroup$
– apsad
Dec 28 '18 at 6:12
add a comment |
$begingroup$
As Steve Kass commented, there is a problem somewhere.
Using a CAS, what I obtained after simplifications is
$$sum_{l=1}^{t-1}b_l=frac{7}{8}+frac{(t+3)(3t+4)}{3(t+1),2^{2(t+1)}}left(binom{2t+2}{t-1}-binom{2t+2}{t}right)=frac{7}{8}-frac{(3t+4),Gamma left(t+frac{3}{2}right)}{sqrt{pi },,Gamma (t+3)}$$ The partial sum
$$S_p = sum_{t=1}^{p} left(sum_{l=1}^{t-1}b_lright) a_t$$ is evaluated (the result is very nasty but "almost" explicit) but, given by the same CAS,
$$S_infty =frac{20}{pi }-frac{195}{32}$$ is obtained without any problem (how? this is the question).
Edit
After your two major changes (summations starting at $0$ and the formula for $S$), the results become quite different.
$$sum_{l=0}^{t-1}b_l=1-frac{(3 t+4) Gamma left(t+frac{3}{2}right)}{sqrt{pi } ,,Gamma (t+3)}$$
$$S_p = sum_{t=0}^{p} left(1-sum_{l=0}^{t-1}b_lright) a_t=7-frac{4 (5 p+12) ,Gamma left(p+frac{5}{2}right)^2}{pi , Gamma (p+3)^2}$$
Using the Stirling approximation for large values of $p$, we have
$$S_p=7-frac{20}pileft(1+frac{3}{20 p}-frac{59}{160 p^2}+frac{573}{640 p^3}-frac{21977}{10240 p^4}+Oleft(frac{1}{p^5}right)right)$$
$$S_infty =7-frac{20}{pi }$$
$endgroup$
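The partial sums above can be spot-checked numerically. A minimal Python sketch (the helper names are mine), using the $a_t$ and $b_t$ defined in the question:

```python
from math import comb, pi

def a(t):
    # PDF of the 2$-targeting gambler's stopping time, from the question
    return (comb(2*t + 1, t) - (comb(2*t + 1, t - 1) if t >= 1 else 0)) / 2**(2*t + 2)

def b(t):
    # PDF of the 3$-targeting gambler's stopping time, from the question
    return (comb(2*t + 2, t) - (comb(2*t + 2, t - 1) if t >= 1 else 0)) / 2**(2*t + 3)

def S(p):
    # Partial sum S_p = sum_{t=0}^p (1 - sum_{l=0}^{t-1} b_l) a_t
    total, b_cum = 0.0, 0.0
    for t in range(p + 1):
        total += (1 - b_cum) * a(t)
        b_cum += b(t)
    return total

# S_p should creep up toward 7 - 20/pi ~ 0.63380
print(S(2000), 7 - 20/pi)
```

The convergence is slow (the error decays roughly like $3/(pi p)$, consistent with the asymptotic expansion above), so thousands of terms are needed for a few digits.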
$begingroup$
But you can see the python code.. it implements exactly the equation (1) I described above. I don't have access to your CAS, so can you possibly debug by checking the actual sequences and comparing with the ones in the python snippet?
$endgroup$
– Rohit Pandey
Dec 23 '18 at 8:09
$begingroup$
And in any case, the core question is how the summation is translating to the $frac{20}{pi}$ term.
$endgroup$
– Rohit Pandey
Dec 23 '18 at 8:11
$begingroup$
You can get a little help towards gamma function expressions if you parametrize part of the sum. There are supposedly ways to see what Mathematica is doing with Trace or TraceInternal, but I’ve never tried. i.stack.imgur.com/lSL7c.png i.stack.imgur.com/FMnVW.png ($a$ wasn’t the best parameter name, but it works.) A different CAS might be helpful, per Claude’s comment.
$endgroup$
– Steve Kass
Dec 24 '18 at 22:35
$begingroup$
Within the OEIS entry for the squares of Catalan numbers (oeis.org/A001246) is this identity: $$sum _{t=0}^{infty } frac{binom{2 t}{t}^2}{2^{4 t} (t+1)}={4overpi}.$$ That identity and a bit more can be found here (scroll down just a bit from “Similar Series”): en.wikipedia.org/wiki/…. Your result might follow from this.
$endgroup$
– Steve Kass
Dec 25 '18 at 19:52
$begingroup$
Ahhah.. very interesting. Will try to read through and prove it but should be just a matter of time now. Also interesting to note that if we change the $(t+1)$ in the denominator with $(t+1)^2$, we get $4(frac{4}{pi}-1)$. @SteveKass: You have been immensely helpful in this endeavor. Please add a brief answer with these resources and I'll happily award you the 50 point bounty.
$endgroup$
– Rohit Pandey
Dec 25 '18 at 23:31
|
show 7 more comments
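The Catalan-square identity quoted in the comment above is easy to check numerically; a short sketch (the function name is mine):

```python
from math import comb, pi

def catalan_square_sum(n):
    # Partial sum of sum_{t>=0} C(2t, t)^2 / (2^(4t) (t+1)),
    # which the OEIS comment says converges to 4/pi
    return sum(comb(2*t, t)**2 / (2**(4*t) * (t + 1)) for t in range(n + 1))

print(catalan_square_sum(4000), 4/pi)
```

The terms decay like $1/(pi t^2)$, so the tail after $n$ terms is about $1/(pi n)$.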
edited Dec 24 '18 at 4:02
answered Dec 23 '18 at 8:02
Claude Leibovici
$begingroup$
The basic idea of this question was to understand how $pi$ was coming into these expressions for competing gambler problems. So, I decided to tackle a slightly simpler problem. Instead of finding the probability a 2$ chasing gambler would beat a 3$ chasing gambler, I tried to find the probability that a 2$ chasing gambler would beat a 1$ chasing gambler. The answer here is $4/pi-1$.
First, I wrote the PDF of the stopping time of the 1$ chasing gambler in a simplified form:
$$z_t = frac{2t choose t}{(t+1)2^{2t+1}}$$
Then using Mathematica, I got the survival function (there is also an elegant way to get it from "scratch"):
$$Z_t = sumlimits_{l=t+1}^{infty}z_l = frac{2t+1 choose t}{2^{2t+1}}$$
Similarly, the PDF of the stopping time of the 2$ chasing gambler becomes:
$$a_t = frac{2t+1 choose t}{(t+2)2^{2t+1}}$$
The probability the 2$ targeting gambler will beat the 1$ targeting gambler in $2p+2$ tosses then becomes:
$$S_p = sumlimits_{t=0}^{p} a_t Z_t = sumlimits_{t=0}^{p} frac{{2t+1 choose t}^2}{(t+2)2^{4t+2}}$$
Now, this is the part that I haven't figured out how to do without Mathematica. The expression for $S_p$ becomes:
$$S_p = -1 + (p+2)frac{{2p+3 choose p+1}^2}{2^{4p+4}} tag{2}$$
Now, using the Stirling approximation for the factorials and letting $p to infty$ we get:
$$S_{infty} = frac{4}{pi}-1$$
Everything is fine apart from equation (2), which I don't know how to prove. That is why I added an update at the very top of the question asking for help with proving this expression.
But, I'm starting to get a rough inkling for where these $pi$'s are coming from. Basically, all these expressions describing the PDFs and survival functions of the gamblers involve binomial terms close to the mode.
The probabilities we're after (one of the gamblers beating the other) involve two of these binomial terms close to their mode multiplied over all possible $t$ up to $p$. These summations tend to produce expressions that also involve squares of binomial terms close to the mode, this time involving $p$. And this is the part I need help with. I don't understand why this happens.
However, once we accept that, we'll be left with some term like equation (2), which involves a binomial coefficient around the mode for large $p$. As $p$ becomes large, this binomial distribution starts to resemble a Gaussian (by the central limit theorem). And we know that the Gaussian around its mode has a PDF with a $frac{1}{sqrt{pi}}$ term. So, squaring it yields a $frac{1}{pi}$ term.
$endgroup$
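One way to attack equation (2) without a CAS is by induction: check the base case $p=0$ and then verify that the difference of the closed form at $p$ and $p-1$ equals the $p$-th summand. A sketch of that telescoping check in exact rational arithmetic (function names are mine):

```python
from fractions import Fraction
from math import comb

def term(t):
    # Summand of S_p: C(2t+1, t)^2 / ((t+2) 2^(4t+2))
    return Fraction(comb(2*t + 1, t)**2, (t + 2) * 2**(4*t + 2))

def F(p):
    # Claimed closed form (2): -1 + (p+2) C(2p+3, p+1)^2 / 2^(4p+4)
    return -1 + Fraction((p + 2) * comb(2*p + 3, p + 1)**2, 2**(4*p + 4))

# Base case plus telescoping: together these give identity (2) by induction.
assert F(0) == term(0)
assert all(F(p) - F(p - 1) == term(p) for p in range(1, 60))
```

The difference $F(p)-F(p-1)$ simplifies, after using $binom{2p+3}{p+1} = frac{2(2p+3)}{p+2}binom{2p+1}{p}$, to exactly $frac{binom{2p+1}{p}^2}{(p+2)2^{4p+2}}$, which is the induction step.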
edited Dec 25 '18 at 10:50
answered Dec 25 '18 at 10:39
Rohit Pandey
add a comment |
$begingroup$
Since there was some interest in this question on this and other forums, I wanted this page to become a wiki for these kinds of wealthy gambler race problems. So here are some expressions.
For a gambler targeting 1$:
- PDF of stopping time (probability he will reach 1$ on toss $2t+1$ where $t$ is the number of tails):
$$z_t = frac{2t choose t}{(t+1)2^{2t+1}}$$
- Survival function of stopping time (probability that he will reach 1$ after toss $2t+1$):
$$Z_t = frac{2t+1 choose t}{2^{2t+1}}$$
For a gambler targeting 2$:
- PDF of stopping time:
$$a_t = frac{2t+1 choose t}{(t+2)2^{2t+1}}$$
- Survival function of stopping time:
$$A_t = frac{2t+3 choose t+1}{2^{2t+2}}$$
For a gambler targeting 3$:
- PDF of stopping time:
$$b_t = frac{3}{3+t}{2t+2 choose t}frac{1}{2^{2t+3}}$$
- Survival function of stopping time:
$$B_t = frac{(3t+7)(2t+3)}{(t+1)(t+3)} frac{2t+2 choose t}{2^{2t+3}}$$
Probability two 1$ targeting gamblers will draw by toss $2s+1$:
$$D_s(1,1) = sumlimits_{t=0}^{s} z_t^2 = sumlimits_{t=0}^{s} left(frac{2t choose t}{(t+1)2^{2t+1}} right)^2 = -1 + frac{(4s+5) {2s+2 choose s+1}^2}{2^{4s+4}}$$
The overall probability that the game ends in a draw:
$$D_{infty}(1,1) = frac{4}{pi}-1$$
And so the probability the first gambler wins (by symmetry):
$$W_{infty}(1,1) = frac{1-D_{infty}(1,1)}{2} = 1-frac{2}{pi}$$
The probability the 2$ targeting gambler will beat the 1$ targeting gambler by toss $2s+2$:
$$W_s(2,1) = sumlimits_{t=0}^{s} a_t Z_t = sumlimits_{t=0}^{s} frac{{2t+1 choose t}^2}{(t+2)2^{4t+2}}$$
$$=-1 + (s+2)frac{{2s+3 choose s+1}^2}{2^{4s+4}}$$
$$W_{infty}(2,1) = frac{4}{pi}-1$$
And since there is no scope for a draw (the 2$ targeting gambler reaches his target only on even tosses while the 1$ targeting gambler reaches his only on odd tosses), we can say:
$$W_{infty}(1,2) = 1-W_{infty}(2,1) = 2-frac{4}{pi}$$
Note that $W_{infty}(1,2) = 2 W_{infty}(1,1)$ and $D_{infty}(1,1) = W_{infty}(2,1)$. Not sure why that is.
The probability that a 3$ targeting gambler will beat a 2$ targeting gambler by toss $2s+1$:
$$W_s(3,2) = sumlimits_{t=0}^{s} b_t A_t = sumlimits_{t=0}^{s} frac{3}{3+t} frac{{2t+2 choose t}{2t+3 choose t+1}}{2^{4t+5}}$$
Also note that:
$${2t+2 choose t}{2t+3 choose t+1} = frac{2t+3}{t+1} {2t+2 choose t}^2$$
Substituting this into the above, we get the partial summation:
$$ W_s(3,2) = -6 + frac{(10 s^3 + 81s^2 +218s+195){2s+4 choose s+1}^2}{(s+2)^2 2^{4s+7}}$$
So the overall probability that the 3$ targeting gambler beats the 2$ targeting gambler becomes:
$$W_{infty}(3,2) = frac{20}{pi}-6$$
And this means the probability that the 2$ targeting gambler beats the 3$ targeting gambler becomes:
$$W_{infty}(2,3) = 1-W_{infty}(3,2) = 7- frac{20}{pi}$$
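The $W_s(3,2)$ closed form above can be verified exactly against a direct summation of $b_t A_t$, using $b_t$ as defined in the original question and $A_t$ as above. A sketch with exact rational arithmetic (helper names are mine):

```python
from fractions import Fraction
from math import comb, pi

def b(t):
    # 3$-gambler stopping-time PDF, from the question's definition
    return Fraction(comb(2*t + 2, t) - (comb(2*t + 2, t - 1) if t else 0), 2**(2*t + 3))

def A(t):
    # 2$-gambler survival function
    return Fraction(comb(2*t + 3, t + 1), 2**(2*t + 2))

def W_closed(s):
    # Claimed closed form for W_s(3,2)
    num = (10*s**3 + 81*s**2 + 218*s + 195) * comb(2*s + 4, s + 1)**2
    return -6 + Fraction(num, (s + 2)**2 * 2**(4*s + 7))

# Exact agreement for many partial sums, and the limit 20/pi - 6 numerically
assert all(sum(b(t) * A(t) for t in range(s + 1)) == W_closed(s) for s in range(40))
print(float(W_closed(2000)), 20/pi - 6)
```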
The probability that a 1$ targeting gambler will beat a 3$ targeting gambler by toss $2s+1$:
$$W_s(1,3) = sumlimits_{t=0}^{s}z_t B_{t-1} = sumlimits_{t=0}^{s}frac{3t+4}{(t+1)^2} frac{{2t choose t}{2t+2 choose t}}{2^{4t+3}}$$
$$ = 1 - (s^2+7s+12) frac{{2s+2 choose s+1}{2s+4 choose s+1}}{3(2+s)2^{4s+5}}$$
And this makes his overall probability of winning:
$$W_{infty}(1,3) = 1-frac{2}{3 pi}$$
Now let's find some more draw probabilities:
Probability that a 2$ targeting gambler will draw with another 2$ targeting gambler by toss $2s+2$:
$$D_s(2,2) = sumlimits_{t=0}^{s} a_t^2 = sumlimits_{t=0}^{s} left(frac{2t+1 choose t}{(t+2)2^{2t+1}}right)^2 = -5+4(4s+9)frac{{2s+3 choose s+1}^2}{2^{4s+6}}$$
This means that the probability they will draw period becomes:
$$D_{infty}(2,2) = frac{16}{pi}-5$$
And the probability the first one wins becomes:
$$W_{infty}(2,2) = frac{1-D_{infty}(2,2)}{2} = 3-frac{8}{pi}$$
The probability that a 3$ targeting gambler draws with another by toss $2s+3$:
$$D_s(3,3) = sumlimits_{t=0}^{s} b_t^2 = sumlimits_{t=0}^{s} left(frac{3}{3+t}frac{2t+2 choose t}{2^{2t+3}}right)^2$$
$$ = -25 + (236s^3 +1947 s^2+5314s+4803) frac{{2s+4 choose s+1}^2}{3 (s+2)^2 2^{4s+8}}$$
The probability that they will draw period then becomes:
$$D_infty(3,3) = frac{236}{3 pi} - 25$$
So the probability one of them wins becomes:
$$W_infty(3,3) = frac{1-D_infty(3,3)}{2} = 13 - frac{118}{3 pi}$$
$endgroup$
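The draw-probability closed forms in this wiki can likewise be spot-checked with exact rational arithmetic. A minimal Python sketch for $D_s(1,1)$ and $D_s(2,2)$ (helper names are mine; the exact check requires the $2^{4s+6}$ denominator in $D_s(2,2)$):

```python
from fractions import Fraction
from math import comb, pi

def z(t):
    # 1$-gambler stopping-time PDF
    return Fraction(comb(2*t, t), (t + 1) * 2**(2*t + 1))

def a(t):
    # 2$-gambler stopping-time PDF
    return Fraction(comb(2*t + 1, t), (t + 2) * 2**(2*t + 1))

def D11(s):
    # Closed form for the draw probability of two 1$-targeting gamblers
    return -1 + Fraction((4*s + 5) * comb(2*s + 2, s + 1)**2, 2**(4*s + 4))

def D22(s):
    # Closed form for the draw probability of two 2$-targeting gamblers
    return -5 + Fraction(4 * (4*s + 9) * comb(2*s + 3, s + 1)**2, 2**(4*s + 6))

assert all(sum(z(t)**2 for t in range(s + 1)) == D11(s) for s in range(40))
assert all(sum(a(t)**2 for t in range(s + 1)) == D22(s) for s in range(40))
print(float(D11(3000)), 4/pi - 1)    # overall draw probability 4/pi - 1
print(float(D22(3000)), 16/pi - 5)   # overall draw probability 16/pi - 5
```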
add a comment |
edited Jan 27 at 2:48
answered Dec 27 '18 at 9:13


Rohit Pandey
$begingroup$
We have the identity:
$$
\begin{align}
\frac{k+1}{16^{k+1}}\binom{2k+2}{k+1}^2-\frac{k}{16^k}\binom{2k}{k}^2
&=\frac1{16}\frac{k+1}{16^k}\binom{2k}{k}^2\left(\frac{4k+2}{k+1}\right)^2-\frac{k}{16^k}\binom{2k}{k}^2\\
&=\frac1{16^k}\binom{2k}{k}^2\left(\frac{k+1}{16}\left(\frac{4k+2}{k+1}\right)^2-k\right)\\
&=\frac1{16^k}\binom{2k}{k}^2\frac1{4(k+1)}\tag1
\end{align}
$$
Applying $(1)$:
$$
\begin{align}
\sum_{k=0}^p\left(\frac{\binom{2k+1}{k}}{2^{2k+1}}\right)^2\frac1{k+2}
&=\sum_{k=1}^{p+1}\left(\frac{\binom{2k-1}{k-1}}{2^{2k-1}}\right)^2\frac1{k+1}\\
&=\sum_{k=1}^{p+1}\left(\frac{\binom{2k}{k}}{2^{2k}}\right)^2\frac1{k+1}\\
&=4\sum_{k=1}^{p+1}\left(\frac{k+1}{16^{k+1}}\binom{2k+2}{k+1}^2-\frac{k}{16^k}\binom{2k}{k}^2\right)\\
&=4\frac{p+2}{16^{p+2}}\binom{2p+4}{p+2}^2-1\\
&=\frac{p+2}{16^{p+1}}\binom{2p+3}{p+1}^2-1\tag2
\end{align}
$$
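Since every step above is exact, identity $(2)$ can be verified for small $p$ with rational arithmetic rather than floating point; a minimal sketch (function names are mine):

```python
from fractions import Fraction
from math import comb

def lhs(p):
    # left side of (2): sum over k of (C(2k+1,k) / 2^(2k+1))^2 / (k+2)
    return sum(Fraction(comb(2*k + 1, k), 2**(2*k + 1))**2 / (k + 2)
               for k in range(p + 1))

def rhs(p):
    # right side of (2): (p+2) C(2p+3, p+1)^2 / 16^(p+1)  -  1
    return Fraction(p + 2, 16**(p + 1)) * comb(2*p + 3, p + 1)**2 - 1

for p in range(25):
    assert lhs(p) == rhs(p)  # exact equality of rationals, no rounding
```

For instance, at $p=0$ both sides equal $\frac18$, matching the first term $a_0 Z_0$ of the sum.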
$endgroup$
$begingroup$
Nice solution. +1. Is the identity you used a famous one? Is there a reference for it? How did you think about using it?
$endgroup$
– Rohit Pandey
Dec 31 '18 at 3:51
$begingroup$
It comes from looking at the ratio of $\frac{\binom{2k+2}{k+1}^2}{16^{k+1}}$ to $\frac{\binom{2k}{k}^2}{16^k}$, which is $\left(\frac{2k+1}{2k+2}\right)^2\sim1-\frac1{k+1}$. This means that $\frac{\binom{2k}{k}^2}{16^k}$ is decreasing. To make up for the rate of decrease, we can multiply by $k$, which applies a factor of $\frac{k+1}k$ to the ratio. The resulting ratio is $\left(\frac{2k+1}{2k+2}\right)^2\frac{k+1}k=1+\frac1{4k(k+1)}$. This allows us to sum $\frac{\binom{2k}{k}^2}{16^k}\frac1{k+1}$.
$endgroup$
– robjohn♦
Dec 31 '18 at 13:25
$begingroup$
If we multiply by $k+\frac14$, we apply a factor of $\frac{k+\frac54}{k+\frac14}$ to get a resultant ratio of $\left(\frac{2k+1}{2k+2}\right)^2\frac{k+\frac54}{k+\frac14}=1+\frac1{16(k+1)^2\left(k+\frac14\right)}$. This allows us to sum $\frac{\binom{2k}{k}^2}{16^k}\frac1{(k+1)^2}$.
$endgroup$
– robjohn♦
Dec 31 '18 at 13:25
answered Dec 29 '18 at 17:28
robjohn♦
$begingroup$
This game can be considered a random walk in the plane beginning at $(-2,-3)$, with diagonal steps $(\pm1,\pm1)$. It ends when the walk hits a coordinate axis, and you want the probability of escape from the first quadrant at $(0,k)$ with $k>0$. There is a fair bit of literature on escape probabilities for random walks (not my specialty!), and maybe this framing is helpful. I found one paper mentioning the diagonal-steps walk at digitalcommons.wku.edu/cgi/… Maybe it or some references in it will be useful.
$endgroup$
– Steve Kass
Dec 22 '18 at 23:36
$begingroup$
Where did that strange expression $7-\dfrac{20}{\pi}$ come from? Is there a theoretical basis for it, or is it just numerically close to your computed value?
$endgroup$
– TonyK
Dec 22 '18 at 23:43
$begingroup$
Sorry - too late to edit the comment, but the starting point should be $(2,3)$.
$endgroup$
– Steve Kass
Dec 22 '18 at 23:43
$begingroup$
I had posted an answer, but it was to a misreading of the question. Deleted now. To clarify so that others don't make the same mistake, the two gamblers are not gambling against each other, but instead independently gambling against the house. It's a two-dimensional problem.
$endgroup$
– jmerry
Dec 23 '18 at 0:17
$begingroup$
Actually, it might not help so much, because it has $\pi$ in the numerator, not the denominator, but hopefully you'll get there.
$endgroup$
– Steve Kass
Dec 23 '18 at 2:42
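As the comments note, the two gamblers play independently against the house. The closed form $7-\frac{20}{\pi}\approx0.6338$ can also be checked by direct simulation of the race (a sketch; the function and parameter names are mine):

```python
import random

def race(target_a=2, target_b=3, max_tosses=100_000):
    """Both gamblers toss fair coins independently; return 1 if A reaches
    target_a first, 0 if B reaches target_b first, None if neither finishes."""
    a = b = 0
    for _ in range(max_tosses):
        a += random.choice((-1, 1))
        b += random.choice((-1, 1))
        # By parity, A can only finish on even tosses and B on odd ones,
        # so they can never finish on the same toss here.
        if a == target_a:
            return 1
        if b == target_b:
            return 0
    return None  # undecided within the cutoff: very rare for this race

random.seed(0)
results = [race() for _ in range(100_000)]
decided = [r for r in results if r is not None]
print(sum(decided) / len(decided))  # close to 7 - 20/pi ≈ 0.6338
```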