Solve the operations needed for the recursive formula
$f(n) = 1+\frac{1}{n}\sum_{i = 0}^{n - 1} f(i)$
Base case: $f(0) = 0$
How can I solve this recurrence?
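For reference, here is a direct evaluation of the recurrence in exact arithmetic (a minimal Python sketch, just a literal translation of the formula above):

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n: int) -> Fraction:
    """f(n) = 1 + (1/n) * sum_{i<n} f(i), with f(0) = 0, in exact arithmetic."""
    if n == 0:
        return Fraction(0)
    return 1 + Fraction(1, n) * sum(f(i) for i in range(n))

print([str(f(n)) for n in range(1, 6)])  # ['1', '3/2', '11/6', '25/12', '137/60']
```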
Tags: algorithms, recursive-algorithms
asked Jan 9 at 2:32 by HelloWorld; edited Jan 12 at 13:57
2 Answers
Your first step if you don't know what to do should be to find some of the small values of $f(n)$. This gives you $f(1) = 1$, $f(2) = \frac32 = 1 + \frac12$, $f(3) = \frac{11}{6} = 1 + \frac12 + \frac13$, and we begin to conjecture that $f(n)$ is the $n^{\text{th}}$ harmonic number $H_n := 1 + \frac12 + \frac13 + \dots + \frac1n$. If this is true, then it solves the problem, since $H_n = \ln n + \gamma + O(\frac1n)$, where $\gamma$ is the Euler–Mascheroni constant, and in particular $H_n = O(\log n)$.
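A quick numerical check of this conjecture against the asymptotic $\ln n + \gamma$ (a sketch; the bottom-up loop is just the recurrence with a running sum):

```python
import math

def f_values(n_max: int) -> list[float]:
    """Bottom-up f(m) = 1 + (sum of earlier values) / m, with f(0) = 0."""
    vals, running = [0.0], 0.0
    for m in range(1, n_max + 1):
        vals.append(1.0 + running / m)
        running += vals[-1]
    return vals

gamma = 0.5772156649015329  # Euler–Mascheroni constant
vals = f_values(1000)
for n in (10, 100, 1000):
    H_n = sum(1.0 / j for j in range(1, n + 1))
    print(n, round(vals[n], 6), round(H_n, 6), round(math.log(n) + gamma, 6))
```

The first two columns agree to floating-point precision, and the third closes in as $n$ grows.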
We can prove that $f(n) = H_n$ by induction. Assuming that the pattern holds for $f(1), f(2), \dots, f(n-1)$, we have
$$
f(n) = 1 + \frac1n \sum_{i=1}^{n-1} H_i = 1 + \frac1n \sum_{i=1}^{n-1} \sum_{j=1}^i \frac1j.
$$
If we switch the order of summation (which is always a good step to take when dealing with a double sum), we get
$$
f(n) = 1 + \frac1n \sum_{i=1}^{n-1} H_i = 1 + \frac1n \sum_{j=1}^{n-1} \sum_{i=j}^{n-1} \frac1j = 1 + \frac1n \sum_{j=1}^{n-1} \frac{n-j}{j} = 1 + \frac1n \sum_{j=1}^{n-1} \frac nj - \frac1n \sum_{j=1}^{n-1} 1
$$
and this simplifies to $1 + H_{n-1} - \frac{n-1}{n} = \frac1n + H_{n-1} = H_n$.
The summation at work here,
$$
\sum_{i=0}^{n-1} H_i = n H_n - n,
$$
is worth remembering, and easy to remember, since it closely mirrors the integral
$$
\int_0^x \ln t \,\mathrm{d}t = x \ln x - x.
$$
It comes up in many other algorithm analyses (for instance, the expected running time of quicksort with a random pivot).
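The identity is also easy to sanity-check with exact arithmetic (a short sketch):

```python
from fractions import Fraction

def H(n: int) -> Fraction:
    """The n-th harmonic number, exactly."""
    return sum((Fraction(1, j) for j in range(1, n + 1)), Fraction(0))

for n in (1, 2, 5, 20, 100):
    assert sum((H(i) for i in range(n)), Fraction(0)) == n * H(n) - n
print("sum_{i<n} H_i == n*H_n - n for all tested n")
```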
Alternative solution: the recurrence $f(n) = 1 + \frac1n \sum_{i=0}^{n-1} f(i)$ contains a sum that's hard to evaluate directly but changes very little from term to term. We can get a simpler recurrence by combining the equations for $f(n)$ and $f(n-1)$ to eliminate the sum.
Take the equations $$n f(n) = n + \sum_{i=0}^{n-1} f(i)$$ and $$(n-1) f(n-1) = (n-1) + \sum_{i=0}^{n-2} f(i)$$ which are the recurrence relations for $n$ and for $n-1$, with denominators cleared to make the sums more similar. If we take the difference, we get $$n f(n) - (n-1) f(n-1) = 1 + f(n-1)$$ which simplifies to $n f(n) = 1 + n f(n-1)$, or $f(n) = f(n-1) + \frac1n$. Since $f(0) = 0$, we conclude that $$f(n) = \sum_{i=1}^n \frac1i = H_n.$$
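A cross-check that the telescoped recurrence reproduces the original one exactly (a sketch with `fractions`, so there is no floating-point slack):

```python
from fractions import Fraction

def f_original(n: int) -> Fraction:
    """Direct bottom-up evaluation of f(n) = 1 + (1/n) * sum_{i<n} f(i)."""
    vals, running = [Fraction(0)], Fraction(0)
    for m in range(1, n + 1):
        vals.append(1 + running / m)
        running += vals[-1]
    return vals[n]

f_simple = Fraction(0)
for n in range(1, 30):
    f_simple += Fraction(1, n)          # telescoped: f(n) = f(n-1) + 1/n
    assert f_simple == f_original(n)    # agrees exactly with the original recurrence
print("telescoped and original recurrences agree for n = 1..29")
```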
answered Jan 9 at 16:44 (edited Jan 9 at 17:02) by Misha Lavrov
Given any set $S$ of distinct natural numbers with cardinality $n$, pick $x \in S$. For any $y \neq x$ in $S$, $P(y > x) = \frac{1}{2}$ (the naive definition of probability applies here). Thus, by linearity of expectation, we expect $n \times \frac{1}{2}$ elements of the set to be greater than $x$. This means each iteration is expected to halve the size of the set, so the set becomes empty after about $\log_2 n$ iterations.
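To make this concrete: if the step picks a uniform random element and keeps only the strictly larger ones, the surviving size is uniform on $\{0, \dots, n-1\}$, which matches the recurrence in the question. Under that assumption (the question does not state the process, so this is a guess at the model), a quick simulation illustrates the comment thread below: the expected number of steps tracks $H_n \approx \ln n + \gamma$, not $\log_2 n$.

```python
import math, random

def steps_until_empty(n: int) -> int:
    """One run: each step replaces size n by a uniform draw from {0, ..., n-1}."""
    count = 0
    while n > 0:
        n = random.randrange(n)  # new size is uniform on {0, ..., n-1}
        count += 1
    return count

n, trials = 1000, 20000
avg = sum(steps_until_empty(n) for _ in range(trials)) / trials
H_n = sum(1.0 / j for j in range(1, n + 1))
print(f"simulated {avg:.3f}  vs  H_n = {H_n:.3f}  vs  log2(n) = {math.log2(n):.3f}")
```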
answered Jan 9 at 3:13 by Klint Qinami
– Misha Lavrov (Jan 9 at 16:34): Technically, you have only shown that after $\log_2 n$ iterations, the expected size of the set is $1$. This is not the same thing as the expected number of iterations until the set reaches size $0$ (or size $1$, for that matter).
– Misha Lavrov (Jan 9 at 16:48): (And in fact, we see that in this example, the number of iterations until the expected size is $1$ is off from the expected number of iterations until the size is $1$ by a constant factor. This is because a better-than-average turn helps us by more than an equivalent worse-than-average turn hurts us, due to the concavity of $\log$.)
– Klint Qinami (Jan 9 at 17:08): @Misha Lavrov Yes, you are correct. Then to complete this response, it remains to show that the number of iterations until the expected size is 0 (which this response shows is $\log_2 n + 1$) and the expected number of iterations until the size is 0 differ by a factor smaller than $\log n$. I don't currently see a way to do that other than just figuring out the expected number of iterations until the size is 0.