Dominoes and induction, or how does induction work?
I've never really understood why math induction is supposed to work.
You have these 3 steps:
1. Prove true for the base case ($n=0$ or $1$ or whatever).
2. Assume true for $n=k$. Call this the induction hypothesis.
3. Prove true for $n=k+1$, somewhere using the induction hypothesis in your proof.
In my experience the proof is usually algebraic, and you just manipulate the problem until you get the induction hypothesis to appear. If you can do that and it works out, then you say the proof holds.
Here's one I just worked out,
Show $\displaystyle\lim_{x\to\infty} \frac{(\ln x)^n}{x} = 0$
So you go:
Use L'Hospital's rule:
$\displaystyle\lim_{x\to\infty} \frac{\ln x}{x} = 0$,
since that's
$\displaystyle\lim_{x\to\infty} \frac{1}{x} = 0$.
Assume true for $n=k$:
$\displaystyle\lim_{x\to\infty} \frac{(\ln x)^k}{x} = 0$.
Prove true for $n=k+1$, i.e. show
$\displaystyle\lim_{x\to\infty} \frac{(\ln x)^{k+1}}{x} = 0.$
Use L'Hospital again: the limit becomes
$\displaystyle\lim_{x\to\infty} \frac{(k+1)(\ln x)^{k}}{x}.$
Then you see the induction hypothesis appear, and you can say this is equal to $0$.
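Written out in one line, the inductive step here is essentially the following chain (a sketch of the same L'Hospital computation, with the induction hypothesis supplying the last limit):
$$\lim_{x\to\infty} \frac{(\ln x)^{k+1}}{x}
\;=\;
\lim_{x\to\infty} \frac{(k+1)(\ln x)^{k}\cdot\frac{1}{x}}{1}
\;=\;(k+1)\,\lim_{x\to\infty} \frac{(\ln x)^{k}}{x}
\;=\;(k+1)\cdot 0
\;=\;0,$$
where the first equality is L'Hospital's rule.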
What I'm not comfortable with is this idea that you can just assume something to be true ($n=k$), then, based on that assumption, form a proof for the $n=k+1$ case.
I don't see how you can use something you've assumed to be true to prove something else to be true.
Tags: intuition, induction
asked Jan 29 '11 at 21:20 by bobobobo; edited Jan 16 at 19:09 by J.G.
Comments:
– Arturo Magidin (Jan 29 '11 at 21:45): I think [math] is a pretty useless tag here; I hope you don't mind that I changed it into [intuition], which seems more apropos.
– Pete L. Clark (Jan 29 '11 at 23:43): Not to pick, but "dominoes" do not actually appear in your question.
– I. J. Kennedy (Jan 30 '11 at 3:48): You're looking at a train of dominoes, and you see that each one, if toppled, will knock down the next one. What happens when you push over the first domino? Induction!
7 Answers
The inductive step is a proof of an implication: you are proving that if the property you want holds for $k$, then it holds for $k+1$.
It is a result of formal logic that if you can prove $P\rightarrow Q$ (that $P$ implies $Q$), then from $P$ you can prove $Q$; and conversely, that if from assuming that $P$ is true you can prove $Q$, then you can in fact prove $P\rightarrow Q$.
We do this pretty much every time we prove something. For example, suppose you want to prove that if $n$ is a natural number, then $n^2$ is a natural number. How do we start? "Let $n$ be a natural number." Wait! Why are you allowed to just assume that you already have a natural number? Shouldn't you have to start by proving it's a natural number? The answer is no, we don't have to, because we are not trying to prove an absolute, we are trying to prove a conditional statement: that if $n$ is a natural number, then something happens. So we may begin by assuming we are already in the case where the antecedent is true. (Intuitively, this is because if the antecedent is false, then the implication is necessarily true and there is nothing to be done; formally, it is because the Deduction Theorem, which is what I described above, tells you that if you manage to find a formal proof that ends with "$n^2$ is a natural number" by assuming that "$n$ is a natural number" is true, then you can use that proof to produce a formal proof that establishes the implication "if $n$ is a natural number then $n^2$ is a natural number"; we don't have to go through the exercise of actually producing the latter proof, we know it's "out there").
We do that in Calculus: "if $\lim\limits_{x\to x_0}f(x) = a$ and $\lim\limits_{x\to x_0}g(x) = b$, then $\lim\limits_{x\to x_0}(f(x)+g(x)) = a+b$." How do we prove this? We begin by assuming that the limit of $f(x)$ as $x\to x_0$ is $a$, and that the limit of $g(x)$ as $x\to x_0$ is $b$. We assume the premise/antecedent, and proceed to try to prove the consequent.
What this means in the case of induction is that, since the "Inductive Step" is actually a statement that says that an implication holds:
$$mbox{"It" holds for $k$}rightarrow mbox{"it" holds for $k+1$},$$
then in order to prove this implication we can begin by assuming that the antecedent is already true, and then proceed to prove the consequent. Assuming that the antecedent is true is precisely the "Induction Hypothesis".
When you are done with the inductive step, you have in fact not proven that it holds for any particular number, you have only shown that if it holds for a particular number $k$, then it must hold for the next number $k+1$. It is a conditional statement, not an absolute one.
It is only when you combine that conditional statement with the base, which is an absolute statement that says "it" holds for a specific number, that you can conclude that the original statement holds for all natural numbers (greater than or equal to the base).
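In symbols, one standard way of writing the principle being combined here (with $n_0$ standing for whatever the base is):
$$\Big( P(n_0) \;\wedge\; \forall k\ge n_0\ \big(P(k)\rightarrow P(k+1)\big) \Big)\;\longrightarrow\;\forall n\ge n_0\ P(n).$$
The inductive step supplies only the second conjunct; the base supplies the first.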
Since you mention dominoes in your title, I assume you are familiar with the standard metaphor of induction as dominoes standing all in a row, falling. The inductive step is like arguing that all the dominos will fall if you topple the first one (without actually toppling it): first, you argue that each domino is sufficiently close to the next domino so that if one falls, then the next one falls. You are not tumbling every domino. And when you argue this, you argue along the lines of "suppose this one falls; since its length is ...", that is, you assume it falls in order to argue the next one will then fall. This is the same with the inductive step.
In a sense you are right that it feels like "cheating" to assume what you want; but the point is that you aren't really assuming what you want. Again, the inductive step does not in fact establish that the result holds for any number, it only establishes a conditional statement. If the result happens to hold for some $k$, then it would necessarily have to also hold for $k+1$. But we are completely silent on whether it actually holds for $k$ or not. We are not saying anything about that at the inductive-step stage.
Added: Here's an example to emphasize that the "inductive step" does not make any absolute statement, but only a conditional statement: Suppose you want to prove that for all natural numbers $n$, $n+1 = n$.
Inductive step. Induction Hypothesis: The statement holds for $k$; that is, I'm assuming that $k+1 = k$.
To be proven: The statement holds for $k+1$. Indeed: notice that since $k+1= k$, then adding one to both sides of the equation we have $(k+1)+1 = k+1$; this proves the statement holds for $k+1$. QED
This is a perfectly valid proof! It says that if $k+1=k$, then $(k+1)+1=k+1$. This is true! Of course, the antecedent is never true, but the implication is. The reason this is not a full proof by induction of a false statement is that there is no "base"; the inductive step only proves the conditional, nothing more.
By the way: Yes, most proofs by induction that one encounters early on involve algebraic manipulations, but not all proofs by induction are of that kind. Consider the following simplified game of Nim: there are a certain number of matchsticks, and players alternate taking $1$, $2$, or $3$ matchsticks every turn. The person who takes the last matchstick wins.
Proposition. In the simplified game above, the first player has a winning strategy if the number of matchsticks is not divisible by $4$, and the second player has a winning strategy if the number of matchsticks is divisible by 4.
The proof is by (strong) induction, and it involves no algebraic manipulations whatsoever.
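One way such a strong-induction argument can go (a sketch filling in details the answer leaves out): the cases $n=1,2,3$ are the base, since the player to move simply takes all the matchsticks and wins. For $n\ge 4$: if $4\mid n$, every move of $m\in\{1,2,3\}$ matchsticks leaves $n-m$ matchsticks with $4\nmid n-m$, which by the induction hypothesis is a winning position for the opponent; if $4\nmid n$, the player to move takes $n\bmod 4\in\{1,2,3\}$ matchsticks, leaving a multiple of $4$, which by the induction hypothesis is a losing position for the opponent.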
Comments:
– Philoxopher (Apr 15 '11 at 0:28): Very well written.
– Mr.Anubis (Sep 23 '12 at 15:47): In your last Added: section, where is the base statement? Or should I say basis case -- we don't need it?
– Nubcake (May 2 '15 at 10:19): Sorry if my question sounds silly, it's just that I never read about the proof of why induction works, which is why I don't feel confident in doing induction proofs; why does induction show that the property holds for all k? I.e. if we assume P(k) is true then we show P(k+1) is also true. Does it work because k is arbitrary? We can pick k to be anything after we've shown it holds, therefore the induction proof shows it holds for all k.
Arturo's lengthy answer hits all the right points in great detail.
More briefly: you are not doing yourself any favors by writing the Principle of Mathematical Induction as you are: the word "assume" need not appear anywhere in it.
Here is a statement of POMI (see here for variants on this statement and more on mathematical induction; but unfortunately this handout does not explicitly address your question.)
Let $S$ be a subset of the positive integers satisfying both of the following properties:
(i) $1 \in S$.
(ii) For all positive integers $n$, $n \in S \implies n+1 \in S$.
Then $S$ is the set of all positive integers.
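One way to connect this to the usual "property" formulation: given a property $P$, take
$$S = \{\, n \in \mathbb{Z}^{+} : P(n) \text{ holds} \,\}.$$
Then (i) is the base case, (ii) is the inductive step, and the conclusion $S = \mathbb{Z}^{+}$ says exactly that $P(n)$ holds for every positive integer $n$.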
So you need to understand the logic of an implication $A \implies B$. You seem to be thinking that from $A \implies B$ we can deduce $A$: this is absolutely false. Rather $A \implies B$ is a logical predicate defined by its truth table: i.e., for each of the four possible combinations of truth/falsity of $A$ and $B$, we define whether $A \implies B$ is true or false, as follows.
If $A$ is true and $B$ is true, then $A \implies B$ is true.
If $A$ is true and $B$ is false, then $A \implies B$ is false.
If $A$ is false and $B$ is true, then $A \implies B$ is true.
If $A$ is false and $B$ is false, then $A \implies B$ is true.
Thus, the only case in which $A \implies B$ is false is if $A$ is true and $B$ is false. So, if you are trying to prove that $A \implies B$ is true, it is enough to look at the situation when $A$ is true and show that in those cases $B$ is also true. So you see that we are not assuming that $A$ is true: rather, we are looking at that case, since the other case is trivial: if $A$ is false, we don't need to know whether $B$ is true to know that the implication is true.
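The same definition, arranged as a truth table:
$$\begin{array}{cc|c}
A & B & A \implies B\\
\hline
T & T & T\\
T & F & F\\
F & T & T\\
F & F & T
\end{array}$$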
As an aside, I have to say that I wish I had a better understanding of why students have so much trouble with the logic of induction. I personally never did. While that was of some use to me mathematically, at this point I sort of wish that I did have some trouble at first: I would presumably have long since gotten over it but would be in a better situation with regards to my current teaching.
As I alluded to above, I do think that going over some (mildly) formal logic is a good way to be clear about induction (and this is also an argument for covering logic in "transitions" courses, as I wrote about in an answer to a MO question). Still I confess that I don't completely get the problems: this semester I am teaching an intermediate course on sequences and series which has this other intro to proofs course as a prerequisite. So the students have all seen induction before and seem to mostly get it. Mostly but not perfectly: when I did the second induction proof in class [the first one had two variables in it and I was inducting on one of them; in retrospect, that must have been just too complicated for some of the students], several students commented afterwards on the fact that I wrote "Induction step: suppose $n \in \mathbb{N}$. We will show that $P(n) \implies P(n+1)$." (And yes, after that, I said "Assume $P(n)$" and wrote it out: this "assume" business is the language which is used in practice!) In their previous course, for the induction step the variable name had been switched to $k$. They wanted to know "if it was okay" to use $n$ instead.
So I tried putting my hands on my hips and asking: is there any logical difference between the statements
"For all $n in mathbb{N}, P(n) implies P(n+1)$"
and
"For all $k in mathbb{N}, P(k) implies P(k+1)$"?
Of course not: we can use whatever names we want for the variables! (In my own "intro to proofs course" I would have stressed the point that these are quantified variables, so it really can't make a difference, but I am leery of referring to things that other instructors may not have covered.)
But this answer reveals a sort of lack of understanding of the cognitive process behind the question. What are they really confused about? Can someone suggest a better answer?
Comments:
– Adam (Sep 26 '14 at 2:57): So why is it exactly that so many math books and instructors choose to start with n and then substitute k for the inductive step? No one seems to have a justification for doing so other than an apparent convention for summation. It seems quite unnecessary to switch the variable in mid-stream, pretend it's a constant, add one, fiddle around to find a matching pattern and somehow have it all prove itself. That's the impression I get when so many explanations of the steps involved lack the reasons why. Mechanics are more difficult to master when an understanding of the steps is absent.
– Pete L. Clark (Sep 28 '14 at 23:05): @Adam: I sincerely don't know a good answer to your question. Nevertheless, many students seem to find the statement with the variable switched from $n$ to $k$ easier to understand: or more precisely, easier to partially understand, enough to perform certain kinds of induction arguments in practice. I wish I knew why. Concerning your last sentence: I (and every other math instructor) agree! However, although switching the variable name is strange, it is certainly acceptable. I have never found a textbook treatment of mathematical induction that I thought was truly inadequate.
– Adam (Oct 6 '14 at 2:54): When my professor was working through another induction example on the board last week, I saw a line which read something to the effect of "for the case when n = k + 1," where n was an exponent. Suddenly it dawned on me that while "n = n + 1" is completely natural in programming languages (assignment), it would rarely be a true statement in mathematics. I'm satisfied with a single case where this kind of substitution is useful but am still not entirely comfortable with these kinds of proofs. I think part of it comes from a lack of standards for writing proofs (or awareness of them).
– Pete L. Clark (Oct 6 '14 at 19:51): @Adam: I think that's a good explanation. Thanks.
You prove the base case for $n=1$ (or $0$; let's use $1$ for the sake of simplicity), and then you use inductive reasoning. The way you do it is by arguing that if the formula holds for $n=k$, it must also hold for $n=k+1$. "If" is the key word here. Once you have proven that, you have effectively proven that the formula will work for all $n$, and here's why -- you can always start with $n=1$ ($n=k$), and prove that $n=1+1$ ($n=k+1$) will work. Then you can treat $n=1+1$ as $n=k$, and prove that $n=1+1+1$ ($n=k+1$) will satisfy the formula too, and so on to infinity.
In other words, since you have demonstrated that the "base step" works, you only need to demonstrate that the "base step + 1" will work, too. Then the "base step + 1" can be treated as a 'new base step,' crudely put, and the sequence of "base step + 1" can be continued on and on. It does not matter whether the statement actually holds when $n=k$ or not; all that matters is proving the implication -- if it holds for $k$, it will also hold for $k+1$. That allows you to move further from the base step.
So, maybe, the more correct way of stating the method is "Assuming that the statement holds for some $n$, show that it holds for $n+1$." Once you get that, you can go on infinitely as described above. Obviously, this simple form does not fit every proof; there are problems where you will need the Principle of Strong Induction, in which the step for $n+1$ may rely on several earlier cases rather than just $n$.
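Schematically, the base case and the inductive step combine by repeated modus ponens:
$$P(1),\qquad P(1)\rightarrow P(2),\qquad P(2)\rightarrow P(3),\qquad P(3)\rightarrow P(4),\ \dots$$
so $P(2)$ holds, then $P(3)$, then $P(4)$, and for any fixed $n$ the statement $P(n)$ is reached after finitely many such steps.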
Just for another try at the intuition: usually an induction proof is proving something about natural numbers, that is, proving a statement for n, where n goes over all numbers starting from 0 and going on up.
So you want to prove the statement for all $n$. You could prove it for a constant, but that doesn't do it for all $n$. The 'inductive' way is to show that, if your statement is true for some $n$ (you don't (yet) know it's true for all $n$) you can show that the statement is true for $n+1$. The $n$ here isn't -all- $n$, just one you don't know yet. And whichever $n$ this is, you can always reach $n+1$.
The domino image is that the inductive case of a proof shows you can always 'go' from one integer to the next, you -can- always knock over the next domino. The base case just means you can knock over the first domino in a sequence. The intuition is if those two things hold, then -all- the dominos will fall over. Knock over block 0, and it will knock over block 1, and then that will knock over block 2 and...that will knock over block $n$, and then that will knock over $n+1$, and then that will... (go on forever).
Back to the proof itself, you're not assuming the thing you're trying to prove (which is truth for -all- $n$). You're assuming that -if- it is true for some $n$, then it must be true for $n+1$.
In formal symbols you're trying to prove $$\text{for all } n,\ P(n)$$ ($P(n)$ is the formula with $n$ in it).
The inductive part of the proof is $$\text{for all } n,\ \big(P(n) \implies P(n+1)\big)$$ (any one domino can knock over the next, no matter which domino you're thinking of).
At the risk of seeming repetitive (see "How to prove the mathematical induction is true?"), the principle of mathematical induction is just one of Peano's axioms for the natural numbers (Axiom 5, below):
Axiom 1: 1 is a natural number. That is, the set of natural numbers is not empty; it contains an object called 1 (read "one").
Axiom 2: For each natural number $x$ there exists a unique natural number, called the successor of $x$, which will be denoted by $x'$.
Axiom 3: For each natural number $x$, we have $x' \neq 1$.
Axiom 4: If $x'=y'$ then $x=y$.
Axiom 5 (Axiom of Induction): Suppose $M$ is a subset of the natural numbers, such that (a) 1 belongs to $M$, and (b) if $x$ belongs to $M$ then so does $x'$. Then $M$ contains all natural numbers.
Source: http://www.ms.uky.edu/~lee/ma502/notes2/node7.html
If you have trouble visualizing these relationships, I find it helps to think of the set of natural numbers as a directed graph (a network of nodes connected by one-way arrows) as follows:
(1) --> (2) --> (3) --> ...
In words: There is a node (call it 1) that has no arrows going into it (Axioms 1 and 3, above). Every node has exactly one arrow leaving it (Axiom 2). Every node has at most one arrow going into it (Axiom 4). You can get to any node by starting at 1 and following the arrows; that is, there are no isolated nodes or portions of the graph off to the side (Axiom 5).
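To see that Axiom 5 is doing real work here: the disjoint union of the chain above with a separate two-way infinite chain $\cdots \to (a) \to (b) \to (c) \to \cdots$ off to the side would still satisfy Axioms 1 through 4, but not Axiom 5 (take $M$ to be the original chain). Axiom 5 is exactly what rules out such extra components, and it is exactly what a proof by induction invokes.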
A slightly different point of view - from a programmer's perspective.
Consider Peano representation of natural numbers.
In short, every natural number is either zero or a successor of a natural number.
This way we can construct an arbitrary natural number n by applying the successor function n times to zero.
We can represent natural numbers as a type:
data Natural : Type where
  Zero : Natural
  Succ : Natural -> Natural
This simply means that "Zero
is a Natural
" and "if given a Natural
, we can construct an another Natural
with the Succ
function". A mathematician might be more familiar with "Zero
is a constant function" and Succ
is a function with domain Natural
and range Natural
.
Now let us define a simple operation for Natural
so that we have something to prove.
(+) : Natural -> Natural -> Natural
(+) Zero y = y
(+) (Succ x) y = Succ (x + y)
This simply defines a piecewise-specified function (+) with domain (Natural X Natural) and range Natural, but written in its curried form, hence (+) : Natural -> (Natural -> Natural), considering (->) is right associative.
We state that Zero plus y is y, and that Succ x plus y is the same thing as Succ of x plus y. This is a definition of the operation.
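As a small sanity check on the definition, 2 + 1, that is Succ (Succ Zero) + Succ Zero, unfolds like this:
Succ (Succ Zero) + Succ Zero
  = Succ (Succ Zero + Succ Zero)    -- second equation: (+) (Succ x) y = Succ (x + y)
  = Succ (Succ (Zero + Succ Zero))  -- second equation again
  = Succ (Succ (Succ Zero))         -- first equation: (+) Zero y = y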
Finally we can write a simple inductive proof!
$(\forall x \in \mathbb{N})\ x+0 = x$
In Idris, which is the programming language I'm using here, we say:
x_plus_zero_is_x : (x : Natural) -> x + Zero = x
Inductive base:
We need a proof for 0 (Zero).
x_plus_zero_is_x Zero = ?something
What is ?something then? Performing the simple substitution x = Zero we get Zero + Zero = Zero. By definition, Zero + Zero is Zero (look at (+) Zero y = y for y = Zero), hence we are left with: Zero = Zero. If the left hand side is exactly equal to the right hand side, we can construct the equality (=) by simply saying Refl (from "reflexive"). Therefore ?something = Refl, ergo:
x_plus_zero_is_x Zero = Refl
Inductive step:
Now for Succ k.
x_plus_zero_is_x (Succ k) = ?what
Here is an important thing to notice: x_plus_zero_is_x is, in this context, a function. By saying we assume something, we can view it as "being given an argument for the function call". That is, if the function is not called (and its arguments are not provided), we're perfectly fine even without its definition. And what are we going to assume? Well, let us assume that this property holds for the immediate next component of the structure, therefore "unwrapping" one Succ. As said before, to assume can in some sense be viewed as "to actually call the thing", therefore we can create a small inline helper call like so:
x_plus_zero_is_x (Succ k) =
  let induction = x_plus_zero_is_x k in ?what
Now induction is simply a recursive function call to x_plus_zero_is_x, but not with (Succ k), just with k, which happens to be the immediate component of the structure, whatever its shape might be (Zero or another Succ k'). By doing the substitution x = k, we get k + Zero = k.
What is ?what, however? Let us perform the substitution x = Succ k: we get (Succ k) + Zero = Succ k, and by definition (look at (+) (Succ x) y = Succ (x + y)) we get:
Succ (k + Zero) = Succ k.
We cannot use Refl yet because clearly the right hand side is not the left hand side. However, from our assumption (the recursive call of the function with k), we have that k + Zero = k, from which Succ k = Succ k and therefore Refl. In Idris we say:
x_plus_zero_is_x (Succ k) =
  let induction = x_plus_zero_is_x k in rewrite induction in Refl
The rewrite induction simply performs the aforementioned k + Zero = k substitution. Therefore we end up with:
x_plus_zero_is_x : (x : Natural) -> x + Zero = x
x_plus_zero_is_x Zero = Refl -- inductive base
x_plus_zero_is_x (Succ k) = -- inductive step
  let induction = x_plus_zero_is_x k in rewrite induction in Refl
Which is a constructive proof that the equality holds. Notice that we simply called the function in question with something, which is our way of assuming things. We can also assume we have x : Nat because the function was called with some x! Shiny. The entire correspondence between propositions and types (and further, between programs and proofs, and between proof normalization and program execution) is known as the Curry-Howard Isomorphism.
In explanations of the principle of induction, I think the 'domino' example is often not explored enough for it to be intuitive. Let me endeavour a more understandable explanation using the domino analogy.
As always, you first prove that P(1) is true (or P(2) or P(3) or P(4 septillion), depending on where you want your hypothesis to be valid; if you don't care whether P(1) is true, you can begin from P(2) or wherever).
Now, since P(k) implies P(k+1) (which you prove by algebraic manipulation), and P(1) is true, P(2) is true. Since P(2) is true, P(2+1) is true. Since P(3) is true, P(4) is true. Since P(100,000) is true, P(100,001) is true.
This way, as you see, you can say P(n) is true for any natural number n.
$endgroup$
add a comment |
Your Answer
StackExchange.ifUsing("editor", function () {
return StackExchange.using("mathjaxEditing", function () {
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
});
});
}, "mathjax-editing");
StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "69"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});
function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});
}
});
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f19485%2fdominoes-and-induction-or-how-does-induction-work%23new-answer', 'question_page');
}
);
Post as a guest
Required, but never shown
7 Answers
7
active
oldest
votes
7 Answers
7
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
The inductive step is a proof of an implication: you are proving that if the property you want holds for $k$, then it holds for $k+1$.
It is a result of formal logic that if you can prove $Prightarrow Q$ (that $P$ implies $Q$), then from $P$ you can prove $Q$; and conversely, that if from assuming that $P$ is true you can prove $Q$, then you can in fact prove $Prightarrow Q$.
We do this pretty much every time we prove something. For example, suppose you want to prove that if $n$ is a natural number, then $n^2$ is a natural number. How do we start? "Let $n$ be a natural number." Wait! Why are you allowed to just assume that you already have a natural number? Shouldn't you have to start by proving it's a natural number? The answer is no, we don't have to, because we are not trying to prove an absolute, we are trying to prove a conditional statement: that if $n$ is a natural number, then something happens. So we may begin by assuming we are already in the case where the antecedent is true. (Intuitively, this is because if the antecedent is false, then the implication is necessarily true and there is nothing to be done; formally, it is because the Deduction Theorem, which is what I described above, tells you that if you manage to find a formal proof that ends with "$n^2$ is a natural number" by assuming that "$n$ is a natural number" is true, then you can use that proof to produce a formal proof that establishes the implication "if $n$ is a natural number then $n^2$ is a natural number"; we don't have to go through the exercise of actually producing the latter proof, we know it's "out there").
We do that in Calculus: "if $limlimits_{xto x_0}f(x) = a$ and $limlimits_{xto x_0}g(x) = b$, then $limlimits_{xto x_0}(f(x)+g(x)) = a+b$." How do we prove this? We begin by assuming that the limit of $f(x)$ as $xto x_0$ is $a$, and that the limit of $g(x)$ as $xto x_0$ is $b$. We assume the premise/antecedent, and procede to try to prove the consequent.
What this means in the case of induction is that, since the "Inductive Step" is actually a statement that says that an implication holds:
$$mbox{"It" holds for $k$}rightarrow mbox{"it" holds for $k+1$},$$
then in order to prove this implication we can begin by assuming that the antecedent is already true, and then procede to prove the consequent. Assuming that the antecedent is true is precisely the "Induction Hypothesis".
When you are done with the inductive step, you have in fact not proven that it holds for any particular number, you have only shown that if it holds for a particular number $k$, then it must hold for the next number $k+1$. It is a conditional statement, not an absolute one.
It is only when you combine that conditional statement with the base, which is an absolute statement that says "it" holds for a specific number, that you can conclude that the original statement holds for all natural numbers (greater than or equal to the base).
Since you mention dominoes in your title, I assume you are familiar with the standard metaphor of induction like dominoes that are standing all in a row falling. The inductive step is like arguing that all the dominos will fall if you topple the first one (without actually toppling it): first, you argue that each domino is sufficiently close to the next domino so that if one falls, then the next one falls. You are not tumbling every domino. And when you argue this, you argue along the lines of "suppose this one falls; since it's length is ...", that is, you assume it falls in order to argue the next one will then fall. This is the same with the inductive step.
In a sense you are right that if feels like "cheating" to assume what you want; but the point is that you aren't really assuming what you want. Again, the inductive step does not in fact establish that the result holds for any number, it only establishes a conditional statement. If the result happens to hold for some $k$, then it would necessarily have to also hold for $k+1$. But we are completely silent on whether it actually holds for $k$ or not. We are not saying anything about that at the inductive-step stage.
Added: Here's an example to emphasize that the "inductive step" does not make any absolute statement, but only a conditional statement: Suppose you want to prove that for all natural numbers $n$, $n+1 = n$.
Inductive step. Induction Hypothesis: The statement holds for $k$; that is, I'm assuming that $k+1 = k$.
To be proven: The statement holds for $k+1$. Indeed: notice that since $k+1= k$, then adding one to both sides of the equation we have $(k+1)+1 = k+1$; this proves the statement holds for $k+1$. QED
This is a perfectly valid proof! It says that if $k+1=k$, then $(k+1)+1=k+1$. This is true! Of course, the antecedent is never true, but the implication is. The reason this is not a full proof by induction of a false statement is that there is no "base"; the inductive step only proves the conditional, nothing more.
By the way: Yes, most proofs by induction that one encounters early on involve algebraic manipulations, but not all proofs by induction are of that kind. Consider the following simplified game of Nim: there are a certain number of matchsticks, and players alternate taking $1$, $2$, or $3$ matchsticks every turn. The person who takes the last matchstick wins.
Proposition. In the simplified game above, the first player has a winning strategy if the number of matchsticks is not divisible by $4$, and the second player has a winning strategy if the number of matchsticks is divisible by 4.
The proof is by (strong) induction, and it involves no algebraic manipulations whatsoever.
$endgroup$
1
$begingroup$
Very well written.
$endgroup$
– Philoxopher
Apr 15 '11 at 0:28
$begingroup$
in you last Added: section , where is the base statement? or should I say basis case, we don't need it?
$endgroup$
– Mr.Anubis
Sep 23 '12 at 15:47
$begingroup$
Sorry if my question sounds silly,it's just I never read about the proof of why induction works which is why I don't feel confident in doing induction proofs; why induction show that the property holds for all k? i.e if we assume P(k) is true then we show P(k+1) is also true. Does it work because k is arbitrary ? We can pick k to be anything after we've shown it holds therefore the induction proof shows it holds for all k.
$endgroup$
– Nubcake
May 2 '15 at 10:19
add a comment |
$begingroup$
The inductive step is a proof of an implication: you are proving that if the property you want holds for $k$, then it holds for $k+1$.
It is a result of formal logic that if you can prove $Prightarrow Q$ (that $P$ implies $Q$), then from $P$ you can prove $Q$; and conversely, that if from assuming that $P$ is true you can prove $Q$, then you can in fact prove $Prightarrow Q$.
We do this pretty much every time we prove something. For example, suppose you want to prove that if $n$ is a natural number, then $n^2$ is a natural number. How do we start? "Let $n$ be a natural number." Wait! Why are you allowed to just assume that you already have a natural number? Shouldn't you have to start by proving it's a natural number? The answer is no, we don't have to, because we are not trying to prove an absolute, we are trying to prove a conditional statement: that if $n$ is a natural number, then something happens. So we may begin by assuming we are already in the case where the antecedent is true. (Intuitively, this is because if the antecedent is false, then the implication is necessarily true and there is nothing to be done; formally, it is because the Deduction Theorem, which is what I described above, tells you that if you manage to find a formal proof that ends with "$n^2$ is a natural number" by assuming that "$n$ is a natural number" is true, then you can use that proof to produce a formal proof that establishes the implication "if $n$ is a natural number then $n^2$ is a natural number"; we don't have to go through the exercise of actually producing the latter proof, we know it's "out there").
We do that in Calculus: "if $limlimits_{xto x_0}f(x) = a$ and $limlimits_{xto x_0}g(x) = b$, then $limlimits_{xto x_0}(f(x)+g(x)) = a+b$." How do we prove this? We begin by assuming that the limit of $f(x)$ as $xto x_0$ is $a$, and that the limit of $g(x)$ as $xto x_0$ is $b$. We assume the premise/antecedent, and procede to try to prove the consequent.
What this means in the case of induction is that, since the "Inductive Step" is actually a statement that says that an implication holds:
$$mbox{"It" holds for $k$}rightarrow mbox{"it" holds for $k+1$},$$
then in order to prove this implication we can begin by assuming that the antecedent is already true, and then procede to prove the consequent. Assuming that the antecedent is true is precisely the "Induction Hypothesis".
When you are done with the inductive step, you have in fact not proven that it holds for any particular number, you have only shown that if it holds for a particular number $k$, then it must hold for the next number $k+1$. It is a conditional statement, not an absolute one.
It is only when you combine that conditional statement with the base, which is an absolute statement that says "it" holds for a specific number, that you can conclude that the original statement holds for all natural numbers (greater than or equal to the base).
Since you mention dominoes in your title, I assume you are familiar with the standard metaphor of induction like dominoes that are standing all in a row falling. The inductive step is like arguing that all the dominos will fall if you topple the first one (without actually toppling it): first, you argue that each domino is sufficiently close to the next domino so that if one falls, then the next one falls. You are not tumbling every domino. And when you argue this, you argue along the lines of "suppose this one falls; since it's length is ...", that is, you assume it falls in order to argue the next one will then fall. This is the same with the inductive step.
In a sense you are right that if feels like "cheating" to assume what you want; but the point is that you aren't really assuming what you want. Again, the inductive step does not in fact establish that the result holds for any number, it only establishes a conditional statement. If the result happens to hold for some $k$, then it would necessarily have to also hold for $k+1$. But we are completely silent on whether it actually holds for $k$ or not. We are not saying anything about that at the inductive-step stage.
Added: Here's an example to emphasize that the "inductive step" does not make any absolute statement, but only a conditional statement: Suppose you want to prove that for all natural numbers $n$, $n+1 = n$.
Inductive step. Induction Hypothesis: The statement holds for $k$; that is, I'm assuming that $k+1 = k$.
To be proven: The statement holds for $k+1$. Indeed: notice that since $k+1= k$, then adding one to both sides of the equation we have $(k+1)+1 = k+1$; this proves the statement holds for $k+1$. QED
This is a perfectly valid proof! It says that if $k+1=k$, then $(k+1)+1=k+1$. This is true! Of course, the antecedent is never true, but the implication is. The reason this is not a full proof by induction of a false statement is that there is no "base"; the inductive step only proves the conditional, nothing more.
By the way: Yes, most proofs by induction that one encounters early on involve algebraic manipulations, but not all proofs by induction are of that kind. Consider the following simplified game of Nim: there are a certain number of matchsticks, and players alternate taking $1$, $2$, or $3$ matchsticks every turn. The person who takes the last matchstick wins.
Proposition. In the simplified game above, the first player has a winning strategy if the number of matchsticks is not divisible by $4$, and the second player has a winning strategy if the number of matchsticks is divisible by 4.
The proof is by (strong) induction, and it involves no algebraic manipulations whatsoever.
$endgroup$
1
$begingroup$
Very well written.
$endgroup$
– Philoxopher
Apr 15 '11 at 0:28
$begingroup$
in you last Added: section , where is the base statement? or should I say basis case, we don't need it?
$endgroup$
– Mr.Anubis
Sep 23 '12 at 15:47
$begingroup$
Sorry if my question sounds silly,it's just I never read about the proof of why induction works which is why I don't feel confident in doing induction proofs; why induction show that the property holds for all k? i.e if we assume P(k) is true then we show P(k+1) is also true. Does it work because k is arbitrary ? We can pick k to be anything after we've shown it holds therefore the induction proof shows it holds for all k.
$endgroup$
– Nubcake
May 2 '15 at 10:19
add a comment |
$begingroup$
The inductive step is a proof of an implication: you are proving that if the property you want holds for $k$, then it holds for $k+1$.
It is a result of formal logic that if you can prove $Prightarrow Q$ (that $P$ implies $Q$), then from $P$ you can prove $Q$; and conversely, that if from assuming that $P$ is true you can prove $Q$, then you can in fact prove $Prightarrow Q$.
We do this pretty much every time we prove something. For example, suppose you want to prove that if $n$ is a natural number, then $n^2$ is a natural number. How do we start? "Let $n$ be a natural number." Wait! Why are you allowed to just assume that you already have a natural number? Shouldn't you have to start by proving it's a natural number? The answer is no, we don't have to, because we are not trying to prove an absolute, we are trying to prove a conditional statement: that if $n$ is a natural number, then something happens. So we may begin by assuming we are already in the case where the antecedent is true. (Intuitively, this is because if the antecedent is false, then the implication is necessarily true and there is nothing to be done; formally, it is because the Deduction Theorem, which is what I described above, tells you that if you manage to find a formal proof that ends with "$n^2$ is a natural number" by assuming that "$n$ is a natural number" is true, then you can use that proof to produce a formal proof that establishes the implication "if $n$ is a natural number then $n^2$ is a natural number"; we don't have to go through the exercise of actually producing the latter proof, we know it's "out there").
We do that in Calculus: "if $limlimits_{xto x_0}f(x) = a$ and $limlimits_{xto x_0}g(x) = b$, then $limlimits_{xto x_0}(f(x)+g(x)) = a+b$." How do we prove this? We begin by assuming that the limit of $f(x)$ as $xto x_0$ is $a$, and that the limit of $g(x)$ as $xto x_0$ is $b$. We assume the premise/antecedent, and procede to try to prove the consequent.
What this means in the case of induction is that, since the "Inductive Step" is actually a statement that says that an implication holds:
$$mbox{"It" holds for $k$}rightarrow mbox{"it" holds for $k+1$},$$
then in order to prove this implication we can begin by assuming that the antecedent is already true, and then procede to prove the consequent. Assuming that the antecedent is true is precisely the "Induction Hypothesis".
When you are done with the inductive step, you have in fact not proven that it holds for any particular number, you have only shown that if it holds for a particular number $k$, then it must hold for the next number $k+1$. It is a conditional statement, not an absolute one.
It is only when you combine that conditional statement with the base, which is an absolute statement that says "it" holds for a specific number, that you can conclude that the original statement holds for all natural numbers (greater than or equal to the base).
Since you mention dominoes in your title, I assume you are familiar with the standard metaphor of induction like dominoes that are standing all in a row falling. The inductive step is like arguing that all the dominos will fall if you topple the first one (without actually toppling it): first, you argue that each domino is sufficiently close to the next domino so that if one falls, then the next one falls. You are not tumbling every domino. And when you argue this, you argue along the lines of "suppose this one falls; since it's length is ...", that is, you assume it falls in order to argue the next one will then fall. This is the same with the inductive step.
In a sense you are right that if feels like "cheating" to assume what you want; but the point is that you aren't really assuming what you want. Again, the inductive step does not in fact establish that the result holds for any number, it only establishes a conditional statement. If the result happens to hold for some $k$, then it would necessarily have to also hold for $k+1$. But we are completely silent on whether it actually holds for $k$ or not. We are not saying anything about that at the inductive-step stage.
Added: Here's an example to emphasize that the "inductive step" does not make any absolute statement, but only a conditional statement: Suppose you want to prove that for all natural numbers $n$, $n+1 = n$.
Inductive step. Induction Hypothesis: The statement holds for $k$; that is, I'm assuming that $k+1 = k$.
To be proven: The statement holds for $k+1$. Indeed: notice that since $k+1= k$, then adding one to both sides of the equation we have $(k+1)+1 = k+1$; this proves the statement holds for $k+1$. QED
This is a perfectly valid proof! It says that if $k+1=k$, then $(k+1)+1=k+1$. This is true! Of course, the antecedent is never true, but the implication is. The reason this is not a full proof by induction of a false statement is that there is no "base"; the inductive step only proves the conditional, nothing more.
By the way: Yes, most proofs by induction that one encounters early on involve algebraic manipulations, but not all proofs by induction are of that kind. Consider the following simplified game of Nim: there are a certain number of matchsticks, and players alternate taking $1$, $2$, or $3$ matchsticks every turn. The person who takes the last matchstick wins.
Proposition. In the simplified game above, the first player has a winning strategy if the number of matchsticks is not divisible by $4$, and the second player has a winning strategy if the number of matchsticks is divisible by 4.
The proof is by (strong) induction, and it involves no algebraic manipulations whatsoever.
$endgroup$
The inductive step is a proof of an implication: you are proving that if the property you want holds for $k$, then it holds for $k+1$.
It is a result of formal logic that if you can prove $Prightarrow Q$ (that $P$ implies $Q$), then from $P$ you can prove $Q$; and conversely, that if from assuming that $P$ is true you can prove $Q$, then you can in fact prove $Prightarrow Q$.
We do this pretty much every time we prove something. For example, suppose you want to prove that if $n$ is a natural number, then $n^2$ is a natural number. How do we start? "Let $n$ be a natural number." Wait! Why are you allowed to just assume that you already have a natural number? Shouldn't you have to start by proving it's a natural number? The answer is no, we don't have to, because we are not trying to prove an absolute, we are trying to prove a conditional statement: that if $n$ is a natural number, then something happens. So we may begin by assuming we are already in the case where the antecedent is true. (Intuitively, this is because if the antecedent is false, then the implication is necessarily true and there is nothing to be done; formally, it is because the Deduction Theorem, which is what I described above, tells you that if you manage to find a formal proof that ends with "$n^2$ is a natural number" by assuming that "$n$ is a natural number" is true, then you can use that proof to produce a formal proof that establishes the implication "if $n$ is a natural number then $n^2$ is a natural number"; we don't have to go through the exercise of actually producing the latter proof, we know it's "out there").
We do that in Calculus: "if $limlimits_{xto x_0}f(x) = a$ and $limlimits_{xto x_0}g(x) = b$, then $limlimits_{xto x_0}(f(x)+g(x)) = a+b$." How do we prove this? We begin by assuming that the limit of $f(x)$ as $xto x_0$ is $a$, and that the limit of $g(x)$ as $xto x_0$ is $b$. We assume the premise/antecedent, and procede to try to prove the consequent.
What this means in the case of induction is that, since the "Inductive Step" is actually a statement that says that an implication holds:
$$mbox{"It" holds for $k$}rightarrow mbox{"it" holds for $k+1$},$$
then in order to prove this implication we can begin by assuming that the antecedent is already true, and then procede to prove the consequent. Assuming that the antecedent is true is precisely the "Induction Hypothesis".
When you are done with the inductive step, you have in fact not proven that it holds for any particular number, you have only shown that if it holds for a particular number $k$, then it must hold for the next number $k+1$. It is a conditional statement, not an absolute one.
It is only when you combine that conditional statement with the base, which is an absolute statement that says "it" holds for a specific number, that you can conclude that the original statement holds for all natural numbers (greater than or equal to the base).
Since you mention dominoes in your title, I assume you are familiar with the standard metaphor of induction like dominoes that are standing all in a row falling. The inductive step is like arguing that all the dominos will fall if you topple the first one (without actually toppling it): first, you argue that each domino is sufficiently close to the next domino so that if one falls, then the next one falls. You are not tumbling every domino. And when you argue this, you argue along the lines of "suppose this one falls; since it's length is ...", that is, you assume it falls in order to argue the next one will then fall. This is the same with the inductive step.
In a sense you are right that if feels like "cheating" to assume what you want; but the point is that you aren't really assuming what you want. Again, the inductive step does not in fact establish that the result holds for any number, it only establishes a conditional statement. If the result happens to hold for some $k$, then it would necessarily have to also hold for $k+1$. But we are completely silent on whether it actually holds for $k$ or not. We are not saying anything about that at the inductive-step stage.
Added: Here's an example to emphasize that the "inductive step" does not make any absolute statement, but only a conditional statement: Suppose you want to prove that for all natural numbers $n$, $n+1 = n$.
Inductive step. Induction Hypothesis: The statement holds for $k$; that is, I'm assuming that $k+1 = k$.
To be proven: The statement holds for $k+1$. Indeed: notice that since $k+1= k$, then adding one to both sides of the equation we have $(k+1)+1 = k+1$; this proves the statement holds for $k+1$. QED
This is a perfectly valid proof! It says that if $k+1=k$, then $(k+1)+1=k+1$. This is true! Of course, the antecedent is never true, but the implication is. The reason this is not a full proof by induction of a false statement is that there is no "base"; the inductive step only proves the conditional, nothing more.
By the way: Yes, most proofs by induction that one encounters early on involve algebraic manipulations, but not all proofs by induction are of that kind. Consider the following simplified game of Nim: there are a certain number of matchsticks, and players alternate taking $1$, $2$, or $3$ matchsticks every turn. The person who takes the last matchstick wins.
Proposition. In the simplified game above, the first player has a winning strategy if the number of matchsticks is not divisible by $4$, and the second player has a winning strategy if the number of matchsticks is divisible by 4.
The proof is by (strong) induction, and it involves no algebraic manipulations whatsoever.
edited Jan 30 '11 at 4:58
answered Jan 29 '11 at 21:38
Arturo MagidinArturo Magidin
263k34587914
263k34587914
1
$begingroup$
Very well written.
$endgroup$
– Philoxopher
Apr 15 '11 at 0:28
$begingroup$
in you last Added: section , where is the base statement? or should I say basis case, we don't need it?
$endgroup$
– Mr.Anubis
Sep 23 '12 at 15:47
$begingroup$
Sorry if my question sounds silly; I've just never read a proof of why induction works, which is why I don't feel confident doing induction proofs. Why does induction show that the property holds for all $k$? I.e., if we assume $P(k)$ is true then we show $P(k+1)$ is also true. Does it work because $k$ is arbitrary? We can pick $k$ to be anything after we've shown it holds, so the induction proof shows it holds for all $k$.
$endgroup$
– Nubcake
May 2 '15 at 10:19
add a comment |
$begingroup$
Arturo's lengthy answer hits all the right points in great detail.
More briefly: you are not doing yourself any favors by writing the Principle of Mathematical Induction as you are: the word "assume" need not appear anywhere in it.
Here is a statement of POMI (see here for variants on this statement and more on mathematical induction; but unfortunately this handout does not explicitly address your question.)
Let $S$ be a subset of the positive integers satisfying both of the following properties:
(i) $1 in S$.
(ii) For all positive integers $n$, $n \in S \implies n+1 \in S$.
Then $S$ is the set of all positive integers.
So you need to understand the logic of an implication $A \implies B$. You seem to be thinking that from $A \implies B$ we can deduce $A$: this is absolutely false. Rather $A \implies B$ is a logical predicate defined by its truth table: i.e., for each of the four possible combinations of truth/falsity of $A$ and $B$, we define whether $A \implies B$ is true or false, as follows.
If $A$ is true and $B$ is true, then $A \implies B$ is true.
If $A$ is true and $B$ is false, then $A \implies B$ is false.
If $A$ is false and $B$ is true, then $A \implies B$ is true.
If $A$ is false and $B$ is false, then $A \implies B$ is true.
Thus, the only case in which $A \implies B$ is false is if $A$ is true and $B$ is false. So, if you are trying to prove that $A \implies B$ is true, it is enough to look at the situation when $A$ is true and show that in those cases $B$ is also true. So you see that we are not assuming that $A$ is true: rather, we are looking at that case, since the other case is trivial: if $A$ is false, we don't need to know whether $B$ is true to know that the implication is true.
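(A small illustration of my own, not part of Pete's answer: the same truth table can be enumerated mechanically, e.g. in Python, writing the implication as "(not A) or B".)
# Enumerate the truth table of "A implies B", written as (not A) or B.
for A in (True, False):
    for B in (True, False):
        print(A, B, (not A) or B)
# Only the row A=True, B=False prints False, matching the table above.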
As an aside, I have to say that I wish I had a better understanding of why students have so much trouble with the logic of induction. I personally never did. While that was of some use to me mathematically, at this point I sort of wish that I did have some trouble at first: I would presumably have long since gotten over it but would be in a better situation with regards to my current teaching.
As I alluded to above, I do think that going over some (mildly) formal logic is a good way to be clear about induction (and this is also an argument for covering logic in "transitions" courses, as I wrote about in an answer to a MO question). Still I confess that I don't completely get the problems: this semester I am teaching an intermediate course on sequences and series which has this other intro to proofs course as a prerequisite. So the students have all seen induction before and seem to mostly get it. Mostly but not perfectly: when I did the second induction proof in class [the first one had two variables in it and I was inducting on one of them; in retrospect, that must have been just too complicated for some of the students], several students commented afterwards on the fact that I wrote "Induction step: suppose $n \in \mathbb{N}$. We will show that $P(n) \implies P(n+1)$." (And yes, after that, I said "Assume $P(n)$" and wrote it out: this "assume" business is the language which is used in practice!) In their previous course, for the induction step the variable name had been switched to $k$. They wanted to know "if it was okay" to use $n$ instead.
So I tried putting my hands on my hips and asking: is there any logical difference between the statements
"For all $n in mathbb{N}, P(n) implies P(n+1)$"
and
"For all $k in mathbb{N}, P(k) implies P(k+1)$"?
Of course not: we can use whatever names we want for the variables! (In my own "intro to proofs course" I would have stressed the point that these are quantified variables, so it really can't make a difference, but I am leery of referring to things that other instructors may not have covered.)
But this answer reveals a sort of lack of understanding of the cognitive process behind the question. What are they really confused about? Can someone suggest a better answer?
$endgroup$
$begingroup$
So why is it exactly that so many math books and instructors choose to start with n and then substitute k for the inductive step? No one seems to have a justification for doing so other than an apparent convention for summation. It seems quite unnecessary to switch the variable in mid-stream, pretend it's a constant, add one, fiddle around to find a matching pattern and somehow have it all prove itself. That's the impression I get when so many explanations of the steps involved lack the reasons why. Mechanics are more difficult to master when an understanding of the steps is absent.
$endgroup$
– Adam
Sep 26 '14 at 2:57
$begingroup$
@Adam: I sincerely don't know a good answer to your question. Nevertheless, many students seem to find the statement with the variable switched from $n$ to $k$ easier to understand: or more precisely, easier to partially understand, enough to perform certain kinds of induction arguments in practice. I wish I knew why. Concerning your last sentence: I (and every other math instructor) agree! However, although switching the variable name is strange, it is certainly acceptable. I have never found a textbook treatment of mathematical induction that I thought was truly inadequate.
$endgroup$
– Pete L. Clark
Sep 28 '14 at 23:05
1
$begingroup$
When my professor was working through another induction example on the board last week, I saw the line which read, something to the effect of "for the case when n = k + 1," where n was an exponent. Suddenly it dawned on me that while "n = n + 1" is completely natural in programming languages (assignment), it would rarely be a true statement in mathematics. I'm satisfied with a single case where this kind of substitution is useful but am still not entirely comfortable with these kinds of proofs. I think part of it comes from a lack of standards for writing proofs (or awareness of them).
$endgroup$
– Adam
Oct 6 '14 at 2:54
$begingroup$
@Adam: I think that's a good explanation. Thanks.
$endgroup$
– Pete L. Clark
Oct 6 '14 at 19:51
add a comment |
edited Dec 23 '16 at 16:46
Martin Sleziak
answered Jan 30 '11 at 0:11
Pete L. Clark
$begingroup$
You prove the base case for $n=1$ (or $0$; let's use $1$ for the sake of simplicity), and then you use inductive reasoning. The way you do it is by arguing that if the formula holds for $n=k$, it must also hold for $n=k+1$. "If" is the key word here. Once you have proven that, you have effectively proven that the formula will work for all $n$, and here's why -- you can always start with $n=1$ ($n=k$), and prove that $n=1+1$ ($n=k+1$) will work. Then you can treat $n=1+1$ as $n=k$, and prove that $n=1+1+1$ ($n=k+1$) will satisfy the formula too, and so on to infinity.
In other words, since you have demonstrated that the "base step" works, you only need to demonstrate that the "base step + 1" will work, too. Then the "base step + 1" can be treated as a 'new base step,' crudely put, and the sequence of "base step + 1" can be continued on and on. It does not matter whether the statement actually holds when $n=k$ or not; all that matters is proving the implication -- if it holds for $k$, it will also hold for $k+1$. That allows you to move further from the base step.
So, maybe, the more correct way of stating the method is "Assuming that the statement holds for some $n$, show that it holds for $n+1$." Once you get that, you can go on infinitely as described above. Obviously, this principle does not work for all proofs, as there are problems where you will need to use the Principle of Strong Induction (break the problem down into cases, for which $n$ and $n+1$ actually correspond).
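(A sketch of my own, not part of the answer, to make the "chaining" mechanical: given a check of the base case and a check that each step preserves truth, you reach any concrete $n$ by applying the step finitely many times. The names statement_holds_up_to, p_base and step_preserves are hypothetical, invented for this illustration.)
# Hypothetical sketch: replay the induction argument up to a concrete n by
# conjoining the base-case check with the step check "P(k) implies P(k+1)"
# for k = 1, ..., n-1. If every check passes, P(n) has been reached.
def statement_holds_up_to(n, p_base, step_preserves):
    established = p_base
    for k in range(1, n):
        established = established and step_preserves(k)
    return established

# Example with the (true) statement P(n): "n < n + 1"; the step check is the
# material implication "(not P(k)) or P(k+1)".
print(statement_holds_up_to(100,
                            p_base=(1 < 2),
                            step_preserves=lambda k: (not (k < k + 1)) or (k + 1 < k + 2)))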
$endgroup$
add a comment |
edited Oct 22 '18 at 14:43
Babelfish
answered Jan 29 '11 at 21:27
InterestedGuest
add a comment |
$begingroup$
Just for another try at the intuition: usually an induction proof is proving something about the natural numbers, that is, proving a statement for $n$, where $n$ ranges over all the numbers starting from 0 and going on up.
So you want to prove the statement for all $n$. You could prove it for a constant, but that doesn't do it for all $n$. The 'inductive' way is to show that, if your statement is true for some $n$ (you don't (yet) know it's true for all $n$), then the statement is true for $n+1$. The $n$ here isn't -all- $n$, just one you don't know yet. And whichever $n$ this is, you can always reach $n+1$.
The domino image is that the inductive case of a proof shows you can always 'go' from one integer to the next, you -can- always knock over the next domino. The base case just means you can knock over the first domino in a sequence. The intuition is if those two things hold, then -all- the dominos will fall over. Knock over block 0, and it will knock over block 1, and then that will knock over block 2 and...that will knock over block $n$, and then that will knock over $n+1$, and then that will... (go on forever).
Back to the proof itself, you're not assuming the thing you're trying to prove (which is truth for -all- $n$). You're assuming that -if- it is true for some $n$, then it must be true for $n+1$.
In formal symbols you're trying to prove $$\text{for all } n,\ P(n)$$ ($P(n)$ is the formula with $n$ in it).
The inductive part of the proof is $$\text{for all } n,\ \big(P(n) \implies P(n+1)\big)$$ (any one domino can knock over the next, no matter which domino you're thinking of).
$endgroup$
add a comment |
answered Jan 30 '11 at 1:24
Mitch
add a comment |
$begingroup$
At the risk of seeming repetitive (see "How to prove the mathematical induction is true?"), the principle of mathematical induction is just one of Peano's axioms for the natural numbers (Axiom 5, below):
Axiom 1: 1 is a natural number. That is, the set of natural numbers is not empty; it contains an object called 1 (read "one").
Axiom 2: For each natural number $x$ there exists a unique natural number, called the successor of $x$, which will be denoted by $x'$.
Axiom 3: For each natural number $x$, we have $x' \neq 1$.
Axiom 4: If $x'=y'$ then $x=y$.
Axiom 5 (Axiom of Induction): Suppose $M$ is a subset of the natural numbers, such that (a) 1 belongs to $M$, and (b) if $x$ belongs to $M$ then so does $x'$. Then $M$ contains all natural numbers.
Source: http://www.ms.uky.edu/~lee/ma502/notes2/node7.html
If you have trouble visualizing these relationships, I find it helps to think of the set of natural numbers as a directed graph (a network of nodes connected by one-way arrows), as follows:
(1) --> (2) --> (3) --> ...
In words: There is a node (call it 1) that has no arrows going into it (Axioms 1 and 3, above). Every node has exactly one arrow leaving it (Axiom 2). Every node has at most one arrow going into it (Axiom 4). You can get to any node by starting at 1 and following the arrows; that is, there are no isolated nodes or portions of the graph off to the side (Axiom 5).
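(A concrete rendering of my own, not part of Dan's answer: the "follow the arrows" reading of Axiom 5 can be sketched in a few lines of Python; successor and path_from_one are hypothetical names chosen for this illustration.)
# Hypothetical sketch of the arrow diagram: the successor function is the single
# arrow leaving each node (Axiom 2); Axiom 5 says that following the arrows from 1
# eventually visits any given natural number (assumed here to be >= 1).
def successor(x):
    return x + 1

def path_from_one(target):
    """List the nodes visited while walking the arrows from 1 to target."""
    node, path = 1, [1]
    while node != target:
        node = successor(node)
        path.append(node)
    return path

print(path_from_one(5))   # [1, 2, 3, 4, 5]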
$endgroup$
add a comment |
edited Aug 1 '12 at 6:11
answered Mar 4 '11 at 6:17
Dan Christensen
add a comment |
$begingroup$
A slightly different point of view - from a programmer's perspective.
Consider the Peano representation of natural numbers.
In short, every natural number is either zero or the successor of a natural number.
This way we can construct an arbitrary natural number $n$ by applying the successor function $n$ times to zero.
We can represent natural numbers as a type:
data Natural : Type where
  Zero : Natural
  Succ : Natural -> Natural
This simply means that "Zero
is a Natural
" and "if given a Natural
, we can construct an another Natural
with the Succ
function". A mathematician might be more familiar with "Zero
is a constant function" and Succ
is a function with domain Natural
and range Natural
.
Now let us define a simple operation for Natural so that we have something to prove.
(+) : Natural -> Natural -> Natural
(+) Zero y = y
(+) (Succ x) y = Succ (x + y)
This defines a piecewise-specified function (+) whose domain is (Natural X Natural) and whose range is Natural, but written in its curried form, hence (+) : Natural -> (Natural -> Natural), considering that (->) is right associative.
We state that Zero plus y is y, and that Succ x plus y is the same thing as Succ of (x plus y). This is the definition of the operation.
Finally we can write a simple inductive proof!
$(\forall x \in \mathbb{N})\ x + 0 = x$
In Idris, which is the programming language I'm using here, we say:
x_plus_zero_is_x : (x : Natural) -> x + Zero = x
Inductive base:
We need a proof for 0 (Zero):
x_plus_zero_is_x Zero = ?something
What is ?something then? Performing the simple substitution x = Zero we get Zero + Zero = Zero. By definition, Zero + Zero is Zero (look at (+) Zero y = y for y = Zero), hence we are left with Zero = Zero. If the left-hand side is exactly equal to the right-hand side, we can construct the equality (=) by simply saying Refl (from "reflexive"). Therefore ?something = Refl, ergo:
x_plus_zero_is_x Zero = Refl
Inductive step:
Now for the Succ case:
x_plus_zero_is_x (Succ k) = ?what
Here is an important thing to notice: x_plus_zero_is_x is, in this context, a function. By saying we assume something, we can view it as "being given an argument for the function call". That is, if the function is not called (and its arguments are not provided), we're perfectly fine even without its definition. And what are we going to assume? Well, let us assume that this property holds for the immediate next component of the structure, therefore "unwrapping" one Succ. As said before, to assume can in some sense be viewed as "to actually call the thing", therefore we can create a small inline helper call like so:
x_plus_zero_is_x (Succ k) =
  let induction = x_plus_zero_is_x k in ?what
Now induction is simply a recursive call to x_plus_zero_is_x, not with (Succ k) but just with k, which happens to be the immediate component of the structure, whatever its shape might be (Zero or another Succ k'). By doing the substitution x = k, we get k + Zero = k.
What is ?what, however? Let us perform the substitution x = Succ k: we get (Succ k) + Zero = Succ k, and by definition (look at (+) (Succ x) y = Succ (x + y)) we get Succ (k + Zero) = Succ k.
We cannot use Refl yet, because the right-hand side is clearly not the left-hand side. However, from our assumption (the recursive call of the function with k), we have that k + Zero = k, from which Succ k = Succ k, and therefore Refl. In Idris we say:
x_plus_zero_is_x (Succ k) =
  let induction = x_plus_zero_is_x k in rewrite induction in Refl
The rewrite induction simply performs the aforementioned k + Zero = k substitution. Therefore we end up with:
x_plus_zero_is_x : (x : Natural) -> x + Zero = x
x_plus_zero_is_x Zero = Refl -- inductive base
x_plus_zero_is_x (Succ k) = -- inductive step
  let induction = x_plus_zero_is_x k in rewrite induction in Refl
This is a constructive proof that the equality holds. Notice that we simply called the function in question with something; that is our way of assuming things. We can also assume we have x : Natural, because the function was called with some x! Shiny. The entire correspondence between propositions and types (and, further, programs as proofs and proof normalization as program execution) is known as the Curry-Howard isomorphism.
$endgroup$
add a comment |
$begingroup$
A slightly different point of view - from a programmer's perspective.
Consider Peano representation of natural numbers.
In short, every natural number is either zero or a successor of a natural number.
This way we can construct an arbitrary natural number n by applying the successor function n times to zero.
We can represent natural numbers as a type:
data Natural : Type where
Zero : Natural
Succ : Natural -> Natural
This simply means that "Zero
is a Natural
" and "if given a Natural
, we can construct an another Natural
with the Succ
function". A mathematician might be more familiar with "Zero
is a constant function" and Succ
is a function with domain Natural
and range Natural
.
Now let us define a simple operation for Natural
so that we have something to prove.
(+) : Natural -> Natural -> Natural
(+) Zero y = y
(+) (Succ x) y = Succ (x + y)
Which simply defines a part-wise specified function (+)
with range (Natural X Natural)
, but written in its curried form (Natural -> Natural)
and range Natural
, hence (+) : Natural -> (Natural -> Natural)
, considering (->)
is right associative.
We state that Zero
plus y
= y
and Succ x
plus y
is the same thing as Succ
of x
plus y
. This is a definition of the operation.
Finally we can write a simple inductive proof!
$(forall x in mathbb{N}) x+0 = x$
In Idris, which is the programming language I'm using here, we say:x_plus_zero_is_x : (x : Natural) -> x + Zero = x
Inductive base:
We need a proof for 0 (Zero
).x_plus_zero_is_x Zero = ?something
What is ?something
then? Performing simple substitution x = Zero
we get Zero + Zero = Zero
. By definition, Zero + Zero
is Zero
(look at (+) Zero y = y
for y = Zero
) hence we are left with: Zero = Zero
. If the left hand side is exactly equal to the right hand side, we can construct equality (=)
by simply saying Refl
(from "reflexive"). Therefore ?something = Refl
ergo:
x_plus_zero_is_x Zero = Refl
Inductive step:
Now for the Succ y
.x_plus_zero_is_x (Succ k) = ?what
Here is an important thing to notice. x_plus_zero_is_x
is in this context a function. By saying we assume something, we can view it as "being given an argument for the function call". That is, if the function is not called (and its arguments are not provided) we're perfectly fine even without its definition. And what are we going to assume? Well, let us assume that this property holds for the immediate next component of the structure, therefore "unwrapping" one Succ
. As said before, to assume can in some sense be viewed as "to actually call the thing", therefore we can create a small inline helper call like so:
x_plus_zero_is_x (Succ k) =
let induction = x_plus_zero_is_x k in ?what
Now induction
is simply a recursive function call to x_plus_zero_is_x
but not with (Succ k)
, just with k
, which happens to be the immediate component of the structure, whatever its shape might be (Zero
or another Succ k'
). By doing substitution x = k
, we get k + Zero = k
.
What is ?what
however? Let us perform substitution x = Succ k
, we get (Succ k) + Zero = Succ k
and by definition (look at (+) (Succ x) y = Succ (x + y)
) we get:
Succ (k + Zero) = Succ k
.
We cannot use Refl
yet because clearly rhs is not lhs. However, from our assumption (the recursive call of the function with k
), we have that k + Zero = k
, from which Succ k = Succ k
and therefore Refl
. In Idris we say:
x_plus_zero_is_x (Succ k) =
let induction = x_plus_zero_is_x k in rewrite induction in Refl
The rewrite induction
simply performs the aforementioned k + Zero = k
substitution. Therefore we end up with:
x_plus_zero_is_x : (x : Natural) -> x + Zero = x
x_plus_zero_is_x Zero = Refl -- inductive base
x_plus_zero_is_x (Succ k) = -- inductive step
let induction = x_plus_zero_is_x k in rewrite induction in Refl
Which is a constructive proof that the equality holds. Notice that we simply called the function in question with something, which is our way of assuming things. We can also assume we have x : Nat
because the function was called with some x
! Shiny. The entire correspondence between propositions and types (and further programs as proofs, proof normalization as program execution) is known as Curry-Howard Isomorphism.
$endgroup$
A slightly different point of view - from a programmer's perspective.
Consider Peano representation of natural numbers.
In short, every natural number is either zero or a successor of a natural number.
This way we can construct an arbitrary natural number n by applying the successor function n times to zero.
We can represent natural numbers as a type:
data Natural : Type where
Zero : Natural
Succ : Natural -> Natural
This simply means that "Zero
is a Natural
" and "if given a Natural
, we can construct an another Natural
with the Succ
function". A mathematician might be more familiar with "Zero
is a constant function" and Succ
is a function with domain Natural
and range Natural
.
Now let us define a simple operation for Natural
so that we have something to prove.
(+) : Natural -> Natural -> Natural
(+) Zero y = y
(+) (Succ x) y = Succ (x + y)
Which simply defines a part-wise specified function (+)
with range (Natural X Natural)
, but written in its curried form (Natural -> Natural)
and range Natural
, hence (+) : Natural -> (Natural -> Natural)
, considering (->)
is right associative.
We state that Zero
plus y
= y
and Succ x
plus y
is the same thing as Succ
of x
plus y
. This is a definition of the operation.
Finally we can write a simple inductive proof!
$(forall x in mathbb{N}) x+0 = x$
In Idris, which is the programming language I'm using here, we say:x_plus_zero_is_x : (x : Natural) -> x + Zero = x
Inductive base:
We need a proof for 0 (Zero
).x_plus_zero_is_x Zero = ?something
What is ?something
then? Performing simple substitution x = Zero
we get Zero + Zero = Zero
. By definition, Zero + Zero
is Zero
(look at (+) Zero y = y
for y = Zero
) hence we are left with: Zero = Zero
. If the left hand side is exactly equal to the right hand side, we can construct equality (=)
by simply saying Refl
(from "reflexive"). Therefore ?something = Refl
ergo:
x_plus_zero_is_x Zero = Refl
Inductive step:
Now for the Succ y
.x_plus_zero_is_x (Succ k) = ?what
Here is an important thing to notice. x_plus_zero_is_x
is in this context a function. By saying we assume something, we can view it as "being given an argument for the function call". That is, if the function is not called (and its arguments are not provided) we're perfectly fine even without its definition. And what are we going to assume? Well, let us assume that this property holds for the immediate next component of the structure, therefore "unwrapping" one Succ
. As said before, to assume can in some sense be viewed as "to actually call the thing", therefore we can create a small inline helper call like so:
x_plus_zero_is_x (Succ k) =
let induction = x_plus_zero_is_x k in ?what
Now induction
is simply a recursive function call to x_plus_zero_is_x
but not with (Succ k)
, just with k
, which happens to be the immediate component of the structure, whatever its shape might be (Zero
or another Succ k'
). By doing substitution x = k
, we get k + Zero = k
.
What is ?what
however? Let us perform substitution x = Succ k
, we get (Succ k) + Zero = Succ k
and by definition (look at (+) (Succ x) y = Succ (x + y)
) we get:
Succ (k + Zero) = Succ k
.
We cannot use Refl
yet because clearly rhs is not lhs. However, from our assumption (the recursive call of the function with k
), we have that k + Zero = k
, from which Succ k = Succ k
and therefore Refl
. In Idris we say:
x_plus_zero_is_x (Succ k) =
let induction = x_plus_zero_is_x k in rewrite induction in Refl
The rewrite induction
simply performs the aforementioned k + Zero = k
substitution. Therefore we end up with:
x_plus_zero_is_x : (x : Natural) -> x + Zero = x
x_plus_zero_is_x Zero = Refl -- inductive base
x_plus_zero_is_x (Succ k) = -- inductive step
let induction = x_plus_zero_is_x k in rewrite induction in Refl
Which is a constructive proof that the equality holds. Notice that we simply called the function in question with something, which is our way of assuming things. We can also assume we have x : Nat
because the function was called with some x
! Shiny. The entire correspondence between propositions and types (and further programs as proofs, proof normalization as program execution) is known as Curry-Howard Isomorphism.
answered Apr 12 '16 at 19:55
ScarletAmaranth
add a comment |
$begingroup$
In explanations of the principle of induction, I think the 'domino' example is often not explored enough for it to become intuitive. Let me attempt a more understandable explanation using the domino analogy.
As always, you first prove that P(1) is true (or P(2), or P(3), or P(4 septillion), depending on where you want your claim to start being valid; if you don't care whether P(1) is true, you can begin from P(2) or wherever).
Now, since P(k) implies P(k+1) (which you prove, usually by algebraic manipulation), and P(1) is true, P(2) is true. Since P(2) is true, P(3) is true. Since P(3) is true, P(4) is true. And so on: since P(100,000) is true, P(100,001) is true.
This way, as you see, you can conclude that P(n) is true for every natural number n.
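For readers who liked the programmer's perspective in the earlier answer, here is a minimal sketch in Idris of the domino picture itself, reusing the same Natural type (the name induct is my own choice for illustration): given the first fallen domino and a rule saying that each fallen domino knocks over the next one, every domino falls.
data Natural : Type where
  Zero : Natural
  Succ : Natural -> Natural
induct : (P : Natural -> Type)
      -> P Zero                                  -- the first domino falls (base case)
      -> ((k : Natural) -> P k -> P (Succ k))    -- each domino knocks over the next (inductive step)
      -> (n : Natural) -> P n                    -- conclusion: every domino falls
induct P base step Zero     = base
induct P base step (Succ k) = step k (induct P base step k)
Unfolding induct P base step (Succ (Succ (Succ Zero))) gives step (Succ (Succ Zero)) (step (Succ Zero) (step Zero base)), which is exactly the chain of implications written out above: one application of the step per domino, only starting the count at zero instead of one.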
$endgroup$
answered Jun 3 '16 at 9:44
Tristan Henrik Calloway
add a comment |