Is $\frac{\textrm{d}y}{\textrm{d}x}$ not a ratio?
In the book Thomas's Calculus (11th edition) it is mentioned (Section 3.8, p. 225) that the derivative $\frac{\textrm{d}y}{\textrm{d}x}$ is not a ratio. Couldn't it be interpreted as a ratio? According to the formula $\textrm{d}y = f'(x)\,\textrm{d}x$ we are able to plug in values for $\textrm{d}x$ and calculate a $\textrm{d}y$ (differential). If we then rearrange we get $\frac{\textrm{d}y}{\textrm{d}x}$, which could be seen as a ratio.
I wonder if the author says this because $\textrm{d}x$ is an independent variable and $\textrm{d}y$ is a dependent variable; for $\frac{\textrm{d}y}{\textrm{d}x}$ to be a ratio, both variables would need to be independent... wouldn't they?
calculus analysis math-history nonstandard-analysis
math.stackexchange.com/questions/1548487/…
– user117644
Dec 26 '15 at 22:58
It can be roughly interpreted as the rate of change of $y$ as a function of $x$, though this statement has many shortcomings.
– adityaguharoy
Aug 28 '17 at 17:39
$dy$ compared to $dx$ is how I look at it.
– mick
Jun 10 '18 at 21:50
edited Sep 19 '17 at 5:35 Anonymous196
asked Feb 9 '11 at 16:23 BB_ML
20 Answers
Historically, when Leibniz conceived of the notation, $\frac{dy}{dx}$ was supposed to be a quotient: it was the quotient of the "infinitesimal change in $y$ produced by the change in $x$" divided by the "infinitesimal change in $x$".
However, the formulation of calculus with infinitesimals in the usual setting of the real numbers leads to a lot of problems. For one thing, infinitesimals can't exist in the usual setting of real numbers! Because the real numbers satisfy an important property, called the Archimedean Property: given any positive real number $\epsilon\gt 0$, no matter how small, and given any positive real number $M\gt 0$, no matter how big, there exists a natural number $n$ such that $n\epsilon\gt M$. But an "infinitesimal" $\xi$ is supposed to be so small that no matter how many times you add it to itself, it never gets to $1$, contradicting the Archimedean Property. Other problems: Leibniz defined the tangent to the graph of $y=f(x)$ at $x=a$ by saying "Take the point $(a,f(a))$; then add an infinitesimal amount to $a$, $a+dx$, and take the point $(a+dx,f(a+dx))$, and draw the line through those two points." But if they are two different points on the graph, then it's not a tangent, and if it's just one point, then you can't define the line because you just have one point. That's just two of the problems with infinitesimals. (See below where it says "However...", though.)
So Calculus was essentially rewritten from the ground up in the following 200 years to avoid these problems, and you are seeing the results of that rewriting (that's where limits came from, for instance). Because of that rewriting, the derivative is no longer a quotient, now it's a limit:
$$\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}.$$
And because we cannot express this limit-of-a-quotient as a quotient-of-the-limits (both numerator and denominator go to zero), the derivative is not a quotient.
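A quick numerical sketch of this point (my own example, not from the answer): for $f(x)=x^2$ at $x=3$, the numerator and denominator of the difference quotient each tend to $0$, yet the quotient itself tends to $f'(3)=6$.

```python
# Difference quotient for f(x) = x^2 at x = 3: numerator -> 0 and h -> 0,
# but their ratio -> f'(3) = 6; the limit exists although "0/0" does not.
def f(x):
    return x * x

x = 3.0
for h in [1e-1, 1e-3, 1e-5]:
    numerator = f(x + h) - f(x)         # shrinks toward 0
    print(h, numerator, numerator / h)  # last column approaches 6
```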
However, Leibniz's notation is very suggestive and very useful; even though derivatives are not really quotients, in many ways they behave as if they were quotients. So we have the Chain Rule:
$$\frac{dy}{dx} = \frac{dy}{du}\;\frac{du}{dx}$$
which looks very natural if you think of the derivatives as "fractions". You have the Inverse Function theorem, which tells you that
$$\frac{dx}{dy} = \frac{1}{\quad\frac{dy}{dx}\quad},$$
which is again almost "obvious" if you think of the derivatives as fractions. So, because the notation is so nice and so suggestive, we keep the notation even though the notation no longer represents an actual quotient, it now represents a single limit. In fact, Leibniz's notation is so good, so superior to the prime notation and to Newton's notation, that England fell behind all of Europe for centuries in mathematics and science because, due to the fight between Newton's and Leibniz's camp over who had invented Calculus and who stole it from whom (consensus is that they each discovered it independently), England's scientific establishment decided to ignore what was being done in Europe with Leibniz notation and stuck to Newton's... and got stuck in the mud in large part because of it.
(Differentials are part of this same issue: originally, $dy$ and $dx$ really did mean the same thing as those symbols do in $\frac{dy}{dx}$, but that leads to all sorts of logical problems, so they no longer mean the same thing, even though they behave as if they did.)
So, even though we write $\frac{dy}{dx}$ as if it were a fraction, and many computations look like we are working with it like a fraction, it isn't really a fraction (it just plays one on television).
However... There is a way of getting around the logical difficulties with infinitesimals; this is called nonstandard analysis. It's pretty difficult to explain how one sets it up, but you can think of it as creating two classes of real numbers: the ones you are familiar with, that satisfy things like the Archimedean Property, the Supremum Property, and so on, and then you add another, separate class of real numbers that includes infinitesimals and a bunch of other things. If you do that, then you can, if you are careful, define derivatives exactly like Leibniz, in terms of infinitesimals and actual quotients; if you do that, then all the rules of Calculus that make use of $\frac{dy}{dx}$ as if it were a fraction are justified because, in that setting, it is a fraction. Still, one has to be careful because you have to keep infinitesimals and regular real numbers separate and not let them get confused, or you can run into some serious problems.
As a physicist, I prefer Leibniz notation simply because it is dimensionally correct regardless of whether it is derived from the limit or from nonstandard analysis. With Newtonian notation, you cannot automatically tell what the units of $y'$ are.
– rcollyer
Mar 10 '11 at 16:34
Have you any evidence for your claim that "England fell behind Europe for centuries"?
– Kevin H. Lin
Mar 21 '11 at 22:05
@Kevin: Look at the history of math. Shortly after Newton and his students (Maclaurin, Taylor), all the developments in mathematics came from the Continent. It was the Bernoullis, Euler, who developed Calculus, not the British. It wasn't until Hamilton that they started coming back, and when they reformed math teaching in Oxford and Cambridge, they adopted the continental ideas and notation.
– Arturo Magidin
Mar 22 '11 at 1:42
Mathematics really did not have a firm hold in England. It was the Physics of Newton that was admired. Unlike in parts of the Continent, mathematics was not thought of as a serious calling. So the "best" people did other things.
– André Nicolas
Jun 20 '11 at 19:02
There's a free calculus textbook for beginning calculus students based on the nonstandard analysis approach here. Also there is a monograph on infinitesimal calculus aimed at mathematicians and at instructors who might be using the aforementioned book.
– tzs
Jul 6 '11 at 16:24
Just to add some variety to the list of answers, I'm going to go against the grain here and say that you can, in an albeit silly way, interpret $dy/dx$ as a ratio of real numbers.
For every (differentiable) function $f$, we can define a function $df(x; dx)$ of two real variables $x$ and $dx$ via $$df(x; dx) = f'(x)\,dx.$$
Here, $dx$ is just a real number, and no more. (In particular, it is not a differential 1-form, nor an infinitesimal.) So, when $dx \neq 0$, we can write:
$$\frac{df(x;dx)}{dx} = f'(x).$$
All of this, however, should come with a few remarks.
It is clear that these notations above do not constitute a definition of the derivative of $f$. Indeed, we needed to know what the derivative $f'$ meant before defining the function $df$. So in some sense, it's just a clever choice of notation.
But if it's just a trick of notation, why do I mention it at all? The reason is that in higher dimensions, the function $df(x;dx)$ actually becomes the focus of study, in part because it contains information about all the partial derivatives.
To be more concrete, for multivariable functions $f\colon \mathbb{R}^n \to \mathbb{R}$, we can define a function $df(x;dx)$ of two $n$-dimensional variables $x, dx \in \mathbb{R}^n$ via
$$df(x;dx) = df(x_1,\ldots,x_n; dx_1, \ldots, dx_n) = \frac{\partial f}{\partial x_1}dx_1 + \cdots + \frac{\partial f}{\partial x_n}dx_n.$$
Notice that this map $df$ is linear in the variable $dx$. That is, we can write:
$$df(x;dx) = \left(\frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n}\right)
\begin{pmatrix}
dx_1 \\
\vdots \\
dx_n \\
\end{pmatrix}
= A(dx),$$
where $A$ is the $1\times n$ row matrix of partial derivatives.
In other words, the function $df(x;dx)$ can be thought of as a linear function of $dx$, whose matrix has variable coefficients (depending on $x$).
So for the $1$-dimensional case, what is really going on is a trick of dimension. That is, we have the variable $1\times 1$ matrix ($f'(x)$) acting on the vector $dx \in \mathbb{R}^1$ -- and it just so happens that vectors in $\mathbb{R}^1$ can be identified with scalars, and so can be divided.
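A concrete sketch of this linear-map picture (the function and numbers are my own illustration): for $f(x_1,x_2)=x_1^2 x_2$, the value $df(x;dx)$ is the dot product of the row of partial derivatives at $x$ with the increment vector $dx$.

```python
# df(x; dx) for f(x1, x2) = x1^2 * x2: a function linear in dx,
# whose coefficients (the partial derivatives) depend on x.
def df(x, dx):
    partials = [2 * x[0] * x[1], x[0] ** 2]  # (df/dx1, df/dx2) at x
    return sum(p * d for p, d in zip(partials, dx))

x = [3.0, 2.0]            # partials here are (12, 9)
print(df(x, [1.0, 0.0]))  # picks out df/dx1 -> 12.0
print(df(x, [0.0, 1.0]))  # picks out df/dx2 -> 9.0
```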
Finally, I should mention that, as long as we are thinking of $dx$ as a real number, mathematicians multiply and divide by $dx$ all the time -- it's just that they'll usually use another notation. The letter "$h$" is often used in this context, so we usually write $$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h},$$
rather than, say,
$$f'(x) = \lim_{dx \to 0} \frac{f(x+dx) - f(x)}{dx}.$$
My guess is that the main aversion to writing $dx$ is that it conflicts with our notation for differential $1$-forms.
EDIT: Just to be even more technical, and at the risk of being confusing to some, we really shouldn't even be regarding $dx$ as an element of $\mathbb{R}^n$, but rather as an element of the tangent space $T_x\mathbb{R}^n$. Again, it just so happens that we have a canonical identification between $T_x\mathbb{R}^n$ and $\mathbb{R}^n$ which makes all of the above okay, but I like the distinction between tangent space and Euclidean space because it highlights the different roles played by $x \in \mathbb{R}^n$ and $dx \in T_x\mathbb{R}^n$.
Cotangent space. Also, in the case of multiple variables, if you fix $dx^2,\ldots,dx^n = 0$ you can still divide by $dx^1$ and get the derivative. And nothing stops you from defining the differential first and then defining derivatives as its coefficients.
– Alexei Averchenko
Feb 11 '11 at 2:30
Well, canonically differentials are members of the cotangent bundle, and $dx$ is in this case its basis.
– Alexei Averchenko
Feb 11 '11 at 5:54
Maybe I'm misunderstanding you, but I never made any reference to differentials in my post. My point is that $df(x;dx)$ can be likened to the pushforward map $f_*$. Of course one can also make an analogy with the actual differential 1-form $df$, but that's something separate.
– Jesse Madnick
Feb 11 '11 at 6:18
Sure the input is a vector, that's why these linearizations are called covectors, which are members of cotangent space. I can't see why you are bringing up pushforwards when there is a better description right there.
– Alexei Averchenko
Feb 11 '11 at 9:12
I just don't think the pushforward is the best way to view the differential of a function with codomain $\mathbb{R}$ (although it is perfectly correct); it's just too complex an idea for something that has a more natural treatment.
– Alexei Averchenko
Feb 12 '11 at 4:25
My favorite "counterexample" to the derivative acting like a ratio: the implicit differentiation formula for two variables. We have $$\frac{dy}{dx} = -\frac{\partial F/\partial x}{\partial F/\partial y} $$
The formula is almost what you would expect, except for that pesky minus sign.
See http://en.wikipedia.org/wiki/Implicit_differentiation#Formula_for_two_variables for the rigorous definition of this formula.
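A quick sanity check of the formula (my own example, not from the answer): for $F(x,y)=x^2+y^2-1$ the formula gives $dy/dx=-x/y$, which agrees with differentiating the explicit branch $y=\sqrt{1-x^2}$.

```python
import math

# F(x, y) = x^2 + y^2 - 1; the formula gives dy/dx = -(dF/dx)/(dF/dy) = -x/y.
x = 0.6
y = math.sqrt(1 - x * x)              # upper branch of the unit circle
formula = -(2 * x) / (2 * y)          # -x/y from the implicit formula
explicit = -x / math.sqrt(1 - x * x)  # d/dx of sqrt(1 - x^2)
print(formula, explicit)              # the two agree
```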
Yes, but there is a fake proof of this that comes from that kind of reasoning. If $f(x,y)$ is a function of two variables, then $df=\frac{\partial f}{\partial x}dx+\frac{\partial f}{\partial y}$. Now if we pick a level curve $f(x,y)=0$, then $df=0$, so solving for $\frac{dy}{dx}$ gives us the expression above.
– Baby Dragon
Jun 16 '13 at 19:03
Pardon me, but how is this a "fake proof"?
– Lurco
Apr 28 '14 at 0:04
@Lurco: He meant $df = \frac{\partial f}{\partial x} dx + \frac{\partial f}{\partial y} dy$, where those 'infinitesimals' are not proper numbers and hence it is wrong to simply substitute $df=0$, because in fact if we are consistent in our interpretation $df=0$ would imply $dx=dy=0$ and hence we can't get $\frac{dy}{dx}$ anyway. But if we are inconsistent, we can ignore that and proceed to get the desired expression. Correct answer but fake proof.
– user21820
May 13 '14 at 9:58
I agree that this is a good example to show why such notation is not so simple as one might think, but in this case I could say that $dx$ is not the same as $\partial x$. Do you have an example where the terms really cancel to give the wrong answer?
– user21820
May 13 '14 at 10:07
@JohnRobertson of course it is. That's the whole point here, that this sort of "fake proof" thinking leads to wrong results here. I blatantly ignore the fact that $d \neq \partial$ (essentially), and I also completely ignore what $F$ really is. My only point here is that if you try to use this as a mnemonic (or worse, a "proof" method), you will get completely wrong results.
– asmeurer
May 19 '15 at 18:58
It is best to think of $\frac{d}{dx}$ as an operator which takes the derivative, with respect to $x$, of whatever expression follows.
This is an opinion offered without any justification.
– Ben Crowell
Apr 30 '14 at 5:20
What kind of justification do you want? It is a very good argument for the claim that $\frac{dy}{dx}$ is not a fraction! It tells us that $\frac{dy}{dx}$ needs to be seen as $\frac{d}{dx}(y)$, where $\frac{d}{dx}$ is an operator.
– Emin
May 2 '14 at 20:43
Over the hyperreals, $\frac{dy}{dx}$ is a ratio and one can view $\frac{d}{dx}$ as an operator. Therefore Tobin's reply is not a good argument for "telling that dy/dx is not a fraction".
– Mikhail Katz
Dec 15 '14 at 16:11
This doesn't address the question. $\frac{y}{x}$ is clearly a ratio, but it can also be thought of as the operator $\frac{1}{x}$ acting on $y$ by multiplication, so "operator" and "ratio" are not exclusive.
– mlainz
Jan 29 at 10:52
Thanks for the comments - I think all of these criticisms of my answer are valid.
– Tobin Fricke
Jan 30 at 19:38
In Leibniz's mathematics, if $y=x^2$ then $\frac{dy}{dx}$ would be "equal" to $2x$, but the meaning of "equality" to Leibniz was not the same as it is to us. He emphasized repeatedly (for example in his 1695 response to Nieuwentijt) that he was working with a generalized notion of equality "up to" a negligible term. Also, Leibniz used several different pieces of notation for "equality". One of them was the symbol "$\,{}_{\ulcorner\!\urcorner}\,$". To emphasize the point, one could write $$y=x^2\quad \rightarrow \quad \frac{dy}{dx}\,{}_{\ulcorner\!\urcorner}\,2x$$ where $\frac{dy}{dx}$ is literally a ratio. When one expresses Leibniz's insight in this fashion, one is less tempted to commit the ahistorical error of accusing him of a logical inaccuracy.
In more detail, $\frac{dy}{dx}$ is a true ratio in the following sense. We choose an infinitesimal $\Delta x$, and consider the corresponding $y$-increment $\Delta y = f(x+\Delta x)-f(x)$. The ratio $\frac{\Delta y}{\Delta x}$ is then infinitely close to the derivative $f'(x)$. We then set $dx=\Delta x$ and $dy=f'(x)dx$ so that $f'(x)=\frac{dy}{dx}$ by definition. One of the advantages of this approach is that one obtains an elegant proof of the chain rule $\frac{dy}{dx}=\frac{dy}{du}\frac{du}{dx}$ by applying the standard part function to the equality $\frac{\Delta y}{\Delta x}=\frac{\Delta y}{\Delta u}\frac{\Delta u}{\Delta x}$.
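Written out (my sketch, assuming $\Delta u\neq 0$ and using that the standard part function $\operatorname{st}$ is multiplicative on finite hyperreals), the chain-rule computation is:

```latex
% Chain rule via the standard part function (sketch; assumes \Delta u \neq 0):
\frac{dy}{dx}
  = \operatorname{st}\left(\frac{\Delta y}{\Delta x}\right)
  = \operatorname{st}\left(\frac{\Delta y}{\Delta u}\cdot\frac{\Delta u}{\Delta x}\right)
  = \operatorname{st}\left(\frac{\Delta y}{\Delta u}\right)\cdot
    \operatorname{st}\left(\frac{\Delta u}{\Delta x}\right)
  = \frac{dy}{du}\cdot\frac{du}{dx}.
```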
In the real-based approach to the calculus, there are no infinitesimals and therefore it is impossible to interpret $frac{dy}{dx}$ as a true ratio. Therefore claims to that effect have to be relativized modulo anti-infinitesimal foundational commitments.
Note 1. I recently noticed that Leibniz's $\,{}_{\ulcorner\!\urcorner}\,$ notation occurs several times in Margaret Baron's book The origins of infinitesimal calculus, starting on page 282. It's well worth a look.
Note 2. It should be clear that Leibniz did view $\frac{dy}{dx}$ as a ratio. (Some of the other answers seem to be worded ambiguously with regard to this point.)
This is somewhat beside the point, but I don't think that applying the standard part function to prove the Chain Rule is particularly more (or less) elegant than applying the limit as $\Delta x \to 0$. Both attempts hit a snag since $\Delta u$ might be $0$ when $\Delta x$ is not (regardless of whether one is thinking of $\Delta x$ as an infinitesimal quantity or as a standard variable approaching $0$), as for example when $u = x \sin(1/x)$.
– Toby Bartels
Feb 21 '18 at 23:26
$begingroup$
This snag does exist in the epsilon-delta setting, but it does not exist in the infinitesimal setting because if the derivative is nonzero then one necessarily has $\Delta u\not=0$, and if the derivative is zero then there is nothing to prove. @TobyBartels
– Mikhail Katz
Feb 22 '18 at 9:47
Notice that the function you mentioned is undefined (or not differentiable if you define it) at zero, so chain rule does not apply in this case anyway. @TobyBartels
– Mikhail Katz
Feb 22 '18 at 10:15
Sorry, that should be $u = x^2 \sin(1/x)$ (extended by continuity to $x = 0$, which is the argument at issue). If the infinitesimal $\Delta x$ is $1/(n\pi)$ for some (necessarily infinite) hyperinteger $n$, then $\Delta u$ is $0$. It's true that in this case, the derivative $\mathrm{d}u/\mathrm{d}x$ is $0$ too, but I don't see why that matters; why is there nothing to prove in that case? (Conversely, if there's nothing to prove in that case, then doesn't that save the epsilontic proof as well? That's the only way that $\Delta u$ can be $0$ arbitrarily close to the argument.)
– Toby Bartels
Feb 23 '18 at 12:21
If $\Delta u$ is zero then obviously $\Delta y$ is also zero, and therefore both sides of the formula for the chain rule are zero. On the other hand, if the derivative of $u=g(x)$ is nonzero then $\Delta u$ is necessarily nonzero. This is not necessarily the case when one works with finite differences. @TobyBartels
– Mikhail Katz
Feb 24 '18 at 19:28
Typically, the $\frac{dy}{dx}$ notation is used to denote the derivative, which is defined as the limit we all know and love (see Arturo Magidin's answer). However, when working with differentials, one can interpret $\frac{dy}{dx}$ as a genuine ratio of two fixed quantities.
Draw a graph of some smooth function $f$ and its tangent line at $x=a$. Starting from the point $(a, f(a))$, move $dx$ units right along the tangent line (not along the graph of $f$). Let $dy$ be the corresponding change in $y$.
So, we moved $dx$ units right, $dy$ units up, and stayed on the tangent line. Therefore the slope of the tangent line is exactly $\frac{dy}{dx}$. However, the slope of the tangent at $x=a$ is also given by $f'(a)$, hence the equation
$$\frac{dy}{dx} = f'(a)$$
holds when $dy$ and $dx$ are interpreted as fixed, finite changes in the two variables $x$ and $y$. In this context, we are not taking a limit on the left hand side of this equation, and $\frac{dy}{dx}$ is a genuine ratio of two fixed quantities. This is why we can then write $dy = f'(a)\, dx$.
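A numerical illustration of this reading (function and numbers are my own): along the tangent line of $f(x)=\sin x$ at $a=1$, a horizontal step $dx$ produces the rise $dy=f'(a)\,dx$ exactly, while the true change in $f$ merely comes close.

```python
import math

# Tangent-line differentials for f(x) = sin(x) at a = 1.
a, dx = 1.0, 0.1
dy = math.cos(a) * dx                    # rise along the tangent: f'(a) * dx
actual = math.sin(a + dx) - math.sin(a)  # true change in f
print(dy / dx, math.cos(a))              # the ratio dy/dx is exactly f'(a)
print(dy - actual)                       # small, but nonzero
```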
This sounds a lot like the explanation of differentials that I recall hearing from my Calculus I instructor (an analyst of note, an expert on Wiener integrals): "$dy$ and $dx$ are any two numbers whose ratio is the derivative . . . they are useful for people who are interested in (sniff) approximations."
– bof
Dec 28 '13 at 3:34
@bof: But we can't describe almost every real number in the real world, so I guess having approximations is quite good. =)
– user21820
May 13 '14 at 10:13
@user21820 anything that we can approximate to arbitrary precision we can define... It's the result of that algorithm.
– k_g
May 18 '15 at 1:34
@k_g: Yes of course. My comment was last year so I don't remember what I meant at that time anymore, but I probably was trying to say that since we already are limited to countably many definable reals, it's much worse if we limit ourselves even further to closed forms of some kind and eschew approximations. Even more so, in the real world we rarely have exact values but just confidence intervals anyway, and so approximations are sufficient for almost all practical purposes.
– user21820
May 18 '15 at 5:04
Of course it is a ratio.
$dy$ and $dx$ are differentials. Thus they act on tangent vectors, not on points. That is, they are functions on the tangent manifold that are linear on each fiber. On the tangent manifold the ratio of the two differentials $\frac{dy}{dx}$ is just a ratio of two functions and is constant on every fiber (except being ill defined on the zero section). Therefore it descends to a well defined function on the base manifold. We refer to that function as the derivative.
As pointed out in the original question, many calculus one books these days even try to define differentials loosely and at least informally point out that for differentials $dy = f'(x)\, dx$ (note that both sides of this equation act on vectors, not on points). Both $dy$ and $dx$ are perfectly well defined functions on vectors and their ratio is therefore a perfectly meaningful function on vectors. Since it is constant on fibers (minus the zero section), that well defined ratio descends to a function on the original space.
At worst one could object that the ratio $\frac{dy}{dx}$ is not defined on the zero section.
Can anything meaningful be made of higher order derivatives in the same way?
– Francis Davey
Mar 8 '15 at 10:41
You can simply mimic the procedure to get a second or third derivative. As I recall when I worked that out the same higher partial derivatives get realized in it multiple ways which is awkward. The standard approach is more direct. It is called Jets and there is currently a Wikipedia article on Jet (mathematics).
– John Robertson
May 2 '15 at 16:51
Tangent manifold is the tangent bundle. And what it means is that dy and dx are both perfectly well defined functions on the tangent manifold, so we can divide one by the other giving dy/dx. It turns out that the value of dy/dx on a given tangent vector only depends on the base point of that vector. As its value only depends on the base point, we can take dy/dx as really defining a function on original space. By way of analogy, if f(u,v) = 3*u + sin(u) + 7 then even though f is a function of both u and v, since v doesn't affect the output, we can also consider f to be a function of u alone.
– John Robertson
Jul 6 '15 at 15:53
Your answer is in the opposition with many other answers here! :) I am confused! So is it a ratio or not or both!?
– H. R.
Jul 15 '16 at 20:24
How do you simplify all this to the more special-case level of basic calculus where all spaces are Euclidean? The invocations of manifold theory suggest this is an approach that is designed for non-Euclidean geometries.
– The_Sympathizer
Feb 3 '17 at 7:19
The notation $dy/dx$ - in elementary calculus - is simply that: notation to denote the derivative of, in this case, $y$ w.r.t. $x$. (In this case $f'(x)$ is another notation to express essentially the same thing, i.e. $df(x)/dx$, where $f(x)$ signifies the function $f$ of the independent variable $x$. According to what you've written above, $f(x)$ is the function which takes values in the target space $y$.)
Furthermore, by definition, $dy/dx$ at a specific point $x_0$ within the domain is the real number $L$ given by the limit of the difference quotient at $x_0$, if it exists. Otherwise, if no such number exists, then the function $f(x)$ does not have a derivative at the point in question (i.e., in our case, $x_0$).
For further information you can read the Wikipedia article: http://en.wikipedia.org/wiki/Derivative
So glad that wikipedia finally added an entry for the derivative...
– The Chaz 2.0
Aug 10 '11 at 17:50
@Steve: I wish there were a way to collect all the comments that I make (spread across multiple forums, social media outlets, etc) and let you upvote them for humor. Most of my audience scoffs at my simplicity.
– The Chaz 2.0
Jan 27 '12 at 21:39
It is not a ratio, just as $dx$ is not a product.
I wonder what motivated the downvote. I do find it strange that students tend to confuse Leibniz's notation with a quotient, and not $dx$ (or even $\log$!) with a product: they are both indivisible notations... My answer above just makes this point.
– Mariano Suárez-Álvarez
Feb 10 '11 at 0:12
I think that the reason why this confusion arises in some students may be related to the way in which this notation is used, for instance, when calculating integrals. Even though, as you say, they are indivisible, they are separated "formally" in any calculus course in order to aid in the computation of integrals. I suppose that if the letters in $\log$ were separated in a similar way, the students would probably make the same mistake of assuming it is a product.
– Adrián Barquero
Feb 10 '11 at 4:09
I once heard a story of a university applicant, who was asked at interview to find $dy/dx$, didn't understand the question, no matter how the interviewer phrased it. It was only after the interview wrote it out that the student promptly informed the interviewer that the two $d$'s cancelled and he was in fact mistaken.
– jClark94
Jan 30 '12 at 19:54
Is this an answer??? Or just an imposition?
– André Caldas
Sep 12 '13 at 13:12
I find the statement that "students tend to confuse Leibniz's notation with a quotient" a bit problematic. The reason for this is that Leibniz certainly thought of $\frac{dy}{dx}$ as a quotient. Since it behaves as a ratio in many contexts (such as the chain rule), it may be more helpful to the student to point out that in fact the derivative can be said to be "equal" to the ratio $\frac{dy}{dx}$ if "equality" is interpreted as a more general relation of equality "up to an infinitesimal term", which is how Leibniz thought of it. I don't think this is comparable to thinking of "dx" as a product
– Mikhail Katz
Oct 2 '13 at 12:57
$\boldsymbol{\dfrac{dy}{dx}}$ is definitely not a ratio - it is the limit (if it exists) of a ratio. This is Leibniz's notation of the derivative (c. 1670), which prevailed over the one of Newton, $\dot{y}(x)$.
Still, most Engineers and even many Applied Mathematicians treat it as a ratio. A very common such case is when solving separable ODEs, i.e. equations of the form
$$
\frac{dy}{dx}=f(x)g(y),
$$
writing the above as
$$
f(x)\,dx=\frac{dy}{g(y)},
$$
and then integrating.
Clearly this is not Mathematics; it is a symbolic calculus. Why are we allowed to integrate the left hand side with respect to $x$ and the right hand side with respect to $y$? What is the meaning of that?
This procedure often leads to the right solution, but not always. For example
applying this method to the IVP
$$
\frac{dy}{dx}=y+1, \quad y(0)=-1, \qquad (\star)
$$
we get, for some constant $c$,
$$
\ln (y+1)=\int\frac{dy}{y+1} = \int dx = x+c,
$$
equivalently
$$
y(x)=\mathrm{e}^{x+c}-1.
$$
Note that it is impossible to incorporate the initial condition $y(0)=-1$, as $\mathrm{e}^{x+c}$ never vanishes. By the way, the solution of $(\star)$ is $y(x)\equiv -1$.
Even worse, consider the case
$$
y'=\frac{3y^{1/3}}{2}, \quad y(0)=0,
$$
where this symbolic calculus leads to $y^{2/3}=x$, missing, for example, the solution $y\equiv 0$.
In my opinion, Calculus should be taught rigorously, with $\delta$'s and $\varepsilon$'s. Once these are well understood, one can use such symbolic calculus, provided one is convinced under which restrictions it is indeed permitted.
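The subtlety in $(\star)$ can also be seen numerically. A crude forward-Euler sketch (step size and interval chosen arbitrarily for illustration) shows that the iterates started at $y(0)=-1$ never move, matching the constant solution that the symbolic manipulation missed:

```python
# Forward-Euler sketch for the IVP y' = y + 1, y(0) = -1.
# The right-hand side vanishes at y = -1, so every step adds exactly 0:
# the iterates stay at -1, confirming the constant solution y(x) == -1.

def euler(f, y0, x_end, n):
    """Integrate y' = f(x, y) from x = 0 to x = x_end with n Euler steps."""
    h = x_end / n
    x, y = 0.0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

y_final = euler(lambda x, y: y + 1.0, -1.0, 5.0, 10_000)
print(y_final)  # -1.0
```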
$endgroup$
12
$begingroup$
I would disagree with this to some extent, as many would write the solution as $y(x)=e^{x+c}-1 \rightarrow y(x)=e^c e^x-1 \rightarrow y(x)=Ce^x-1$ for the 'appropriate' $C$. Then we have $y(0)=Ce^0-1=-1$, implying $C=0$, avoiding the issue, which is how many introductory D.E. students would answer the question, so the issue is never noticed. But yes, $\frac{dy}{dx}$ is certainly not a ratio.
$endgroup$
– mathematics2x2life
Dec 20 '13 at 21:11
11
$begingroup$
Your example works if $dy/dx$ is handled naively as a quotient. Given $dy/dx = y+1$, we can deduce $dx = dy/(y+1)$, but as even undergraduates know, you can't divide by zero, so this is true only as long as $y+1 \ne 0$. Thus we correctly conclude that $(\star)$ has no solution such that $y+1 \ne 0$. Solving for $y+1=0$, we have $dy/dx = 0$, so $y = \int 0 \, dx = 0 + C$, and $y(0)=-1$ constrains $C=-1$.
$endgroup$
– Gilles
Aug 28 '14 at 14:47
1
$begingroup$
Since you mention Leibniz, it may be helpful to clarify that Leibniz did view $\frac{dy}{dx}$ as a ratio, for the sake of historical accuracy.
$endgroup$
– Mikhail Katz
Dec 7 '15 at 18:59
$begingroup$
+1 for the interesting IVP example, I have never noticed that subtlety.
$endgroup$
– electronpusher
Apr 22 '17 at 3:04
4
$begingroup$
You got the wrong answer because you divided by zero, not because there's anything wrong with treating the derivative as a ratio.
$endgroup$
– Toby Bartels
Apr 29 '17 at 4:20
$begingroup$
$\frac{dy}{dx}$ is not a ratio - it is a symbol used to represent a limit.
$endgroup$
6
$begingroup$
This is one possible view on $\frac{dy}{dx}$, related to the fact that the common number system does not contain infinitesimals, making it impossible to justify this symbol as a ratio in that particular framework. However, Leibniz certainly meant it to be a ratio. Furthermore, it can be justified as a ratio in modern infinitesimal theories, as mentioned in some of the other answers.
$endgroup$
– Mikhail Katz
Nov 17 '13 at 14:57
$begingroup$
In most formulations, $\frac{dy}{dx}$ cannot be interpreted as a ratio, as $dx$ and $dy$ do not actually exist in them. An exception to this is shown in this book. How it works, as Arturo said, is that we allow infinitesimals (by using the hyperreal number system). It is well formulated, and I prefer it to limit notions, as this is how calculus was invented; it's just that it couldn't be formulated correctly back then. I will give a slightly simplified example. Let us say you are differentiating $y=x^2$. Now let $dx$ be an arbitrary infinitesimal (the result is the same no matter which you choose, if your function is differentiable at that point). $$dy=(x+dx)^2-x^2$$
$$dy=2x\times dx+dx^2$$
Now when we take the ratio, it is:
$$\frac{dy}{dx}=2x+dx$$
(Note: actually, $\frac{\Delta y}{\Delta x}$ is what we found in the beginning, and $dy$ is defined so that $\frac{dy}{dx}$ is $\frac{\Delta y}{\Delta x}$ rounded to the nearest real number.)
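This computation can be sketched in code with dual numbers, a first-order cousin of the hyperreals in which the infinitesimal satisfies $\varepsilon^2=0$ (so the "rounding" step above happens automatically). This is an illustrative toy under that assumption, not the hyperreal construction itself, and the class name and API below are made up for the example:

```python
class Dual:
    """Numbers a + b*eps with eps**2 == 0 (a toy infinitesimal model)."""
    def __init__(self, real, infinitesimal=0.0):
        self.real = real
        self.inf = infinitesimal

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.real + other.real, self.inf + other.inf)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps^2 = 0
        return Dual(self.real * other.real,
                    self.real * other.inf + self.inf * other.real)

def derivative(f, x):
    """The 'ratio' dy/dx: the eps-coefficient of f(x + eps)."""
    return f(Dual(x, 1.0)).inf

print(derivative(lambda t: t * t, 3.0))  # 2*x at x = 3 -> 6.0
```

For $y=x^2$ the eps-coefficient of $(x+\varepsilon)^2 = x^2 + 2x\varepsilon$ is exactly the $2x$ of the answer, with the $dx^2$ term discarded by the $\varepsilon^2=0$ rule.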
$endgroup$
$begingroup$
So, your example is still incomplete. To complete it, you should either take the limit as $dx\to0$, or take the standard part of the RHS if you treat $dx$ as an infinitesimal instead of as $\varepsilon$.
$endgroup$
– Ruslan
May 8 '18 at 19:49
$begingroup$
It may be of interest to record Russell's views of the matter:
Leibniz's belief that the Calculus had philosophical importance is now known to be erroneous: there are no infinitesimals in it, and $dx$ and $dy$ are not numerator and denominator of a fraction. (Bertrand Russell, RECENT WORK ON THE PHILOSOPHY OF LEIBNIZ. Mind, 1903).
$endgroup$
5
$begingroup$
So yet another error made by Russell, is what you're saying?
$endgroup$
– Toby Bartels
Apr 29 '17 at 4:24
$begingroup$
Yes, an error indeed, and one that he elaborated on in embarrassing detail in his Principles of Mathematics. @TobyBartels
$endgroup$
– Mikhail Katz
Apr 30 '17 at 12:20
$begingroup$
I realize this is an old post, but I think it's worthwhile to point out that in the so-called quantum calculus, $\frac{dy}{dx}$ $is$ a ratio. The subject starts off immediately by saying this is a ratio, by defining differentials and then calling derivatives a ratio of differentials:
The $q-$differential is defined as
$$d_q f(x) = f(qx) - f(x)$$
and the $h-$differential as
$$d_h f(x) = f(x+h) - f(x)$$
It follows that $d_q x = (q-1)x$ and $d_h x = h$.
From here, we go on to define the $q-$derivative and $h-$derivative, respectively:
$$D_q f(x) = \frac{d_q f(x)}{d_q x} = \frac{f(qx) - f(x)}{(q-1)x}$$
$$D_h f(x) = \frac{d_h f(x)}{d_h x} = \frac{f(x+h) - f(x)}{h}$$
Notice that
$$\lim_{q \to 1} D_q f(x) = \lim_{h\to 0} D_h f(x) = \frac{df(x)}{dx} \neq \text{a ratio}$$
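These two definitions are easy to play with numerically. The sketch below (function and sample point chosen arbitrarily) evaluates both finite "ratio" derivatives for $f(x)=x^2$; as $q\to1$ and $h\to0$ both tend to the ordinary derivative $2x$, which is where the ratio character is lost:

```python
def D_q(f, x, q):
    """q-derivative: the ratio d_q f(x) / d_q x = (f(qx) - f(x)) / ((q-1)x)."""
    return (f(q * x) - f(x)) / ((q - 1) * x)

def D_h(f, x, h):
    """h-derivative: the ratio d_h f(x) / d_h x = (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

f = lambda x: x * x   # f(x) = x^2, so f'(x) = 2x

# Both are genuine ratios; for f(x) = x^2 they come out exactly:
#   D_q f(x) = (q+1)x,   D_h f(x) = 2x + h
print(D_q(f, 3.0, 1.5))   # (1.5 + 1)*3 = 7.5
print(D_h(f, 3.0, 0.5))   # 2*3 + 0.5 = 6.5
```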
$endgroup$
$begingroup$
I just want to point out that @Yiorgos S. Smyrlis did already state that dy/dx is not a ratio, but a limit of a ratio (if it exists). I only included my response because this subject seems interesting (I don't think many have heard of it) and in this subject we work in the confines of it being a ratio... but certainly the limit is not really a ratio.
$endgroup$
– Squirtle
Dec 28 '13 at 1:54
$begingroup$
You start out saying that it is a ratio and then end up saying that it is not a ratio. It's interesting that you can define it as a limit of ratios in two different ways, but you've still only given it as a limit of ratios, not as a ratio directly.
$endgroup$
– Toby Bartels
Apr 29 '17 at 4:40
2
$begingroup$
I guess you mean to say that the q-derivative and h-derivative are ratios; that the usual derivative may be recovered as limits of these is secondary to your point.
$endgroup$
– Toby Bartels
May 2 '17 at 21:55
$begingroup$
Yes, that is precisely my point.
$endgroup$
– Squirtle
Feb 23 '18 at 4:04
$begingroup$
Anything that can be said in mathematics can be said in at least 3 different ways... everything about differentiation/derivatives depends on the meaning that is attached to the word TANGENT.
It is agreed that the derivative is the "gradient function" for tangents (at a point); and spatially (geometrically) the gradient of a tangent is the "ratio" ("fraction" would be better) of the y-distance to the x-distance.
Similar obscurities occur when "spatial" and "algebraic" are notationally confused... some people take the word "vector" to mean a track!
$endgroup$
2
$begingroup$
According to John Robinson (2 days ago) vectors... elements (points) of vector spaces are different from points
$endgroup$
– kozenko
May 3 '14 at 3:43
$begingroup$
Assuming you're happy with $dy/dx$, when it becomes $\ldots dy$ and $\ldots dx$ it means that what precedes $dy$ in terms of $y$ is equal to what precedes $dx$ in terms of $x$.
"in terms of" = "with reference to".
That is, if "$a \frac{dy}{dx} = b$", then it follows that "$a$ with reference to $y$ = $b$ with reference to $x$". If the equation has all the terms with $y$ on the left and all with $x$ on the right, then you've got to a good place to continue.
The phrase "it follows that" means you haven't really moved $dx$ as in algebra. It now has a different meaning which is also true.
$endgroup$
$begingroup$
To ask "Is $\frac{dy}{dx}$ a ratio or isn't it?" is like asking "Is $\sqrt 2$ a number or isn't it?" The answer depends on what you mean by "number". $\sqrt 2$ is not an Integer or a Rational number, so if that's what you mean by "number", then the answer is "No, $\sqrt 2$ is not a number."
However, the Real numbers are an extension of the Rational numbers that includes irrational numbers such as $\sqrt 2$, and so, in this set of numbers, $\sqrt 2$ is a number.
In the same way, a differential such as $dx$ is not a Real number, but it is possible to extend the Real numbers to include infinitesimals, and, if you do that, then $\frac{dy}{dx}$ is truly a ratio.
When a professor tells you that $dx$ by itself is meaningless, or that $\frac{dy}{dx}$ is not a ratio, they are correct, in terms of "normal" number systems such as the Real or Complex systems, which are the number systems typically used in science, engineering and even mathematics. Infinitesimals can be placed on a rigorous footing, but sometimes at the cost of surrendering some important properties of the numbers we rely on for everyday science.
See https://en.wikipedia.org/wiki/Infinitesimal#Number_systems_that_include_infinitesimals for a discussion of number systems that include infinitesimals.
$endgroup$
$begingroup$
I am going to join @Jesse Madnick here, and try to interpret $\frac{dy}{dx}$ as a ratio. The idea is: let's interpret $dx$ and $dy$ as functions on $T\mathbb{R}^2$, as if they were differential forms. For each tangent vector $v$, set $dx(v):=v(x)$. If we identify $T\mathbb{R}^2$ with $\mathbb{R}^4$, we get that $(x,y,dx,dy)$ is just the canonical coordinate system for $\mathbb{R}^4$. If we exclude the points where $dx=0$, then $\frac{dy}{dx} = 2x$ is a perfectly healthy equation, and its solutions form a subset of $\mathbb{R}^4$.
Let's see if it makes any sense. If we fix $x$ and $y$, the solutions form a straight line through the origin of the tangent space at $(x,y)$, and its slope is $2x$. So, the set of all solutions is a distribution, and the integral manifolds happen to be the parabolas $y=x^2+c$. Exactly the solutions of the differential equation that we would write as $\frac{dy}{dx} = 2x$. Of course, we can write it as $dy = 2x\,dx$ as well. I think this is at least a little bit interesting. Any thoughts?
$endgroup$
$begingroup$
There are many answers here, but the simplest seems to be missing. So here it is:
Yes, it is a ratio, for exactly the reason that you said in your question.
$endgroup$
1
$begingroup$
Some other people have already more-or-less given this answer, but then go into more detail about how it fits into tangent spaces in higher dimensions and whatnot. That is all very interesting, of course, but it may give the impression that the development of the derivative as a ratio that appears in the original question is not enough by itself. But it is enough.
$endgroup$
– Toby Bartels
Apr 29 '17 at 4:50
1
$begingroup$
Nonstandard analysis, while providing an interesting perspective and being closer to what Leibniz himself was thinking, is also not necessary for this. The definition of differential that is cited in the question is not infinitesimal, but it still makes the derivative into a ratio of differentials.
$endgroup$
– Toby Bartels
Apr 29 '17 at 4:50
$begingroup$
The derivative $\frac{dy}{dx}$ is not a ratio, but rather a representation of a ratio within a limit.
Similarly, $dx$ is a representation of $\Delta x$ inside a limit with interaction. This interaction can be in the form of multiplication, division, etc., with other things inside the same limit.
This interaction inside the limit is what makes the difference. You see, the limit of a ratio is not necessarily the ratio of the limits, and that is one example of why the interaction is considered to be inside the limit. This limit is hidden or left out in the shorthand notation that Leibniz invented.
The simple fact is that most of calculus is a shorthand representation of something else. This shorthand notation allows us to calculate things more quickly and it looks better than what it is actually representative of. The problem comes in when people expect this notation to act like actual maths, which it can't because it is just a representation of actual maths.
So, in order to see the underlying properties of calculus, we always have to convert it to the actual mathematical form and then analyze it from there. Then by memorization of basic properties and combinations of these different properties we can derive even more properties.
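The point that the limit of a ratio is not the ratio of the limits can be made concrete: in the difference quotient, both numerator and denominator tend to $0$, so the naive "ratio of the limits" is the undefined $0/0$, while the limit of the ratio exists. A small sketch, with the function and point chosen arbitrarily:

```python
# For f(x) = x^2 at x = 3: the numerator f(3+h) - f(3) -> 0 and the
# denominator h -> 0, yet the ratio (f(3+h) - f(3)) / h -> 6.
f = lambda x: x * x

for h in (0.1, 0.01, 0.001):
    num = f(3 + h) - f(3)
    print(h, num, num / h)   # numerator shrinks, ratio approaches 6
```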
$endgroup$
protected by davidlowryduda♦ Mar 23 '14 at 18:41
20 Answers
20
active
oldest
votes
$begingroup$
Historically, when Leibniz conceived of the notation, $\frac{dy}{dx}$ was supposed to be a quotient: it was the quotient of the "infinitesimal change in $y$ produced by the change in $x$" divided by the "infinitesimal change in $x$".
However, the formulation of calculus with infinitesimals in the usual setting of the real numbers leads to a lot of problems. For one thing, infinitesimals can't exist in the usual setting of real numbers! Because the real numbers satisfy an important property, called the Archimedean Property: given any positive real number $\epsilon\gt 0$, no matter how small, and given any positive real number $M\gt 0$, no matter how big, there exists a natural number $n$ such that $n\epsilon\gt M$. But an "infinitesimal" $\xi$ is supposed to be so small that no matter how many times you add it to itself, it never gets to $1$, contradicting the Archimedean Property. Other problems: Leibniz defined the tangent to the graph of $y=f(x)$ at $x=a$ by saying "Take the point $(a,f(a))$; then add an infinitesimal amount to $a$, $a+dx$, and take the point $(a+dx,f(a+dx))$, and draw the line through those two points." But if they are two different points on the graph, then it's not a tangent, and if it's just one point, then you can't define the line because you just have one point. That's just two of the problems with infinitesimals. (See below where it says "However...", though.)
So Calculus was essentially rewritten from the ground up in the following 200 years to avoid these problems, and you are seeing the results of that rewriting (that's where limits came from, for instance). Because of that rewriting, the derivative is no longer a quotient, now it's a limit:
$$\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}.$$
And because we cannot express this limit-of-a-quotient as a-quotient-of-the-limits (both numerator and denominator go to zero), then the derivative is not a quotient.
However, Leibniz's notation is very suggestive and very useful; even though derivatives are not really quotients, in many ways they behave as if they were quotients. So we have the Chain Rule:
$$\frac{dy}{dx} = \frac{dy}{du}\;\frac{du}{dx}$$
which looks very natural if you think of the derivatives as "fractions". You have the Inverse Function theorem, which tells you that
$$\frac{dx}{dy} = \frac{1}{\quad\frac{dy}{dx}\quad},$$
which is again almost "obvious" if you think of the derivatives as fractions. So, because the notation is so nice and so suggestive, we keep the notation even though the notation no longer represents an actual quotient, it now represents a single limit. In fact, Leibniz's notation is so good, so superior to the prime notation and to Newton's notation, that England fell behind all of Europe for centuries in mathematics and science because, due to the fight between Newton's and Leibniz's camp over who had invented Calculus and who stole it from whom (consensus is that they each discovered it independently), England's scientific establishment decided to ignore what was being done in Europe with Leibniz notation and stuck to Newton's... and got stuck in the mud in large part because of it.
(Differentials are part of this same issue: originally, $dy$ and $dx$ really did mean the same thing as those symbols do in $frac{dy}{dx}$, but that leads to all sorts of logical problems, so they no longer mean the same thing, even though they behave as if they did.)
So, even though we write $\frac{dy}{dx}$ as if it were a fraction, and many computations look like we are working with it like a fraction, it isn't really a fraction (it just plays one on television).
However... There is a way of getting around the logical difficulties with infinitesimals; this is called nonstandard analysis. It's pretty difficult to explain how one sets it up, but you can think of it as creating two classes of real numbers: the ones you are familiar with, that satisfy things like the Archimedean Property, the Supremum Property, and so on, and then you add another, separate class of real numbers that includes infinitesimals and a bunch of other things. If you do that, then you can, if you are careful, define derivatives exactly like Leibniz, in terms of infinitesimals and actual quotients; if you do that, then all the rules of Calculus that make use of $\frac{dy}{dx}$ as if it were a fraction are justified because, in that setting, it is a fraction. Still, one has to be careful because you have to keep infinitesimals and regular real numbers separate and not let them get confused, or you can run into some serious problems.
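The "behaves like a fraction" point can be illustrated numerically. The sketch below (functions, point, and step size chosen arbitrarily) compares the difference quotient of a composite $y(u(x))$ with the product of the two inner difference quotients; they agree only up to terms of order $h$, which is why the chain rule looks like cancellation of $du$ but is not literally one:

```python
import math

# y = sin(u), u = x^2, so dy/dx = cos(x^2) * 2x by the chain rule.
u = lambda x: x * x
y = lambda u_val: math.sin(u_val)

def quotient(f, t, h):
    """Difference quotient (f(t+h) - f(t)) / h -- an honest ratio."""
    return (f(t + h) - f(t)) / h

x, h = 1.3, 1e-6
dy_dx = quotient(lambda t: y(u(t)), x, h)   # quotient of the composite
dy_du = quotient(y, u(x), h)                # "dy/du" as a quotient
du_dx = quotient(u, x, h)                   # "du/dx" as a quotient
exact = math.cos(x * x) * 2 * x

# Both the composite quotient and the product of quotients match the
# chain-rule value only up to a discrepancy of order h:
print(abs(dy_dx - dy_du * du_dx))
print(abs(dy_dx - exact))
```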
$endgroup$
166
$begingroup$
As a physicist, I prefer Leibniz notation simply because it is dimensionally correct regardless of whether it is derived from the limit or from nonstandard analysis. With Newtonian notation, you cannot automatically tell what the units of $y'$ are.
$endgroup$
– rcollyer
Mar 10 '11 at 16:34
30
$begingroup$
Have you any evidence for your claim that "England fell behind Europe for centuries"?
$endgroup$
– Kevin H. Lin
Mar 21 '11 at 22:05
103
$begingroup$
@Kevin: Look at the history of math. Shortly after Newton and his students (Maclaurin, Taylor), all the developments in mathematics came from the Continent. It was the Bernoullis, Euler, who developed Calculus, not the British. It wasn't until Hamilton that they started coming back, and when they reformed math teaching in Oxford and Cambridge, they adopted the continental ideas and notation.
$endgroup$
– Arturo Magidin
Mar 22 '11 at 1:42
36
$begingroup$
Mathematics really did not have a firm hold in England. It was the Physics of Newton that was admired. Unlike in parts of the Continent, mathematics was not thought of as a serious calling. So the "best" people did other things.
$endgroup$
– André Nicolas
Jun 20 '11 at 19:02
30
$begingroup$
There's a free calculus textbook for beginning calculus students based on the nonstandard analysis approach here. Also there is a monograph on infinitesimal calculus aimed at mathematicians and at instructors who might be using the aforementioned book.
$endgroup$
– tzs
Jul 6 '11 at 16:24
|
show 16 more comments
$begingroup$
Historically, when Leibniz conceived of the notation, $frac{dy}{dx}$ was supposed to be a quotient: it was the quotient of the "infinitesimal change in $y$ produced by the change in $x$" divided by the "infinitesimal change in $x$".
However, the formulation of calculus with infinitesimals in the usual setting of the real numbers leads to a lot of problems. For one thing, infinitesimals can't exist in the usual setting of real numbers! Because the real numbers satisfy an important property, called the Archimedean Property: given any positive real number $epsilongt 0$, no matter how small, and given any positive real number $Mgt 0$, no matter how big, there exists a natural number $n$ such that $nepsilongt M$. But an "infinitesimal" $xi$ is supposed to be so small that no matter how many times you add it to itself, it never gets to $1$, contradicting the Archimedean Property. Other problems: Leibniz defined the tangent to the graph of $y=f(x)$ at $x=a$ by saying "Take the point $(a,f(a))$; then add an infinitesimal amount to $a$, $a+dx$, and take the point $(a+dx,f(a+dx))$, and draw the line through those two points." But if they are two different points on the graph, then it's not a tangent, and if it's just one point, then you can't define the line because you just have one point. That's just two of the problems with infinitesimals. (See below where it says "However...", though.)
So Calculus was essentially rewritten from the ground up in the following 200 years to avoid these problems, and you are seeing the results of that rewriting (that's where limits came from, for instance). Because of that rewriting, the derivative is no longer a quotient, now it's a limit:
$$lim_{hto0 }frac{f(x+h)-f(x)}{h}.$$
And because we cannot express this limit-of-a-quotient as a-quotient-of-the-limits (both numerator and denominator go to zero), then the derivative is not a quotient.
However, Leibniz's notation is very suggestive and very useful; even though derivatives are not really quotients, in many ways they behave as if they were quotients. So we have the Chain Rule:
$$frac{dy}{dx} = frac{dy}{du};frac{du}{dx}$$
which looks very natural if you think of the derivatives as "fractions". You have the Inverse Function theorem, which tells you that
$$frac{dx}{dy} = frac{1}{quadfrac{dy}{dx}quad},$$
which is again almost "obvious" if you think of the derivatives as fractions. So, because the notation is so nice and so suggestive, we keep the notation even though the notation no longer represents an actual quotient, it now represents a single limit. In fact, Leibniz's notation is so good, so superior to the prime notation and to Newton's notation, that England fell behind all of Europe for centuries in mathematics and science because, due to the fight between Newton's and Leibniz's camp over who had invented Calculus and who stole it from whom (consensus is that they each discovered it independently), England's scientific establishment decided to ignore what was being done in Europe with Leibniz notation and stuck to Newton's... and got stuck in the mud in large part because of it.
(Differentials are part of this same issue: originally, $dy$ and $dx$ really did mean the same thing as those symbols do in $frac{dy}{dx}$, but that leads to all sorts of logical problems, so they no longer mean the same thing, even though they behave as if they did.)
So, even though we write $frac{dy}{dx}$ as if it were a fraction, and many computations look like we are working with it like a fraction, it isn't really a fraction (it just plays one on television).
However... There is a way of getting around the logical difficulties with infinitesimals; this is called nonstandard analysis. It's pretty difficult to explain how one sets it up, but you can think of it as creating two classes of real numbers: the ones you are familiar with, that satisfy things like the Archimedean Property, the Supremum Property, and so on, and then you add another, separate class of real numbers that includes infinitesimals and a bunch of other things. If you do that, then you can, if you are careful, define derivatives exactly like Leibniz, in terms of infinitesimals and actual quotients; if you do that, then all the rules of Calculus that make use of $frac{dy}{dx}$ as if it were a fraction are justified because, in that setting, it is a fraction. Still, one has to be careful because you have to keep infinitesimals and regular real numbers separate and not let them get confused, or you can run into some serious problems.
$endgroup$
166
$begingroup$
As a physicist, I prefer Leibniz notation simply because it is dimensionally correct regardless of whether it is derived from the limit or from nonstandard analysis. With Newtonian notation, you cannot automatically tell what the units of $y'$ are.
$endgroup$
– rcollyer
Mar 10 '11 at 16:34
30
$begingroup$
Have you any evidence for your claim that "England fell behind Europe for centuries"?
$endgroup$
– Kevin H. Lin
Mar 21 '11 at 22:05
103
$begingroup$
@Kevin: Look at the history of math. Shortly after Newton and his students (Maclaurin, Taylor), all the developments in mathematics came from the Continent. It was the Bernoullis, Euler, who developed Calculus, not the British. It wasn't until Hamilton that they started coming back, and when they reformed math teaching in Oxford and Cambridge, they adopted the continental ideas and notation.
$endgroup$
– Arturo Magidin
Mar 22 '11 at 1:42
36
$begingroup$
Mathematics really did not have a firm hold in England. It was the Physics of Newton that was admired. Unlike in parts of the Continent, mathematics was not thought of as a serious calling. So the "best" people did other things.
$endgroup$
– André Nicolas
Jun 20 '11 at 19:02
30
$begingroup$
There's a free calculus textbook for beginning calculus students based on the nonstandard analysis approach here. Also there is a monograph on infinitesimal calculus aimed at mathematicians and at instructors who might be using the aforementioned book.
$endgroup$
– tzs
Jul 6 '11 at 16:24
|
show 16 more comments
$begingroup$
Historically, when Leibniz conceived of the notation, $frac{dy}{dx}$ was supposed to be a quotient: it was the quotient of the "infinitesimal change in $y$ produced by the change in $x$" divided by the "infinitesimal change in $x$".
However, the formulation of calculus with infinitesimals in the usual setting of the real numbers leads to a lot of problems. For one thing, infinitesimals can't exist in the usual setting of real numbers! Because the real numbers satisfy an important property, called the Archimedean Property: given any positive real number $epsilongt 0$, no matter how small, and given any positive real number $Mgt 0$, no matter how big, there exists a natural number $n$ such that $nepsilongt M$. But an "infinitesimal" $xi$ is supposed to be so small that no matter how many times you add it to itself, it never gets to $1$, contradicting the Archimedean Property. Other problems: Leibniz defined the tangent to the graph of $y=f(x)$ at $x=a$ by saying "Take the point $(a,f(a))$; then add an infinitesimal amount to $a$, $a+dx$, and take the point $(a+dx,f(a+dx))$, and draw the line through those two points." But if they are two different points on the graph, then it's not a tangent, and if it's just one point, then you can't define the line because you just have one point. That's just two of the problems with infinitesimals. (See below where it says "However...", though.)
So Calculus was essentially rewritten from the ground up in the following 200 years to avoid these problems, and you are seeing the results of that rewriting (that's where limits came from, for instance). Because of that rewriting, the derivative is no longer a quotient, now it's a limit:
$$lim_{hto0 }frac{f(x+h)-f(x)}{h}.$$
And because we cannot express this limit-of-a-quotient as a-quotient-of-the-limits (both numerator and denominator go to zero), then the derivative is not a quotient.
However, Leibniz's notation is very suggestive and very useful; even though derivatives are not really quotients, in many ways they behave as if they were quotients. So we have the Chain Rule:
$$frac{dy}{dx} = frac{dy}{du};frac{du}{dx}$$
which looks very natural if you think of the derivatives as "fractions". You have the Inverse Function theorem, which tells you that
$$frac{dx}{dy} = frac{1}{quadfrac{dy}{dx}quad},$$
which is again almost "obvious" if you think of the derivatives as fractions. So, because the notation is so nice and so suggestive, we keep the notation even though the notation no longer represents an actual quotient, it now represents a single limit. In fact, Leibniz's notation is so good, so superior to the prime notation and to Newton's notation, that England fell behind all of Europe for centuries in mathematics and science because, due to the fight between Newton's and Leibniz's camp over who had invented Calculus and who stole it from whom (consensus is that they each discovered it independently), England's scientific establishment decided to ignore what was being done in Europe with Leibniz notation and stuck to Newton's... and got stuck in the mud in large part because of it.
(Differentials are part of this same issue: originally, $dy$ and $dx$ really did mean the same thing as those symbols do in $frac{dy}{dx}$, but that leads to all sorts of logical problems, so they no longer mean the same thing, even though they behave as if they did.)
So, even though we write $frac{dy}{dx}$ as if it were a fraction, and many computations look like we are working with it like a fraction, it isn't really a fraction (it just plays one on television).
However... There is a way of getting around the logical difficulties with infinitesimals; this is called nonstandard analysis. It's pretty difficult to explain how one sets it up, but you can think of it as creating two classes of real numbers: the ones you are familiar with, that satisfy things like the Archimedean Property, the Supremum Property, and so on, and then you add another, separate class of real numbers that includes infinitesimals and a bunch of other things. If you do that, then you can, if you are careful, define derivatives exactly like Leibniz, in terms of infinitesimals and actual quotients; if you do that, then all the rules of Calculus that make use of $frac{dy}{dx}$ as if it were a fraction are justified because, in that setting, it is a fraction. Still, one has to be careful because you have to keep infinitesimals and regular real numbers separate and not let them get confused, or you can run into some serious problems.
$endgroup$
Historically, when Leibniz conceived of the notation, $frac{dy}{dx}$ was supposed to be a quotient: it was the quotient of the "infinitesimal change in $y$ produced by the change in $x$" divided by the "infinitesimal change in $x$".
However, the formulation of calculus with infinitesimals in the usual setting of the real numbers leads to a lot of problems. For one thing, infinitesimals can't exist in the usual setting of real numbers! Because the real numbers satisfy an important property, called the Archimedean Property: given any positive real number $epsilongt 0$, no matter how small, and given any positive real number $Mgt 0$, no matter how big, there exists a natural number $n$ such that $nepsilongt M$. But an "infinitesimal" $xi$ is supposed to be so small that no matter how many times you add it to itself, it never gets to $1$, contradicting the Archimedean Property. Other problems: Leibniz defined the tangent to the graph of $y=f(x)$ at $x=a$ by saying "Take the point $(a,f(a))$; then add an infinitesimal amount to $a$, $a+dx$, and take the point $(a+dx,f(a+dx))$, and draw the line through those two points." But if they are two different points on the graph, then it's not a tangent, and if it's just one point, then you can't define the line because you just have one point. That's just two of the problems with infinitesimals. (See below where it says "However...", though.)
So Calculus was essentially rewritten from the ground up in the following 200 years to avoid these problems, and you are seeing the results of that rewriting (that's where limits came from, for instance). Because of that rewriting, the derivative is no longer a quotient, now it's a limit:
$$lim_{hto0 }frac{f(x+h)-f(x)}{h}.$$
And because we cannot express this limit-of-a-quotient as a-quotient-of-the-limits (both numerator and denominator go to zero), then the derivative is not a quotient.
However, Leibniz's notation is very suggestive and very useful; even though derivatives are not really quotients, in many ways they behave as if they were. So we have the Chain Rule:
$$\frac{dy}{dx} = \frac{dy}{du}\;\frac{du}{dx},$$
which looks very natural if you think of the derivatives as "fractions". You have the Inverse Function Theorem, which tells you that
$$\frac{dx}{dy} = \frac{1}{\quad\frac{dy}{dx}\quad},$$
which is again almost "obvious" if you think of the derivatives as fractions. So, because the notation is so nice and so suggestive, we keep it even though it no longer represents an actual quotient; it now represents a single limit. In fact, Leibniz's notation is so good, so superior to the prime notation and to Newton's notation, that England fell behind all of Europe for centuries in mathematics and science because, owing to the fight between Newton's and Leibniz's camps over who had invented Calculus and who stole it from whom (the consensus is that each discovered it independently), England's scientific establishment decided to ignore what was being done in Europe with Leibniz's notation and stuck to Newton's... and got stuck in the mud in large part because of it.
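Both rules can be checked numerically; here is a minimal sketch (the particular functions $y = \sin(x^2)$ and the helper `d` are my own illustrative choices): the Chain Rule and the Inverse Function Theorem hold even though no actual fraction of infinitesimals is being manipulated.

```python
import math

def d(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

u = lambda x: x ** 2                 # inner function
y_of_x = lambda x: math.sin(x ** 2)  # y as a function of x

x0 = 1.3
dy_dx = d(y_of_x, x0)
chain = d(math.sin, u(x0)) * d(u, x0)  # (dy/du) * (du/dx)
print(dy_dx, chain)                    # the two agree closely

# Inverse Function Theorem on the monotone branch x = sqrt(y), y = x^2:
dx_dy = d(math.sqrt, u(x0))
print(dx_dy, 1 / d(u, x0))             # again the two agree
```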
(Differentials are part of this same issue: originally, $dy$ and $dx$ really did mean the same thing as those symbols do in $\frac{dy}{dx}$, but that leads to all sorts of logical problems, so they no longer mean the same thing, even though they behave as if they did.)
So, even though we write $\frac{dy}{dx}$ as if it were a fraction, and many computations look like we are working with it like a fraction, it isn't really a fraction (it just plays one on television).
However... There is a way of getting around the logical difficulties with infinitesimals; this is called nonstandard analysis. It's pretty difficult to explain how one sets it up, but you can think of it as creating two classes of real numbers: the ones you are familiar with, which satisfy things like the Archimedean Property, the Supremum Property, and so on, and then a separate class of numbers that includes infinitesimals and a bunch of other things. If you do that, then you can, if you are careful, define derivatives exactly like Leibniz, in terms of infinitesimals and actual quotients; and if you do that, then all the rules of Calculus that use $\frac{dy}{dx}$ as if it were a fraction are justified, because in that setting it is a fraction. Still, one has to be careful to keep infinitesimals and regular real numbers separate and not let them get confused, or you can run into some serious problems.
edited Sep 15 '17 at 20:01 by Xam; answered Feb 9 '11 at 17:05 by Arturo Magidin
As a physicist, I prefer Leibniz notation simply because it is dimensionally correct regardless of whether it is derived from the limit or from nonstandard analysis. With Newtonian notation, you cannot automatically tell what the units of $y'$ are.
– rcollyer, Mar 10 '11 at 16:34
Have you any evidence for your claim that "England fell behind Europe for centuries"?
– Kevin H. Lin, Mar 21 '11 at 22:05
@Kevin: Look at the history of math. Shortly after Newton and his students (Maclaurin, Taylor), all the developments in mathematics came from the Continent. It was the Bernoullis and Euler who developed Calculus, not the British. It wasn't until Hamilton that they started coming back, and when they reformed math teaching in Oxford and Cambridge, they adopted the continental ideas and notation.
– Arturo Magidin, Mar 22 '11 at 1:42
Mathematics really did not have a firm hold in England. It was the Physics of Newton that was admired. Unlike in parts of the Continent, mathematics was not thought of as a serious calling. So the "best" people did other things.
– André Nicolas, Jun 20 '11 at 19:02
There's a free calculus textbook for beginning calculus students based on the nonstandard analysis approach here. Also there is a monograph on infinitesimal calculus aimed at mathematicians and at instructors who might be using the aforementioned book.
– tzs, Jul 6 '11 at 16:24
Just to add some variety to the list of answers, I'm going to go against the grain here and say that you can, in an albeit silly way, interpret $dy/dx$ as a ratio of real numbers.
For every (differentiable) function $f$, we can define a function $df(x; dx)$ of two real variables $x$ and $dx$ via $$df(x; dx) = f'(x)\,dx.$$
Here, $dx$ is just a real number, and no more. (In particular, it is not a differential $1$-form, nor an infinitesimal.) So, when $dx \neq 0$, we can write:
$$\frac{df(x;dx)}{dx} = f'(x).$$
All of this, however, should come with a few remarks.
It is clear that these notations above do not constitute a definition of the derivative of $f$. Indeed, we needed to know what the derivative $f'$ meant before defining the function $df$. So in some sense, it's just a clever choice of notation.
But if it's just a trick of notation, why do I mention it at all? The reason is that in higher dimensions, the function $df(x;dx)$ actually becomes the focus of study, in part because it contains information about all the partial derivatives.
To be more concrete, for multivariable functions $f\colon \mathbb{R}^n \to \mathbb{R}$, we can define a function $df(x;dx)$ of two $n$-dimensional variables $x, dx \in \mathbb{R}^n$ via
$$df(x;dx) = df(x_1,\ldots,x_n; dx_1, \ldots, dx_n) = \frac{\partial f}{\partial x_1}dx_1 + \ldots + \frac{\partial f}{\partial x_n}dx_n.$$
Notice that this map $df$ is linear in the variable $dx$. That is, we can write:
$$df(x;dx) = \left(\frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_n}\right)
\begin{pmatrix}
dx_1 \\
\vdots \\
dx_n
\end{pmatrix}
= A(dx),$$
where $A$ is the $1\times n$ row matrix of partial derivatives.
In other words, the function $df(x;dx)$ can be thought of as a linear function of $dx$, whose matrix has variable coefficients (depending on $x$).
So for the $1$-dimensional case, what is really going on is a trick of dimension. That is, we have the variable $1\times 1$ matrix ($f'(x)$) acting on the vector $dx \in \mathbb{R}^1$ -- and it just so happens that vectors in $\mathbb{R}^1$ can be identified with scalars, and so can be divided.
Finally, I should mention that, as long as we are thinking of $dx$ as a real number, mathematicians multiply and divide by $dx$ all the time -- it's just that they'll usually use another notation. The letter "$h$" is often used in this context, so we usually write $$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h},$$
rather than, say,
$$f'(x) = \lim_{dx \to 0} \frac{f(x+dx) - f(x)}{dx}.$$
My guess is that the main aversion to writing $dx$ is that it conflicts with our notation for differential $1$-forms.
EDIT: Just to be even more technical, and at the risk of being confusing to some, we really shouldn't even be regarding $dx$ as an element of $\mathbb{R}^n$, but rather as an element of the tangent space $T_x\mathbb{R}^n$. Again, it just so happens that we have a canonical identification between $T_x\mathbb{R}^n$ and $\mathbb{R}^n$ which makes all of the above okay, but I like the distinction between tangent space and Euclidean space because it highlights the different roles played by $x \in \mathbb{R}^n$ and $dx \in T_x\mathbb{R}^n$.
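The construction in this answer can be sketched in a few lines of code (a minimal illustration under my own choice of $f$ and names, not part of the answer): $df(x;dx)$ is just the row of partials at $x$ applied to the displacement $dx$, linear in $dx$.

```python
# df(x; dx): a function of a point x and a displacement dx, linear in dx.
def df(grad_f, x, dx):
    # A(dx): the 1 x n row matrix of partials applied to the column vector dx
    return sum(g * d for g, d in zip(grad_f(x), dx))

# Illustrative f(x1, x2) = x1^2 + 3*x1*x2, so grad f = (2*x1 + 3*x2, 3*x1)
grad = lambda x: (2 * x[0] + 3 * x[1], 3 * x[0])

print(df(grad, (1.0, 2.0), (0.1, 0.0)))  # displacement only in x1: 8 * 0.1
# In one dimension the same recipe gives df(x; dx) = f'(x) * dx, so the
# "ratio" df(x; dx) / dx really is f'(x) whenever dx != 0.
```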
Cotangent space. Also, in the case of multiple variables, if you fix $dx^2,\ldots,dx^n = 0$ you can still divide by $dx^1$ and get the derivative. And nothing stops you from defining the differential first and then defining derivatives as its coefficients.
– Alexei Averchenko, Feb 11 '11 at 2:30
Well, canonically differentials are members of the cotangent bundle, and $dx$ is in this case its basis.
– Alexei Averchenko, Feb 11 '11 at 5:54
Maybe I'm misunderstanding you, but I never made any reference to differentials in my post. My point is that $df(x;dx)$ can be likened to the pushforward map $f_*$. Of course one can also make an analogy with the actual differential $1$-form $df$, but that's something separate.
– Jesse Madnick, Feb 11 '11 at 6:18
Sure the input is a vector; that's why these linearizations are called covectors, which are members of the cotangent space. I can't see why you are bringing up pushforwards when there is a better description right there.
– Alexei Averchenko, Feb 11 '11 at 9:12
I just don't think the pushforward is the best way to view the differential of a function with codomain $\mathbb{R}$ (although it is perfectly correct); it's just too complex an idea when a more natural treatment exists.
– Alexei Averchenko, Feb 12 '11 at 4:25
edited May 10 '12 at 23:31 by joriki; answered Feb 10 '11 at 9:25 by Jesse Madnick
My favorite "counterexample" to the derivative acting like a ratio: the implicit differentiation formula for two variables. We have $$\frac{dy}{dx} = -\frac{\partial F/\partial x}{\partial F/\partial y}.$$
The formula is almost what you would expect, except for that pesky minus sign.
See http://en.wikipedia.org/wiki/Implicit_differentiation#Formula_for_two_variables for the rigorous derivation of this formula.
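The formula, minus sign and all, is easy to verify numerically; here is a minimal sketch (the circle $F(x,y) = x^2 + y^2 - 1$ is my own illustrative choice): on the upper branch $y = \sqrt{1-x^2}$, the slope $-(\partial F/\partial x)/(\partial F/\partial y) = -x/y$ matches a direct difference quotient.

```python
import math

F_x = lambda x, y: 2 * x   # partial F / partial x for F = x^2 + y^2 - 1
F_y = lambda x, y: 2 * y   # partial F / partial y

x0 = 0.6
y0 = math.sqrt(1 - x0 ** 2)                    # about 0.8

implicit = -F_x(x0, y0) / F_y(x0, y0)          # -x0/y0, about -0.75
h = 1e-6
explicit = (math.sqrt(1 - (x0 + h) ** 2) - math.sqrt(1 - (x0 - h) ** 2)) / (2 * h)
print(implicit, explicit)                      # both close to -0.75
```

Naively "cancelling" $\partial F$ from numerator and denominator would predict $+x/y$ here, which the numerical check rules out.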
Yes, but there is a fake proof of this that comes from that kind of reasoning. If $f(x,y)$ is a function of two variables, then $df=\frac{\partial f}{\partial x}dx+\frac{\partial f}{\partial y}$. Now if we pick a level curve $f(x,y)=0$, then $df=0$, so solving for $\frac{dy}{dx}$ gives us the expression above.
– Baby Dragon, Jun 16 '13 at 19:03
Pardon me, but how is this a "fake proof"?
– Lurco, Apr 28 '14 at 0:04
@Lurco: He meant $df = \frac{\partial f}{\partial x} dx + \frac{\partial f}{\partial y} dy$, where those 'infinitesimals' are not proper numbers, and hence it is wrong to simply substitute $df=0$, because in fact if we are consistent in our interpretation $df=0$ would imply $dx=dy=0$, and hence we can't get $\frac{dy}{dx}$ anyway. But if we are inconsistent, we can ignore that and proceed to get the desired expression. Correct answer but fake proof.
– user21820, May 13 '14 at 9:58
I agree that this is a good example to show why such notation is not so simple as one might think, but in this case I could say that $dx$ is not the same as $\partial x$. Do you have an example where the terms really cancel to give the wrong answer?
– user21820, May 13 '14 at 10:07
@JohnRobertson of course it is. That's the whole point here: this sort of "fake proof" thinking leads to wrong results here. I blatantly ignore the fact that $d \neq \partial$ (essentially), and I also completely ignore what $F$ really is. My only point is that if you try to use this as a mnemonic (or worse, a "proof" method), you will get completely wrong results.
– asmeurer, May 19 '15 at 18:58
My favorite "counterexample" to the derivative acting like a ratio: the implicit differentiation formula for two variables. We have $$\frac{dy}{dx} = -\frac{\partial F/\partial x}{\partial F/\partial y}$$
The formula is almost what you would expect, except for that pesky minus sign.
See http://en.wikipedia.org/wiki/Implicit_differentiation#Formula_for_two_variables for a rigorous derivation of this formula.
Yes, but there is a fake proof of this that comes from that kind of reasoning. If $f(x,y)$ is a function of two variables, then $df=\frac{\partial f}{\partial x}dx+\frac{\partial f}{\partial y}dy$. Now if we pick a level curve $f(x,y)=0$, then $df=0$, so solving for $\frac{dy}{dx}$ gives us the expression above.
– Baby Dragon, Jun 16 '13 at 19:03
Pardon me, but how is this a "fake proof"?
– Lurco, Apr 28 '14 at 0:04
@Lurco: He meant $df = \frac{\partial f}{\partial x} dx + \frac{\partial f}{\partial y} dy$, where those 'infinitesimals' are not proper numbers, so it is wrong to simply substitute $df=0$: if we were consistent in our interpretation, $df=0$ would imply $dx=dy=0$, and then we could not form $\frac{dy}{dx}$ at all. But if we are inconsistent, we can ignore that and proceed to get the desired expression. Correct answer, but a fake proof.
– user21820, May 13 '14 at 9:58
I agree that this is a good example of why such notation is not as simple as one might think, but in this case I could say that $dx$ is not the same as $\partial x$. Do you have an example where the terms really cancel to give the wrong answer?
– user21820, May 13 '14 at 10:07
@JohnRobertson of course it is. That's the whole point here: this sort of "fake proof" thinking leads to wrong results. I blatantly ignore the fact that $d \neq \partial$ (essentially), and I also completely ignore what $F$ really is. My only point is that if you try to use this as a mnemonic (or worse, a "proof" method), you will get completely wrong results.
– asmeurer, May 19 '15 at 18:58
answered Nov 14 '12 at 6:42 – asmeurer
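The minus sign in the formula is easy to check numerically. The following sketch is not from the thread; the level curve $F(x,y)=x^2+y^2-1$ (the unit circle) is my own choice of example. It compares $-F_x/F_y$, estimated with central differences, against the slope of the explicit branch $y=\sqrt{1-x^2}$:

```python
import math

def F(x, y):
    # Level curve F(x, y) = 0: the unit circle x^2 + y^2 = 1
    return x**2 + y**2 - 1.0

def partial_x(f, x, y, h=1e-6):
    # Central-difference approximation of dF/dx
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def partial_y(f, x, y, h=1e-6):
    # Central-difference approximation of dF/dy
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

x0 = 0.6
y0 = math.sqrt(1 - x0**2)  # the point (0.6, 0.8) lies on the curve

# Implicit differentiation formula from the answer (note the minus sign)
slope_implicit = -partial_x(F, x0, y0) / partial_y(F, x0, y0)

# Slope of the explicit branch y(x) = sqrt(1 - x^2), differentiated directly
h = 1e-6
slope_direct = (math.sqrt(1 - (x0 + h)**2) - math.sqrt(1 - (x0 - h)**2)) / (2 * h)

print(slope_implicit, slope_direct)  # both approximately -0.75 = -x0/y0
```

Dropping the minus sign, as a naive "cancellation" of the partials would suggest, gives $+0.75$ here: the wrong sign for a curve that is clearly falling at that point.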
It is best to think of $\frac{d}{dx}$ as an operator which takes the derivative, with respect to $x$, of whatever expression follows.
This is an opinion offered without any justification.
– Ben Crowell, Apr 30 '14 at 5:20
What kind of justification do you want? It is a very good argument that $\frac{dy}{dx}$ is not a fraction! It tells us that $\frac{dy}{dx}$ should be read as $\frac{d}{dx}(y)$, where $\frac{d}{dx}$ is an operator.
– Emin, May 2 '14 at 20:43
Over the hyperreals, $\frac{dy}{dx}$ is a ratio and one can view $\frac{d}{dx}$ as an operator. Therefore Tobin's reply is not a good argument for "telling that dy/dx is not a fraction".
– Mikhail Katz, Dec 15 '14 at 16:11
This doesn't address the question. $\frac{y}{x}$ is clearly a ratio, but it can also be thought of as the operator $\frac{1}{x}$ acting on $y$ by multiplication, so "operator" and "ratio" are not exclusive.
– mlainz, Jan 29 at 10:52
Thanks for the comments - I think all of these criticisms of my answer are valid.
– Tobin Fricke, Jan 30 at 19:38
edited Dec 15 '14 at 15:53 by dustin; answered Feb 9 '11 at 23:42 – Tobin Fricke
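One way to make the "operator" reading concrete (my own illustration, not part of the answer) is to model $\frac{d}{dx}$ as a higher-order function: it consumes a function and returns another function, rather than dividing one number by another. A sketch using a central-difference stand-in for the derivative:

```python
def d_dx(f, h=1e-6):
    """Model d/dx as an operator: takes a function f and returns
    (a numerical approximation of) its derivative, again as a function."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

square = lambda x: x**2
d_square = d_dx(square)   # a new function, approximately x -> 2x

print(d_square(3.0))      # approximately 6.0
```

Nothing here divides a quantity "dy" by a quantity "dx"; on this reading, the slash in the notation is simply part of the operator's name.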
In Leibniz's mathematics, if $y=x^2$ then $\frac{dy}{dx}$ would be "equal" to $2x$, but the meaning of "equality" to Leibniz was not the same as it is to us. He emphasized repeatedly (for example in his 1695 response to Nieuwentijt) that he was working with a generalized notion of equality "up to" a negligible term. Also, Leibniz used several different pieces of notation for "equality". One of them was the symbol "$\,{}_{\ulcorner\!\urcorner}\,$". To emphasize the point, one could write $$y=x^2 \quad \rightarrow \quad \frac{dy}{dx}\,{}_{\ulcorner\!\urcorner}\,2x$$ where $\frac{dy}{dx}$ is literally a ratio. When one expresses Leibniz's insight in this fashion, one is less tempted to commit the ahistorical error of accusing him of a logical inaccuracy.
In more detail, $\frac{dy}{dx}$ is a true ratio in the following sense. We choose an infinitesimal $\Delta x$ and consider the corresponding $y$-increment $\Delta y = f(x+\Delta x)-f(x)$. The ratio $\frac{\Delta y}{\Delta x}$ is then infinitely close to the derivative $f'(x)$. We then set $dx=\Delta x$ and $dy=f'(x)dx$, so that $f'(x)=\frac{dy}{dx}$ by definition. One of the advantages of this approach is that one obtains an elegant proof of the chain rule $\frac{dy}{dx}=\frac{dy}{du}\frac{du}{dx}$ by applying the standard part function to the equality $\frac{\Delta y}{\Delta x}=\frac{\Delta y}{\Delta u}\frac{\Delta u}{\Delta x}$.
In the real-based approach to the calculus there are no infinitesimals, and therefore it is impossible to interpret $\frac{dy}{dx}$ as a true ratio. Claims to that effect therefore have to be relativized modulo anti-infinitesimal foundational commitments.
Note 1. I recently noticed that Leibniz's $\,{}_{\ulcorner\!\urcorner}\,$ notation occurs several times in Margaret Baron's book The Origins of Infinitesimal Calculus, starting on page 282. It's well worth a look.
Note 2. It should be clear that Leibniz did view $\frac{dy}{dx}$ as a ratio. (Some of the other answers seem to be worded ambiguously with regard to this point.)
This is somewhat beside the point, but I don't think that applying the standard part function to prove the chain rule is particularly more (or less) elegant than applying the limit as $\Delta{x} \to 0$. Both attempts hit a snag since $\Delta{u}$ might be $0$ when $\Delta{x}$ is not (regardless of whether one is thinking of $\Delta{x}$ as an infinitesimal quantity or as a standard variable approaching $0$), as for example when $u = x \sin(1/x)$.
– Toby Bartels, Feb 21 '18 at 23:26
This snag does exist in the epsilon-delta setting, but it does not exist in the infinitesimal setting, because if the derivative is nonzero then one necessarily has $\Delta u \neq 0$, and if the derivative is zero then there is nothing to prove. @TobyBartels
– Mikhail Katz, Feb 22 '18 at 9:47
Notice that the function you mentioned is undefined (or not differentiable if you define it) at zero, so the chain rule does not apply in this case anyway. @TobyBartels
– Mikhail Katz, Feb 22 '18 at 10:15
Sorry, that should be $u = x^2 \sin(1/x)$ (extended by continuity to $x = 0$, which is the argument at issue). If the infinitesimal $\Delta{x}$ is $1/(n\pi)$ for some (necessarily infinite) hyperinteger $n$, then $\Delta{u}$ is $0$. It's true that in this case the derivative $\mathrm{d}u/\mathrm{d}x$ is $0$ too, but I don't see why that matters; why is there nothing to prove in that case? (Conversely, if there's nothing to prove in that case, then doesn't that save the epsilontic proof as well? That's the only way that $\Delta{u}$ can be $0$ arbitrarily close to the argument.)
– Toby Bartels, Feb 23 '18 at 12:21
If $\Delta u$ is zero then obviously $\Delta y$ is also zero, and therefore both sides of the chain rule formula are zero. On the other hand, if the derivative of $u=g(x)$ is nonzero then $\Delta u$ is necessarily nonzero. This is not necessarily the case when one works with finite differences. @TobyBartels
– Mikhail Katz, Feb 24 '18 at 19:28
edited May 29 '16 at 6:53; answered Aug 12 '13 at 19:31 – Mikhail Katz
$begingroup$
This is somewhat beside the point, but I don't think that applying the standard part function to prove the Chain Rule is particularly more (or less) elegant than applying the limit as $Delta{x} to 0$. Both attempts hit a snag since $Delta{u}$ might be $0$ when $Delta{x}$ is not (regardless of whether one is thinking of $Delta{x}$ as an infinitesimal quantity or as a standard variable approaching $0$), as for example when $u = x sin(1/x)$.
$endgroup$
– Toby Bartels
Feb 21 '18 at 23:26
$begingroup$
This snag does exist in the epsilon-delta setting, but it does not exist in the infinitesimal setting because if the derivative is nonzero then one necessarily has $Delta unot=0$, and if the derivative is zero then there is nothing to prove. @TobyBartels
$endgroup$
– Mikhail Katz
Feb 22 '18 at 9:47
$begingroup$
Notice that the function you mentioned is undefined (or not differentiable if you define it) at zero, so chain rule does not apply in this case anyway. @TobyBartels
$endgroup$
– Mikhail Katz
Feb 22 '18 at 10:15
$begingroup$
Sorry, that should be $u = x^2 sin(1/x)$ (extended by continuity to $x = 0$, which is the argument at issue). If the infinitesimal $Delta{x}$ is $1/(npi)$ for some (necessarily infinite) hyperinteger $n$, then $Delta{u}$ is $0$. It's true that in this case, the derivative $mathrm{d}u/mathrm{d}x$ is $0$ too, but I don't see why that matters; why is there nothing to prove in that case? (Conversely, if there's nothing to prove in that case, then doesn't that save the epsilontic proof as well? That's the only way that $Delta{u}$ can be $0$ arbitrarily close to the argument.)
$endgroup$
– Toby Bartels
Feb 23 '18 at 12:21
$begingroup$
If $Delta u$ is zero then obviously $Delta y$ is also zero and therefore both sides of the formula for chain rule are zero. On the other hand, if the derivative of $u=g(x)$ is nonzero then $Delta u$ is necessarily nonzero. This is not necessarily the case when one works with finite differences. @TobyBartels
$endgroup$
– Mikhail Katz
Feb 24 '18 at 19:28
|
show 3 more comments
$begingroup$
Typically, the $\frac{dy}{dx}$ notation is used to denote the derivative, which is defined as the limit we all know and love (see Arturo Magidin's answer). However, when working with differentials, one can interpret $\frac{dy}{dx}$ as a genuine ratio of two fixed quantities.
Draw a graph of some smooth function $f$ and its tangent line at $x=a$. Starting from the point $(a, f(a))$, move $dx$ units right along the tangent line (not along the graph of $f$). Let $dy$ be the corresponding change in $y$.
So, we moved $dx$ units right, $dy$ units up, and stayed on the tangent line. Therefore the slope of the tangent line is exactly $\frac{dy}{dx}$. However, the slope of the tangent at $x=a$ is also given by $f'(a)$, hence the equation
$$\frac{dy}{dx} = f'(a)$$
holds when $dy$ and $dx$ are interpreted as fixed, finite changes in the two variables $x$ and $y$. In this context, we are not taking a limit on the left hand side of this equation, and $\frac{dy}{dx}$ is a genuine ratio of two fixed quantities. This is why we can then write $dy = f'(a) dx$.
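As a numerical illustration (a sketch only; the function $f = \sin$ and the step $dx = 0.125$ are arbitrary choices, not part of the answer): along the tangent line the ratio $dy/dx$ recovers $f'(a)$ exactly, while the change along the curve itself only approximates it.

```python
import math

f, fprime = math.sin, math.cos  # an arbitrary smooth function and its derivative
a = 1.0
dx = 0.125  # a fixed, finite change in x -- no limit is being taken

# dy is the change along the *tangent line* at x = a, not along the curve:
dy = fprime(a) * dx

print(dy / dx == fprime(a))  # True: the ratio is exactly the slope f'(a)

# By contrast, the change along the curve only approximates the slope:
delta_y = f(a + dx) - f(a)
print(abs(delta_y / dx - fprime(a)) < 0.06)  # True: close, but only approximate
```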
$endgroup$
7
$begingroup$
This sounds a lot like the explanation of differentials that I recall hearing from my Calculus I instructor (an analyst of note, an expert on Wiener integrals): "$dy$ and $dx$ are any two numbers whose ratio is the derivative . . . they are useful for people who are interested in (sniff) approximations."
$endgroup$
– bof
Dec 28 '13 at 3:34
1
$begingroup$
@bof: But we can't describe almost every real number in the real world, so I guess having approximations is quite good. =)
$endgroup$
– user21820
May 13 '14 at 10:13
1
$begingroup$
@user21820 anything that we can approximate to arbitrary precision we can define... It's the result of that algorithm.
$endgroup$
– k_g
May 18 '15 at 1:34
3
$begingroup$
@k_g: Yes of course. My comment was last year so I don't remember what I meant at that time anymore, but I probably was trying to say that since we already are limited to countably many definable reals, it's much worse if we limit ourselves even further to closed forms of some kind and eschew approximations. Even more so, in the real world we rarely have exact values but just confidence intervals anyway, and so approximations are sufficient for almost all practical purposes.
$endgroup$
– user21820
May 18 '15 at 5:04
add a comment |
edited Mar 14 '13 at 3:35 isomorphismes
answered Nov 5 '11 at 16:31 Brendan Cordy
$begingroup$
Of course it is a ratio.
$dy$ and $dx$ are differentials. Thus they act on tangent vectors, not on points. That is, they are functions on the tangent manifold that are linear on each fiber. On the tangent manifold the ratio of the two differentials $\frac{dy}{dx}$ is just a ratio of two functions and is constant on every fiber (except being ill defined on the zero section). Therefore it descends to a well defined function on the base manifold. We refer to that function as the derivative.
As pointed out in the original question, many first-year calculus books these days even try to define differentials loosely and at least informally point out that for differentials $dy = f'(x) dx$ (note that both sides of this equation act on vectors, not on points). Both $dy$ and $dx$ are perfectly well defined functions on vectors, and their ratio is therefore a perfectly meaningful function on vectors. Since it is constant on fibers (minus the zero section), that well defined ratio descends to a function on the original space.
At worst one could object that the ratio $\frac{dy}{dx}$ is not defined on the zero section.
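A concrete sketch of this picture for a single real variable (the names and the choice $y = \sin x$ are illustrative only, not a standard API): $dx$ and $dy$ are functions of a tangent vector $(p, v)$, and off the zero section their ratio depends only on the base point $p$.

```python
import math

fprime = math.cos  # derivative of y = sin(x), our illustrative function

# A tangent vector is a pair (base point p, velocity v); dx and dy are
# functions on tangent vectors, linear in v on each fiber:
def dx(p, v):
    return v

def dy(p, v):
    return fprime(p) * v  # how y changes along the vector (p, v)

# Off the zero section (v != 0), the ratio dy/dx is constant on each fiber,
# so it descends to a function of p alone -- the derivative:
p = 1.0
ratios = [dy(p, v) / dx(p, v) for v in (0.5, -2.0, 4.0)]
print(all(r == fprime(p) for r in ratios))  # True: independent of v
```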
$endgroup$
4
$begingroup$
Can anything meaningful be made of higher order derivatives in the same way?
$endgroup$
– Francis Davey
Mar 8 '15 at 10:41
2
$begingroup$
You can simply mimic the procedure to get a second or third derivative. As I recall when I worked that out the same higher partial derivatives get realized in it multiple ways which is awkward. The standard approach is more direct. It is called Jets and there is currently a Wikipedia article on Jet (mathematics).
$endgroup$
– John Robertson
May 2 '15 at 16:51
2
$begingroup$
Tangent manifold is the tangent bundle. And what it means is that dy and dx are both perfectly well defined functions on the tangent manifold, so we can divide one by the other giving dy/dx. It turns out that the value of dy/dx on a given tangent vector only depends on the base point of that vector. As its value only depends on the base point, we can take dy/dx as really defining a function on original space. By way of analogy, if f(u,v) = 3*u + sin(u) + 7 then even though f is a function of both u and v, since v doesn't affect the output, we can also consider f to be a function of u alone.
$endgroup$
– John Robertson
Jul 6 '15 at 15:53
3
$begingroup$
Your answer is in the opposition with many other answers here! :) I am confused! So is it a ratio or not or both!?
$endgroup$
– H. R.
Jul 15 '16 at 20:24
3
$begingroup$
How do you simplify all this to the more special-case level of basic calculus where all spaces are Euclidean? The invocations of manifold theory suggest this is an approach that is designed for non-Euclidean geometries.
$endgroup$
– The_Sympathizer
Feb 3 '17 at 7:19
|
show 5 more comments
edited Jan 29 '17 at 3:18 Harsh Kumar
answered Apr 30 '14 at 5:16 John Robertson
$begingroup$
The notation $dy/dx$ - in elementary calculus - is simply that: notation to denote the derivative of, in this case, $y$ w.r.t. $x$. (In this case $f'(x)$ is another notation to express essentially the same thing, i.e. $df(x)/dx$, where $f$ is the function of the independent variable $x$ whose values make up the dependent variable $y$. According to what you've written above, $f(x)$ is the function which takes values in the target space $y$.)
Furthermore, by definition, $dy/dx$ at a specific point $x_0$ within the domain of $f$ is the real number $L = \lim_{h \to 0} \frac{f(x_0+h) - f(x_0)}{h}$, if it exists. Otherwise, if no such number exists, then the function $f$ does not have a derivative at the point in question (i.e., in our case, $x_0$).
For further information you can read the Wikipedia article: http://en.wikipedia.org/wiki/Derivative
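The limit definition can be probed numerically (a sketch; the choice $f(x) = x^2$ at $x_0 = 3$ is arbitrary and only for illustration): the difference quotients approach the number $L$ as the step shrinks, and no single quotient is the derivative, only their limit.

```python
def f(x):
    return x * x  # illustrative function; f'(x) = 2x, so L = 6 at x0 = 3

x0 = 3.0
for h in (1.0, 0.1, 0.01, 0.001):
    q = (f(x0 + h) - f(x0)) / h  # difference quotient; equals 6 + h here
    print(h, q)

# The quotients approach L = 6 as h -> 0; the derivative is the limit,
# not any one of the quotients.
```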
$endgroup$
32
$begingroup$
So glad that wikipedia finally added an entry for the derivative... $$$$
$endgroup$
– The Chaz 2.0
Aug 10 '11 at 17:50
2
$begingroup$
@Steve: I wish there were a way to collect all the comments that I make (spread across multiple forums, social media outlets, etc) and let you upvote them for humor. Most of my audience scoffs at my simplicity.
$endgroup$
– The Chaz 2.0
Jan 27 '12 at 21:39
answered Feb 9 '11 at 17:00 by Anonymous
$begingroup$
It is not a ratio, just as $dx$ is not a product.
$endgroup$
$begingroup$
I wonder what motivated the downvote. I do find strange that students tend to confuse Leibniz's notation with a quotient, and not $dx$ (or even $\log$!) with a product: they are both indivisible notations... My answer above just makes this point.
$endgroup$
– Mariano Suárez-Álvarez
Feb 10 '11 at 0:12
$begingroup$
I think that the reason why this confusion arises in some students may be related to the way in which this notation is used, for instance, when calculating integrals. Even though as you say, they are indivisible, they are separated "formally" in any calculus course in order to aid in the computation of integrals. I suppose that if the letters in $\log$ were separated in a similar way, the students would probably make the same mistake of assuming it is a product.
$endgroup$
– Adrián Barquero
Feb 10 '11 at 4:09
$begingroup$
I once heard a story of a university applicant who, asked at interview to find $dy/dx$, didn't understand the question, no matter how the interviewer phrased it. It was only after the interviewer wrote it out that the student promptly informed him that the two $d$'s cancelled and he was in fact mistaken.
$endgroup$
– jClark94
Jan 30 '12 at 19:54
$begingroup$
Is this an answer??? Or just an imposition?
$endgroup$
– André Caldas
Sep 12 '13 at 13:12
$begingroup$
I find the statement that "students tend to confuse Leibniz's notation with a quotient" a bit problematic. The reason for this is that Leibniz certainly thought of $\frac{dy}{dx}$ as a quotient. Since it behaves as a ratio in many contexts (such as the chain rule), it may be more helpful to the student to point out that in fact the derivative can be said to be "equal" to the ratio $\frac{dy}{dx}$ if "equality" is interpreted as a more general relation of equality "up to an infinitesimal term", which is how Leibniz thought of it. I don't think this is comparable to thinking of "dx" as a product.
$endgroup$
– Mikhail Katz
Oct 2 '13 at 12:57
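The ratio-like behavior of the notation in the chain rule, mentioned in the comment above, can be observed numerically: the product of the two difference quotients matches the direct quotient, as if a common factor cancelled. A minimal sketch with finite differences (the composition $z=\sin(y)$, $y=x^2$ at $x_0=1$ is an illustrative choice):

```python
# Chain rule as apparent "cancellation" of ratios:
# dz/dx = (dz/dy) * (dy/dx), checked with finite differences.
import math

h = 1e-6
x0 = 1.0

y = lambda x: x ** 2        # inner function
z = lambda t: math.sin(t)   # outer function

dy_dx = (y(x0 + h) - y(x0)) / h
dz_dy = (z(y(x0) + h) - z(y(x0))) / h
dz_dx = (z(y(x0 + h)) - z(y(x0))) / h

# The product of the two quotients agrees with the direct quotient
# up to terms that vanish as h -> 0.
print(dz_dy * dy_dx, dz_dx)
```

This is only a numerical illustration of the "up to an infinitesimal term" idea: for finite $h$ the two sides differ by a small error that disappears in the limit.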
answered Feb 9 '11 at 17:06 by Mariano Suárez-Álvarez
$begingroup$
$\boldsymbol{\dfrac{dy}{dx}}$ is definitely not a ratio - it is the limit (if it exists) of a ratio. This is Leibniz's notation for the derivative (c. 1670), which prevailed over Newton's $\dot{y}(x)$.
Still, most engineers and even many applied mathematicians treat it as a ratio. A very common such case is solving separable ODEs, i.e. equations of the form
$$
\frac{dy}{dx}=f(x)\,g(y),
$$
writing the above as
$$
f(x)\,dx=\frac{dy}{g(y)},
$$
and then integrating.
Strictly speaking, this is not mathematics but a symbolic calculus. Why are we allowed to integrate the left-hand side with respect to $x$ and the right-hand side with respect to $y$? What is the meaning of that?
This procedure often leads to the right solution, but not always. For example, applying this method to the IVP
$$
\frac{dy}{dx}=y+1, \quad y(0)=-1,\qquad (\star)
$$
we get, for some constant $c$,
$$
\ln (y+1)=\int\frac{dy}{y+1} = \int dx = x+c,
$$
equivalently
$$
y(x)=\mathrm{e}^{x+c}-1.
$$
Note that it is impossible to incorporate the initial condition $y(0)=-1$, as $\mathrm{e}^{x+c}$ never vanishes. In fact, the solution of $(\star)$ is $y(x)\equiv -1$.
Even worse, consider the case
$$
y'=\frac{3y^{1/3}}{2}, \quad y(0)=0,
$$
where this symbolic calculus leads to $y^{2/3}=x$, i.e. $y=x^{3/2}$, and misses the solution $y(x)\equiv 0$.
In my opinion, calculus should be taught rigorously, with $\delta$'s and $\varepsilon$'s. Once these are well understood, one can use such symbolic calculus, provided one knows under which restrictions it is indeed permitted.
$endgroup$
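The constant solution of the IVP $(\star)$ above can be confirmed numerically; a minimal sketch using forward Euler (the step size and interval are arbitrary illustrative choices):

```python
# Forward Euler for dy/dx = y + 1 with y(0) = -1.
# The right-hand side vanishes at y = -1, so the iteration never
# leaves the constant solution y(x) = -1 -- precisely the solution
# that formal separation of variables fails to produce.

def rhs(y):
    return y + 1

y, h = -1.0, 0.01
for _ in range(1000):  # march from x = 0 to x = 10
    y += h * rhs(y)

print(y)  # -1.0
```

The failure of the separation method here is the division by $g(y)=y+1$, which is zero along this very solution.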
$begingroup$
I would disagree with this to some extent with your example, as many would write the solution as $y(x)=e^{x+c}-1\rightarrow y(x)=e^Ce^x-1\rightarrow y(x)=Ce^x-1$ for the 'appropriate' $C$. Then we have $y(0)=Ce^0-1=-1$, implying $C=0$, avoiding the issue, which is how many introductory D.E. students would answer the question, so the issue is never noticed. But yes, $\frac{dy}{dx}$ is certainly not a ratio.
$endgroup$
– mathematics2x2life
Dec 20 '13 at 21:11
$begingroup$
Your example works if $dy/dx$ is handled naively as a quotient. Given $dy/dx = y+1$, we can deduce $dx = dy/(y+1)$, but as even undergraduates know, you can't divide by zero, so this is true only as long as $y+1 \ne 0$. Thus we correctly conclude that $(\star)$ has no solution such that $y+1 \ne 0$. Solving for $y+1=0$, we have $dy/dx = 0$, so $y = \int 0 \, dx = 0 + C$, and $y(0)=-1$ constrains $C=-1$.
$endgroup$
– Gilles
Aug 28 '14 at 14:47
$begingroup$
Since you mention Leibniz, it may be helpful to clarify that Leibniz did view $\frac{dy}{dx}$ as a ratio, for the sake of historical accuracy.
$endgroup$
– Mikhail Katz
Dec 7 '15 at 18:59
$begingroup$
+1 for the interesting IVP example, I have never noticed that subtlety.
$endgroup$
– electronpusher
Apr 22 '17 at 3:04
$begingroup$
You got the wrong answer because you divided by zero, not because there's anything wrong with treating the derivative as a ratio.
$endgroup$
– Toby Bartels
Apr 29 '17 at 4:20
edited Feb 20 '18 at 9:14, answered Dec 20 '13 at 10:56 by Yiorgos S. Smyrlis
$begingroup$
$\frac{dy}{dx}$ is not a ratio - it is a symbol used to represent a limit.
$endgroup$
This is one possible view of $\frac{dy}{dx}$, related to the fact that the common number system does not contain infinitesimals, making it impossible to justify this symbol as a ratio in that particular framework. However, Leibniz certainly meant it to be a ratio. Furthermore, it can be justified as a ratio in modern infinitesimal theories, as mentioned in some of the other answers.
– Mikhail Katz
Nov 17 '13 at 14:57
answered Nov 5 '11 at 3:15
GdS
In most formulations, $\frac{dy}{dx}$ cannot be interpreted as a ratio, as $dx$ and $dy$ do not actually exist in them. An exception to this is shown in this book. How it works, as Arturo said, is that we allow infinitesimals (by using the hyperreal number system). It is well formulated, and I prefer it to limit notions, as this is how calculus was invented; it's just that infinitesimals couldn't be formulated rigorously back then. I will give a slightly simplified example. Let us say you are differentiating $y=x^2$. Now let $dx$ be an arbitrary infinitesimal (the result is the same no matter which you choose, if your function is differentiable at that point):
$$dy=(x+dx)^2-x^2$$
$$dy=2x\,dx+dx^2$$
Now when we take the ratio, it is:
$$\frac{dy}{dx}=2x+dx$$
(Note: actually, $\frac{\Delta y}{\Delta x}$ is what we found above, and $dy$ is defined so that $\frac{dy}{dx}$ is $\frac{\Delta y}{\Delta x}$ rounded to the nearest real number.)
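As a rough illustration (my own sketch, using an ordinary small real number as a stand-in for the infinitesimal $dx$; the name `derivative_ratio` is hypothetical), the literal ratio for $y = x^2$ comes out as $2x + dx$; discarding the $dx$ term (the "rounding to the nearest real number" mentioned in the note) leaves the derivative $2x$:

```python
def derivative_ratio(f, x, dx):
    """Form the literal ratio dy/dx for a small (stand-in infinitesimal) dx."""
    return (f(x + dx) - f(x)) / dx

f = lambda t: t * t

x, dx = 3.0, 1e-6
ratio = derivative_ratio(f, x, dx)   # algebraically 2*x + dx = 6.000001
print(ratio)
print(round(ratio))                  # the "standard part": 6
```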
So, your example is still incomplete. To complete it, you should either take the limit $dx\to0$, or take the standard part of the RHS if you treat $dx$ as an infinitesimal instead of as $\varepsilon$.
– Ruslan
May 8 '18 at 19:49
answered Sep 19 '13 at 23:47 by PyRulez (edited Jan 29 '17 at 3:02 by Frank)
It may be of interest to record Russell's views of the matter:
Leibniz's belief that the Calculus had philosophical importance is now known to be erroneous: there are no infinitesimals in it, and $dx$ and $dy$ are not numerator and denominator of a fraction. (Bertrand Russell, "Recent Work on the Philosophy of Leibniz", Mind, 1903.)
So yet another error made by Russell, is what you're saying?
– Toby Bartels
Apr 29 '17 at 4:24
Yes, an error indeed, and one that he elaborated on in embarrassing detail in his Principles of Mathematics. @TobyBartels
– Mikhail Katz
Apr 30 '17 at 12:20
answered May 1 '14 at 17:10 by Mikhail Katz (edited Nov 26 '17 at 9:45)
I realize this is an old post, but I think it's worthwhile to point out that in the so-called Quantum Calculus, $\frac{dy}{dx}$ $is$ a ratio. The subject $starts$ off immediately by saying this is a ratio, by defining differentials and then calling derivatives a ratio of differentials:
The $q$-differential is defined as
$$d_q f(x) = f(qx) - f(x)$$
and the $h$-differential as
$$d_h f(x) = f(x+h) - f(x)$$
It follows that $d_q x = (q-1)x$ and $d_h x = h$.
From here, we go on to define the $q$-derivative and $h$-derivative, respectively:
$$D_q f(x) = \frac{d_q f(x)}{d_q x} = \frac{f(qx) - f(x)}{(q-1)x}$$
$$D_h f(x) = \frac{d_h f(x)}{d_h x} = \frac{f(x+h) - f(x)}{h}$$
Notice that
$$\lim_{q \to 1} D_q f(x) = \lim_{h\to 0} D_h f(x) = \frac{df(x)}{dx} \neq \text{a ratio}$$
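A small numeric sketch of these definitions (mine, not from the answer; the names `Dq` and `Dh` are hypothetical): for fixed $q$ and $h$, both derivatives are honest ratios, and both approach the ordinary derivative as $q \to 1$, $h \to 0$. For $f(x)=x^3$, $D_q f(x) = (q^2+q+1)x^2$ exactly.

```python
def Dq(f, x, q):
    """q-derivative: d_q f(x) / d_q x, a genuine ratio with no limit taken."""
    return (f(q * x) - f(x)) / ((q - 1) * x)

def Dh(f, x, h):
    """h-derivative: d_h f(x) / d_h x, the familiar difference quotient."""
    return (f(x + h) - f(x)) / h

cube = lambda t: t**3

print(Dq(cube, 2.0, 1.5))     # (1.5^2 + 1.5 + 1) * 2^2 = 19.0
print(Dq(cube, 2.0, 1.0001))  # -> 3x^2 = 12 as q -> 1
print(Dh(cube, 2.0, 1e-6))    # -> 12 as h -> 0
```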
I just want to point out that @Yiorgos S. Smyrlis did already state that $dy/dx$ is not a ratio but a limit of a ratio (if it exists). I only included my response because this subject seems interesting (I don't think many have heard of it), and in this subject we work within the confines of it being a ratio... but certainly the limit is not really a ratio.
– Squirtle
Dec 28 '13 at 1:54
You start out saying that it is a ratio and then end up saying that it is not a ratio. It's interesting that you can define it as a limit of ratios in two different ways, but you've still only given it as a limit of ratios, not as a ratio directly.
– Toby Bartels
Apr 29 '17 at 4:40
I guess you mean to say that the q-derivative and h-derivative are ratios; that the usual derivative may be recovered as limits of these is secondary to your point.
– Toby Bartels
May 2 '17 at 21:55
Yes, that is precisely my point.
– Squirtle
Feb 23 '18 at 4:04
answered Dec 28 '13 at 1:51 by Squirtle (edited Jan 16 '15 at 6:43)
Anything that can be said in mathematics can be said in at least 3 different ways... all things about derivation/derivatives depend on the meaning that is attached to the word: TANGENT.
It is agreed that the derivative is the "gradient function" for tangents (at a point); and spatially (geometrically) the gradient of a tangent is the "ratio" ("fraction" would be better) of the y-distance to the x-distance.
Similar obscurities occur when "spatial" and "algebraic" are notationally confused... some people take the word "vector" to mean a track!
According to John Robinson (2 days ago), vectors... elements (points) of vector spaces are different from points.
– kozenko
May 3 '14 at 3:43
answered May 3 '14 at 3:20 by kozenko (edited Aug 29 '16 at 16:32 by Chill2Macht)
Assuming you're happy with $dy/dx$: when it becomes $\ldots dy$ and $\ldots dx$, it means that what precedes $dy$, in terms of $y$, is equal to what precedes $dx$, in terms of $x$.
"In terms of" = "with reference to".
That is, if $a \frac{dy}{dx} = b$, then it follows that "$a$ with reference to $y$" $=$ "$b$ with reference to $x$". If the equation has all the terms with $y$ on the left and all with $x$ on the right, then you've got to a good place to continue.
The phrase "it follows that" means you haven't really moved $dx$ as in algebra. It now has a different meaning which is also true.
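For instance (a sketch of my own, not from the answer): reading $y\,\frac{dy}{dx} = x$ as "$y$ with reference to $y$ equals $x$ with reference to $x$" gives $\frac{y^2}{2} = \frac{x^2}{2} + C$. A numeric integration agrees with the implicit solution through $(0, 1)$:

```python
import math

def rhs(x, y):
    # dy/dx = x / y, i.e. y dy = x dx after "separating"
    return x / y

# integrate from (x0, y0) = (0, 1) with small Euler steps
x, y, h = 0.0, 1.0, 1e-4
while x < 1.0:
    y += h * rhs(x, y)
    x += h

# implicit solution: y^2/2 = x^2/2 + 1/2, i.e. y = sqrt(x^2 + 1)
print(y, math.sqrt(x * x + 1.0))  # nearly equal
```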
answered Jul 16 '13 at 13:55 by jacques sassoon (edited Jul 16 '13 at 14:14 by user1729)
To ask "Is $\frac{dy}{dx}$ a ratio or isn't it?" is like asking "Is $\sqrt 2$ a number or isn't it?" The answer depends on what you mean by "number". $\sqrt 2$ is not an integer or a rational number, so if that's what you mean by "number", then the answer is "No, $\sqrt 2$ is not a number."
However, the real numbers are an extension of the rational numbers that includes irrational numbers such as $\sqrt 2$, and so, in this set of numbers, $\sqrt 2$ is a number.
In the same way, a differential such as $dx$ is not a real number, but it is possible to extend the real numbers to include infinitesimals, and, if you do that, then $\frac{dy}{dx}$ is truly a ratio.
When a professor tells you that $dx$ by itself is meaningless, or that $\frac{dy}{dx}$ is not a ratio, they are correct in terms of "normal" number systems such as the real or complex numbers, which are the number systems typically used in science, engineering and even mathematics. Infinitesimals can be placed on a rigorous footing, but sometimes at the cost of surrendering some important properties of the numbers we rely on for everyday science.
See https://en.wikipedia.org/wiki/Infinitesimal#Number_systems_that_include_infinitesimals for a discussion of number systems that include infinitesimals.
add a comment |
$begingroup$
To ask "Is $frac{dy}{dx}$ a ratio or isn't it?" is like asking "Is $sqrt 2$ a number or isn't it?" The answer depends on what you mean by "number". $sqrt 2$ is not an Integer or a Rational number, so if that's what you mean by "number", then the answer is "No, $sqrt 2$ is not a number."
However, the Real numbers are an extension of the Rational numbers that includes irrational numbers such as $sqrt 2$, and so, in this set of numbers, $sqrt 2$ is a number.
In the same way, a differential such as $dx$ is not a Real number, but it is possible to extend the Real numbers to include infinitesimals, and, if you do that, then $frac{dy}{dx}$ is truly a ratio.
When a professor tells you that $dx$ by itself is meaningless, or that $frac{dy}{dx}$ is not a ratio, they are correct, in terms of "normal" number systems such as the Real or Complex systems, which are the number systems typically used in science, engineering and even mathematics. Infinitesimals can be placed on a rigorous footing, but sometimes at the cost of surrendering some important properties of the numbers we rely on for everyday science.
See https://en.wikipedia.org/wiki/Infinitesimal#Number_systems_that_include_infinitesimals for a discussion of number systems that include infinitesimals.
$endgroup$
answered Nov 2 '16 at 18:56
Hawthorne
$begingroup$
I am going to join @Jesse Madnick here and try to interpret $\frac{dy}{dx}$ as a ratio. The idea is: let's interpret $dx$ and $dy$ as functions on $T\mathbb{R}^2$, as if they were differential forms. For each tangent vector $v$, set $dx(v) := v(x)$. If we identify $T\mathbb{R}^2$ with $\mathbb{R}^4$, then $(x, y, dx, dy)$ is just the canonical coordinate system for $\mathbb{R}^4$. If we exclude the points where $dx = 0$, then $\frac{dy}{dx} = 2x$ is a perfectly healthy equation, and its solutions form a subset of $\mathbb{R}^4$.
Let's see if it makes any sense. If we fix $x$ and $y$, the solutions form a straight line through the origin of the tangent space at $(x, y)$, with slope $2x$. So the set of all solutions is a distribution, and its integral manifolds happen to be the parabolas $y = x^2 + c$: exactly the solutions of the differential equation that we would write as $\frac{dy}{dx} = 2x$. Of course, we can write it as $dy = 2x\,dx$ as well. I think this is at least a little bit interesting. Any thoughts?
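As a numeric sanity check of this picture (the helper name `on_solution_set` is hypothetical, not from any library): points $(x, y, dx, dy)$ lie in the solution set when $dy = 2x\,dx$ with $dx \neq 0$, and scaled tangent vectors of the parabolas $y = x^2 + c$ all satisfy it.

```python
# Check that tangent vectors of y = x^2 + c solve dy = 2x * dx,
# viewing (x, y, dx, dy) as coordinates on R^4.
def on_solution_set(x, y, dx, dy, tol=1e-9):
    """Does (x, y, dx, dy) satisfy dy = 2x * dx, with dx nonzero?"""
    return dx != 0 and abs(dy - 2 * x * dx) < tol

# Tangent direction to y = x^2 + c at x is (1, 2x), up to any nonzero scale t.
for c in (-1.0, 0.0, 2.5):
    for x in (-2.0, 0.5, 3.0):
        y = x * x + c
        for t in (0.1, 1.0, -4.0):   # scaling dx doesn't matter: it's a ratio
            assert on_solution_set(x, y, t * 1.0, t * 2 * x)
print("tangent vectors of y = x^2 + c all satisfy dy = 2x dx")
```

Note that only the direction of the tangent vector matters, which is exactly why the equation behaves like a statement about a ratio.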
$endgroup$
answered Jan 28 '17 at 19:08
Dávid Kertész
$begingroup$
There are many answers here, but the simplest seems to be missing. So here it is:
Yes, it is a ratio, for exactly the reason that you said in your question.
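A minimal sketch of the question's own (finite, non-infinitesimal) definition of the differential, with illustrative names `fprime` and `dy`: since $dy = f'(x)\,dx$ with $dx$ an ordinary real number, the quotient $dy/dx$ recovers $f'(x)$ exactly, for any nonzero $dx$, however large.

```python
# The differential from the question: dy = f'(x) * dx, with dx a plain
# finite real number. No infinitesimals are needed for dy/dx to be a ratio.
def fprime(x):
    return 2 * x           # derivative of f(x) = x**2

def dy(x, dx):
    return fprime(x) * dx  # the differential, an ordinary real number

x = 3.0
for dx in (10.0, 0.5, -2.0):   # dx need not be small
    assert dy(x, dx) / dx == fprime(x)
print("dy/dx == f'(x) for every nonzero dx")
```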
$endgroup$
$begingroup$
Some other people have already more-or-less given this answer, but then go into more detail about how it fits into tangent spaces in higher dimensions and whatnot. That is all very interesting, of course, but it may give the impression that the development of the derivative as a ratio that appears in the original question is not enough by itself. But it is enough.
$endgroup$
– Toby Bartels
Apr 29 '17 at 4:50
$begingroup$
Nonstandard analysis, while providing an interesting perspective and being closer to what Leibniz himself was thinking, is also not necessary for this. The definition of differential that is cited in the question is not infinitesimal, but it still makes the derivative into a ratio of differentials.
$endgroup$
– Toby Bartels
Apr 29 '17 at 4:50
answered Apr 29 '17 at 4:33
Toby Bartels
$begingroup$
The derivative $\frac{dy}{dx}$ is not a ratio, but rather a representation of a ratio within a limit.
Similarly, $dx$ is a representation of $\Delta x$ inside a limit, together with its interactions. This interaction can be in the form of multiplication, division, etc. with other things inside the same limit.
This interaction inside the limit is what makes the difference. You see, the limit of a ratio is not necessarily the ratio of the limits, and that is one example of why the interaction is considered to be inside the limit. This limit is hidden or left out in the shorthand notation that Leibniz invented.
The simple fact is that most of calculus notation is a shorthand representation of something else. This shorthand allows us to calculate things more quickly, and it looks cleaner than what it actually represents. The problem comes in when people expect the notation to behave like the actual mathematics, which it cannot, because it is only a representation of that mathematics.
So, in order to see the underlying properties of calculus, we always have to convert the notation to its actual mathematical form and then analyze it from there. Then, by memorizing basic properties and combining them, we can derive even more properties.
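A small numeric illustration of the point that the limit of a ratio is not the ratio of the limits: for $f(x) = x^2$ at $x = 3$, both $\Delta y$ and $\Delta x$ tend to $0$ (so the "ratio of the limits" would be the meaningless $0/0$), yet the limit of the ratio exists and equals $f'(3) = 6$.

```python
# Difference quotients for f(x) = x**2 at x = 3 as the step h shrinks:
# numerator and denominator each tend to 0, but their ratio tends to 6.
def f(x):
    return x * x

x = 3.0
for h in (1.0, 0.1, 0.001, 1e-6):
    delta_y = f(x + h) - f(x)
    print(f"h={h:g}: delta_y={delta_y:.8g}, ratio={delta_y / h:.8g}")
# The ratios approach 6 while delta_y and h each approach 0.
```

This is exactly the distinction hidden by the shorthand: $\frac{dy}{dx}$ names the limit of the whole quotient, not a quotient of two separately limiting quantities.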
$endgroup$
answered Jan 20 at 23:20
Gustav
protected by davidlowryduda♦ Mar 23 '14 at 18:41