
definition of derivative



2 hours ago, Eise said:

<NitPickerMode>

Newton was basically an alchemist and theologian.

Debatable; one would need to know how much time and effort he put into the different disciplines (which I don't).

A difference with Newton is that Leibniz stood fully within the academic disciplines of his day, whereas Newton, e.g. in his theology, would have been seen as a heretic (and therefore Newton decided not to publish his theological writings). In his theological studies Newton, for example, denied the Trinity, based on textual criticism. Modern New Testament scholars agree with his conclusions.

</NitPickerMode>

I know that both the terms Scientist and Physicist can be traced back to ancient Greek, but I don't think either was actually in use in Newton's day.

Newton dabbled in many waters we would separate today.

However both you and wtf will be glad that I have been to Al's Chemist and obtained a new can of nit spray.  +1 each

:)

6 hours ago, wtf said:

That'll keep him busy.

Yeah I'm having trouble with Google translate from Zummerzett to English at the moment.


I also got differing definitions of derivatives in high school and university.
But, being in Physics, and not needing that stubborn trait of Mathematicians to define and re-define everything, I simply use them as a tool.
(I don't define a hammer when I bang in a nail.)


On 10/7/2018 at 11:10 AM, taeto said:

If your math book suggests computing the derivative of y as the limit of the difference quotient of f, without saying first that y = f(x), then it makes sense for you to throw it in the recycling bin.

Maybe it can. But this is analysis, a particular branch of math that deals with functions of real and complex numbers. If you ask whether it makes sense to have a real number d with the property 0 < d < x for all real numbers x, then the answer is that such a d cannot exist, because, e.g., d/2 satisfies 0 < d/2 < d in R, which contradicts that d has the required property.

1/∞ is not a real number. Does it make sense to have 1/∞ with the property 0 ≤ 1/∞ < x for all real numbers x?

On 10/7/2018 at 11:10 AM, taeto said:

If this is not the property that you would want an "infinitely small" number to have, then what is the property that you are thinking of? Will you evade answering that question?

I am not the one who is evading answering questions.

On 10/7/2018 at 11:10 AM, taeto said:

Just writing up some string of symbols like 1/∞ doesn't always point to something that makes any sense. I ask again, what does it mean to you? Apparently you can make sense of it, since you keep going on about it. Why don't you answer the question: "what does it mean?"?

I already told you what it means: it is neither zero nor non-zero. Because it is both. You evaded answering my question.

On 10/7/2018 at 11:10 AM, taeto said:

Now it looks like you use "finite" to mean "non-zero".

no, I don't use finite to mean non-zero. 1/∞ is not finite.

On 10/7/2018 at 11:10 AM, taeto said:

In the context of analysis, distance is given by the Euclidean metric, and then the only distance that is not "finite" is identically 0. You end up with f(x+dx) - f(x) = 0 always, independently of f and x.

You are ignoring an infinitely small distance dx, and it is not necessarily 0. Otherwise you end up with f´(x) = dy/dx = 0/0

On 10/7/2018 at 11:10 AM, taeto said:

The most positive I can say would be something like just forget about the actual meaning of dx .

That would not be a very good idea.

On 10/7/2018 at 11:10 AM, taeto said:

In ordinary usage, the functions x and dx belong to different species, they do not allow to be composed together by the binary operation of addition. It is something like trying to add a scalar to a 2-dimensional vector. 

are you trying to tell me that I can't perform the addition x + dx?

On 10/7/2018 at 11:10 AM, taeto said:

The most you could do is to use your expression as a notational shorthand, which substitutes dx for h and removes the need to write the lim symbol.

I don't know why it should be only a notational shorthand without meaning. It sounds like forgetting or ignoring the problem.

On 10/7/2018 at 11:10 AM, taeto said:

Maybe you can think of any advantages in doing so.

no, I can't think of any advantages in doing so.

 


113 said:

Quote

In ordinary usage, the functions x and dx belong to different species, they do not allow to be composed together by the binary operation of addition. It is something like trying to add a scalar to a 2-dimensional vector. 

are you trying to tell me that I can't perform the addition x + dx?

No, I think we all understand what taeto was trying to say, but English is not his first language and this did not come out very well.

 

On 12/10/2018 at 7:05 AM, 113 said:

Alright:

How exact is your "If dx is a very small quantity then (dx)^2 is insignificant and may be ignored so we have"?

I thought that you would understand that my sentence continued below.

so does it tell exactly how small dx is?

I was quite taken aback by this flippant response to my attempt to help.


2 hours ago, 113 said:

are you trying to tell me that I can't perform the addition x + dx?

If you want to do that, you will have to assign some meaning to such a sum, because there is no a priori meaning attached to it.

Maybe a better analogy would be to think of two functions f and g that have disjoint domains.


4 hours ago, 113 said:

1/∞ is not a real number. Does it make sense to have 1/∞ with the property 0 ≤ 1/∞ < x for all real numbers x?

Technically, it does.

 

You can define the formal "number" infinity, adjoin it to the real numbers, and use the resulting field. 

 

The problem with using this for calculus is that you then have to define your functions to include those new points. What is sin(1/infinity)? What is e^(1/infinity)? Alternatively, can you define the function f such that f(x) = 0 unless x is positive and smaller than any positive real number, in which case f(x) = 1?

 

There are a couple ways around that. One has already been discussed - using nonstandard analysis, in which case the above f doesn't make sense (kind of). None of them are especially useful, especially to undergrads.


3 hours ago, uncool said:

You can define the formal "number" infinity, adjoin it to the real numbers, and use the resulting field. 

I would be curious to see a reference for that. What does a "resulting field" look like?

Sure you could join \(i\) to \(\mathbb{R}\) and get \(\mathbb{C}\), and just give the name "\(\infty\)" to \(i\). But here he wants to preserve the order relation from the real numbers, and that is one thing that will be a little harder to do.

Edited by taeto

19 hours ago, 113 said:

1/∞ is not a real number. Does it make sense to have 1/∞ with the property 0 ≤ 1/∞ < x for all real numbers x?

It does make sense. You explicitly allow \(1/\infty = 0\), but that would make it a real number. Alternatively it seems fine to have a non-real \(\epsilon > 0\) that is less than every positive real. That would mean an extension to the usual linear ordering of \(\mathbb{R}\). But if you want to do calculations using such a thing, you have to explain what the rules are: what is going to be the meaning of \(x+\epsilon\) and \(\epsilon x\), and things like that.
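
Purely as an illustration, and not necessarily the construction anybody here has in mind: one standard way to get such rules is to treat \(\epsilon\) as a formal indeterminate and work in the field \(\mathbb{R}(\epsilon)\) of rational functions in \(\epsilon\), calling an element positive when the lowest-order coefficient of its expansion around \(\epsilon = 0\) is positive. Then \(x+\epsilon\) and \(\epsilon x\) are ordinary polynomials, all the field axioms hold, and

[math]\epsilon > 0, \qquad x - \epsilon > 0 \ \text{ for every real } x > 0, \qquad \text{hence } 0 < \epsilon < x,[/math]

so \(\epsilon\) really is a positive element below every positive real. What this construction does not give is \(\epsilon^2 = 0\): there one has \(0 < \epsilon^2 < \epsilon\), so the square is smaller still, but not zero.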

20 hours ago, 113 said:

I am not the one who is evading answering questions.

Great. So we will get to see how you use your formula to calculate the derivative at \(x=0\) of the function \(f\) given by \(f(x)=x^2\) for every \(x\in \mathbb{R}\) like I asked before? Why not show us how it actually works? Show how it is meaningful.


16 hours ago, uncool said:

Technically, it does. You can define the formal "number" infinity, adjoin it to the real numbers, and use the resulting field. The problem with using this for calculus is that you then have to define your functions to include those new points. What is sin(1/infinity)? What is e^(1/infinity)?

Using \(  dx = 1/\infty  \)

\( sin(x) = x - x^3/3! + x^5/5! - ... \)

\( sin(dx) = dx - {(dx)}^3/3! + {(dx)}^5/5! - ...  = dx \)

\( e^x = 1 + x + x^2/2! + x^3/3! + ...\)

\( e^{dx} = 1 + dx + {(dx)}^2/2! + {(dx)}^3/3! + ... = 1 + dx\)

because \( {(dx)}^2 \) and higher powers of dx are equal to 0
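
Purely as a side illustration (not something anyone in this thread wrote): the rule "\((dx)^2\) and higher powers are 0" is exactly the arithmetic of the dual numbers, and a small sketch of that arithmetic reproduces the two series manipulations above. The names Dual, sin_dual and exp_dual are invented for the sketch.

[code]
# A minimal sketch of dual-number arithmetic a + b*eps with eps**2 == 0,
# i.e. the rule that (dx)^2 and higher powers are dropped.
import math

class Dual:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b                      # represents a + b*eps

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, because eps**2 == 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    __rmul__ = __mul__

    def __repr__(self):
        return f"{self.a} + {self.b}*eps"

def sin_dual(x):
    # truncated series: sin(a + b*eps) = sin(a) + b*cos(a)*eps
    return Dual(math.sin(x.a), x.b * math.cos(x.a))

def exp_dual(x):
    # truncated series: exp(a + b*eps) = exp(a) + b*exp(a)*eps
    e = math.exp(x.a)
    return Dual(e, x.b * e)

dx = Dual(0.0, 1.0)        # the "infinitesimal" itself
print(sin_dual(dx))        # 0.0 + 1.0*eps   -> sin(dx) = dx
print(exp_dual(dx))        # 1.0 + 1.0*eps   -> e^dx = 1 + dx
print(Dual(3.0) + dx)      # 3.0 + 1.0*eps   -> x + dx is a perfectly good element here
[/code]

One caveat of that system: dx has no multiplicative inverse in it (it is a zero divisor), so "1/dx = ∞" would be an extra assumption on top of \((dx)^2 = 0\), not a consequence of it.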

 

43 minutes ago, taeto said:

Great. So we will get to see how you use your formula to calculate the derivative at x=0 of the function f given by \(f(x)=x^2\) for every \(x\in \mathbb{R}\) like I asked before? Why not show us how it actually works? Show how it is meaningful.

studiot already showed how to do it. Since he said that I do not use terminology exactly, I asked how small dx is so that (dx)^2 can be ignored, but he said that I must look for an answer in some book. I have said that dx = 1/∞ exactly, so that (dx)^2 can be ignored. Am I more exact than his book?

Edited by 113

3 hours ago, 113 said:

studiot already showed how to do it. Since he said that I do not use terminology exactly, I asked how small dx is so that (dx)^2 can be ignored, but he said that I must look for an answer in some book. I have said that dx = 1/∞ exactly, so that (dx)^2 can be ignored. Am I more exact than his book?

 

How is this an appropriate response to my comment?

I actually gave an example of where you are misusing terminology

 

On 11/10/2018 at 5:15 PM, studiot said:

113 is not using terminology exactly for instance

 

On 06/10/2018 at 5:32 PM, 113 said:

Derivative is defined as the limit of a finite difference

Clearly misusing the term finite difference, which is a term concerning real numbers.

I think 113 meant 'non zero difference' here.

 

Do you know what a finite difference is?

I'm sorry you are so contemptuous of books.

 

Here is an example of a modern definition of a derivative, from the book by Kantarovich, showing the ideas I have suggested and which you appear to have rejected.

[Image: derivative1.jpg — scanned definition of the derivative from Kantarovich's book]


7 hours ago, 113 said:

studiot already showed how to do it. Since he said that I do not use terminology exactly, I asked how small dx is so that (dx)^2 can be ignored, but he said that I must look for an answer in some book. I have said that dx = 1/∞ exactly, so that (dx)^2 can be ignored. Am I more exact than his book?

Again: to say "\(dx=1/\infty\)" does not make sense unless you explain it. Why is it, actually, that you think \(1/\infty^2\) can be "ignored" and \(1/\infty\) cannot be ignored? They look pretty much the same to me.

What studiot did was this. He replaced the \(h\) in the usual definition by your \(dx\), and then did what one does when using the classical definition, namely calculate the expression and calculate the limit as \(h \to 0\) (respectively \(dx\to 0\) ). The \((dx)^2\) bit means that when you divide \(h^2\) by \(h\), then you get \(h\), and \(h \to 0\), i.e. "can be ignored".

However, it seems that you cannot ignore the extra term \(dx\), and it will remain a part of the answer. And you tell us that it is not zero. According to your thinking, the derivative is not zero.

Another thing is that studiot correctly assumed that the \(h\) (which he called \(dx\) to humour you) is a real number, so you can calculate with it using normal arithmetic in the field \( (\mathbb{R},+,\cdot) \). If you want to do arithmetic with things that are not real, which is what you say, then you have to build a lot more theory before you can perform the similar sequence of calculations.
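
A purely illustrative sketch of that classical computation, using sympy, for the example \(f(x)=x^2\) asked about above; it is not anybody's "official" method, just the algebra spelled out:

[code]
# Sketch of the classical computation for f(x) = x**2:
# form the difference quotient, simplify, then take the limit h -> 0.
import sympy as sp

x, h = sp.symbols('x h')
f = x**2

quotient = sp.expand((f.subs(x, x + h) - f) / h)
print(quotient)                              # 2*x + h : the leftover h is the h**2/h term
print(sp.limit(quotient, h, 0))              # 2*x     : the leftover term vanishes in the limit
print(sp.limit(quotient, h, 0).subs(x, 0))   # 0       : the derivative at x = 0
[/code]

The leftover \(h\) is exactly the "\((dx)^2\) divided by \(dx\)" term: it is an ordinary nonzero real number right up until the limit is taken, and it is the limit, not any act of "ignoring", that removes it.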


51 minutes ago, taeto said:

Again: to say "\(dx=1/\infty\)" does not make sense unless you explain it. Why is it, actually, that you think \(1/\infty^2\) can be "ignored" and \(1/\infty\) cannot be ignored? They look pretty much the same to me.

What studiot did was this. He replaced the \(h\) in the usual definition by your \(dx\), and then did what one does when using the classical definition, namely calculate the expression and calculate the limit as \(h \to 0\) (respectively \(dx\to 0\)). The \((dx)^2\) bit means that when you divide \(h^2\) by \(h\), then you get \(h\), and \(h \to 0\), i.e. "can be ignored".

However, it seems that you cannot ignore the extra term \(dx\), and it will remain a part of the answer. And you tell us that it is not zero. According to your thinking, the derivative is not zero.

Another thing is that studiot correctly assumed that the \(h\) (which he called \(dx\) to humour you) is a real number, so you can calculate with it using normal arithmetic in the field \( (\mathbb{R},+,\cdot) \). If you want to do arithmetic with things that are not real, which is what you say, then you have to build a lot more theory before you can perform the similar sequence of calculations.

 

Yes I was offering the simplest intro that I know.

Here is the version from Abbott.

 

Taeto, have you seen either Ferrar or Hobson on this subject?

[Image: derivative2.jpg — scanned definition of the derivative from Abbott's book]


5 hours ago, studiot said:

Yes I was offering the simplest intro that I know.

Here is the version from Abbott.

Taeto, have you seen either Ferrar or Hobson on this subject?

      It seemed that you were basically explaining the standard approach to the OP, and doing it quite well at that. But seeing that he seems to have a starting point of objecting to the classical approach, it is maybe not an effective strategy to bring it in like that.

     Abbott is friendly and careful. It is clear that the \(\delta x\) and \(\delta y\) can be any numbers, however large, as opposed to the setting imagined by the OP.

     I have not come across Ferrar or Hobson. I have learned some analysis from Apostol and Rudin, and I have lectured analysis from Bartle. Could be that I am simply too old school.


Hobson was the grandaddy of them all.

My 3rd Ed Vol 1 is dated 1927 and my 2nd ed Vol2 is dated 1926

The Theory of Functions of a Real Variable and the Theory of Fourier Series

Vols 1 and 2 E W Hobson

Cambridge University Press.

 

Ferrar was a contemporary of / overlapped with Hardy at Oxford

He wrote a number of thoughtful analysis textbooks including

Differential Calculus

Integral Calculus

A Textbook of Convergence

 

All Oxford University Press

Edited by studiot

On 10/16/2018 at 1:59 AM, 113 said:

Using \( dx = 1/\infty \)

\( sin(x) = x - x^3/3! + x^5/5! - ... \)

\( sin(dx) = dx - {(dx)}^3/3! + {(dx)}^5/5! - ... = dx \)

\( e^x = 1 + x + x^2/2! + x^3/3! + ...\)

\( e^{dx} = 1 + dx + {(dx)}^2/2! + {(dx)}^3/3! + ... = 1 + dx\)

because \( {(dx)}^2 \) and higher powers of dx are equal to 0

How did you get those series in the first place? They usually would come from calculus, but you are trying to define calculus in the first place.

On 10/15/2018 at 1:51 PM, taeto said:

I would be curious to see a reference for that. What does a "resulting field" look like?

Sure you could join \(i\) to \(\mathbb{R}\) and get \(\mathbb{C}\), and just give the name "\(\infty\)" to \(i\). But here he wants to preserve the order relation from the real numbers, and that is one thing that will be a little harder to do.

It would look a lot like (be isomorphic to) the field of rational functions on R, with order given by "eventual" behavior/leading coefficient.
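
A rough illustrative sketch of that ordering, using sympy: compare rational functions in a variable t by their eventual behaviour as t goes to infinity, read off from leading coefficients. The helpers is_positive and less_than are invented for the sketch.

[code]
# Sketch of the order described above: rational functions in t, where
# r1 < r2 means r2 - r1 is eventually positive as t -> oo (leading coefficients).
import sympy as sp

t = sp.symbols('t')

def is_positive(r):
    """Eventually positive as t -> oo: the leading coefficients of the
    numerator and the denominator have the same sign."""
    num, den = sp.fraction(sp.cancel(r))
    return sp.Poly(num, t).LC() * sp.Poly(den, t).LC() > 0

def less_than(r1, r2):
    return is_positive(r2 - r1)

eps = 1 / t                                   # plays the role of 1/infinity
print(less_than(0, eps))                      # True: 0 < eps
print(less_than(eps, sp.Rational(1, 10**9)))  # True: eps is below every positive real constant
print(less_than(eps**2, eps))                 # True: eps**2 < eps, so they are not interchangeable
[/code]

In this field 1/t behaves like a positive infinitesimal, and (1/t)^2 is strictly smaller still but not zero, which is where it parts company with the "(dx)^2 = 0" rule used earlier in the thread.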


On 10/16/2018 at 5:26 PM, taeto said:

Again: to say "\(dx=1/\infty\)" does not make sense unless you explain it. Why is it, actually, that you think \(1/\infty^2\) can be "ignored" and \(1/\infty\) cannot be ignored? They look pretty much the same to me.

You are saying that dx and \((dx)^2\) are the same. In that case we would have \(\frac{dx}{(dx)^2} = 1 \)

Instead what we have is  \(\frac{dx}{(dx)^2} = \frac{1}{dx} = \infty \)

Quote

What studiot did was this. He replaced the \(h\) in the usual definition by your \(dx\), and then did what one does when using the classical definition, namely calculate the expression and calculate the limit as \(h \to 0\) (respectively \(dx\to 0\)). The \((dx)^2\) bit means that when you divide \(h^2\) by \(h\), then you get \(h\), and \(h \to 0\), i.e. "can be ignored".

He did not calculate the limit; he just stated that "If dx is a very small quantity then (dx)^2 is insignificant and may be ignored". When I asked him to tell exactly how small dx must be so that (dx)^2 can be ignored, he told me to look for an answer in a book.

Quote

However, it seems that you cannot ignore the extra term \(dx\), and it will remain a part of the answer. And you tell us that it is not zero. According to your thinking, the derivative is not zero.

No, the extra term dx will not remain part of the answer, precisely because of what studiot said: "If dx is a very small quantity then (dx)^2 is insignificant and may be ignored".

It would remain part of the answer only if it did not satisfy what studiot said, if it were not small enough. All that I am asking is for him to explain exactly how small it must be. I have said that dx is exactly 1/∞, which means that it is infinitely small; only in that case can (dx)^2 be ignored.

 

 

Edited by 113

6 hours ago, 113 said:

You are saying that dx and \((dx)^2\) are the same. In that case we would have \(\frac{dx}{(dx)^2} = 1\)

Instead what we have is \(\frac{dx}{(dx)^2} = \frac{1}{dx} = \infty\)

Are you sure that you can divide by \((dx)^2\)? Do we not have to ignore \((dx)^2\) before we continue?

6 hours ago, 113 said:

He did not calculate the limit; he just stated that "If dx is a very small quantity then (dx)^2 is insignificant and may be ignored". When I asked him to tell exactly how small dx must be so that (dx)^2 can be ignored, he told me to look for an answer in a book.

This is precisely the reason why we can see that studiot was doing the classical computation. When he divides his version of \((dx)^2\) by \(dx\) he will end up with a term \(dx\) the limit of which is \(0\), meaning that as an additive term it can be "ignored", that is, already considered to add \(0\) to the result. However, when you divide \((dx)^2\) by \(dx\) you get \(dx\), a term which you insist on being nonzero, hence it cannot be ignored in the same sense.

6 hours ago, 113 said:

It would remain part of the answer only if it did not satisfy what studiot said, if it were not small enough. All that I am asking is for him to explain exactly how small it must be. I have said that dx is exactly 1/∞, which means that it is infinitely small; only in that case can (dx)^2 be ignored.

Apparently studiot used "ignoring" to mean that we can replace additive terms which have limit equal to \(0\) by \(0\) itself when we take a limit. This is a very precise notion. But you seem to have a different idea of what it means to "ignore". I.e. if we do calculations your style, do we get \(1 = \frac{dx}{dx} = \frac{(dx)^2}{(dx)^2} = \frac{0}{0}\) because we can "ignore" \((dx)^2\)? 

Edited by taeto

6 hours ago, 113 said:

I have said that dx is exactly 1/∞

Whoever told you this was just plain wrong.

There are several different approaches to differential calculus and not one of them claims this.

Quote

1/∞ is not finite.

Is it not?

Why not? What is your definition of finite?

Mine is that [math]\xi [/math] is finite if there exists a number w such that [math]\left| \xi  \right|[/math] < w.

Is this condition not met?

 

Quote

I am talking about finite difference f(x+h) - f(x) where h is finite

You did get this one right, although it is not the usual presentation of a finite difference or finite differences since it doesn't work for the finite difference  [math]\Delta x[/math]

 

Did I mention that there are several approaches to differential calculus?

I think you have got these and also the terminology muddled up.

Before your questions can be answered some misconceptions need to be put right.

An infinitesimal   [math]\varsigma [/math]  is defined as a quantity such that for any positive e, however small,


[math]\left| \varsigma  \right|[/math]   < e

 

The quantities   [math]\delta x[/math]  and  [math]\delta y[/math]

are infinitesimals. (Newton called them fluxions, not infinitesimals)

The quantities dx and dy are not.

 

You also need to distinguish between

Derivative, derived function, differential coefficient, finite difference (which is not h), finite and infinite.

Armed with proper definitions of all these you will be able to sort out your questions, including how we can assign (dx)^2 to zero.

 

On the first page here I stayed up till after midnight trying to push the discussion along and taeto must have stayed up even longer as he lives in a timezone an hour ahead of me.

Then I very quickly posted the simplest explanation, trying not to disrupt your incorrect symbolism too much, and said where you could look up more before I could come back with more detail here.

I consider your response to that very rude.

However the average man on the street would not be likely to study this subject so I assume you must have some reason.

I don't care where you are starting from, but no one can offer proper help and discussion without that knowledge.

 

 

 

 


5 hours ago, studiot said:

 

The quantities   δx   and  δy

are infinitesimals. (Newton called them fluxions, not infinitesimals)

 

 

 

Studiot you are most definitely wrong about that. Since there are no infinitesimals in the real numbers, how could that possibly make sense? dx and dy are differential forms. They are not infinitesimals in the modern view. Nor were they ever. Nor did Newton think they were, although the historical evidence for that proposition can be argued. But mathematically, dy and dx are not infinitesimals.

ps -- Sorry what? What is δ? I must be misunderstanding you. Off my game. (THAT WAS A JOKE!! FROM NOW ON I WILL CLEARLY NOTE MY JOKES AS SUCH)

Edited by wtf

20 hours ago, taeto said:

Are you sure that you can divide by \((dx)^2\)? Do we not have to ignore \((dx)^2\) before we continue?

I am not sure if I can divide by (dx)^2. But I am not sure about you. Maybe you can ignore (dx)^2 before you continue, but not me.

Quote

This is precisely the reason why we can see that studiot was doing the classical computation. When he divides his version of \((dx)^2\) by \(dx\) he will end up with a term \(dx\) the limit of which is \(0\),

He did not take the limit.

Quote

meaning that as an additive term it can be "ignored", that is, already considered to add \(0\) to the result. However, when you divide \((dx)^2\) by \(dx\) you get \(dx\), a term which you insist on being nonzero, hence it cannot be ignored in the same sense.

but you said that I am doing calculations in "my style", so that I should not end up with an extra term dx because I am ignoring (dx)^2. So why are you now telling me that I am ending up with the extra term dx? Am I doing calculations in "my style" or am I not?

Quote

Apparently studiot used "ignoring" to mean that we can replace additive terms which have limit equal to 0 by 0 itself when we take a limit.

He did not take the limit.

Quote

This is a very precise notion.

Perhaps if he had taken the limit.

Quote

But you seem to have a different idea of what it means to "ignore". I.e. if we do calculations your style, do we get \(1 = \frac{dx}{dx} = \frac{(dx)^2}{(dx)^2} = \frac{0}{0}\) because we can "ignore" \((dx)^2\)?

That more or less looks like doing calculations in your style, not mine.

Edited by 113

18 hours ago, wtf said:

Studiot you are most definitely wrong about that. Since there are no infinitesimals in the real numbers, how could that possibly make sense? dx and dy are differential forms. They are not infinitesimals in the modern view. Nor were they ever. Nor did Newton think they were, although the historical evidence for that proposition can be argued. But mathematically, dy and dx are not infinitesimals.

ps -- Sorry what? What is δ? I must be misunderstanding you. Off my game. (THAT WAS A JOKE!! FROM NOW ON I WILL CLEARLY NOTE MY JOKES AS SUCH)

 

 

Am I?


22 hours ago, studiot said:

 

The quantities   δx   and  δy

are infinitesimals. (Newton called them fluxions, not infinitesimals)

The quantities dx and dy are not.

It would help if you quoted a complete statement.

 

I most definitely said that dx and dy are not infinitesimals.

What is your definition of an infinitesimal by the way?

As to Newton and his fluxions, he published a book about them.

Quote
Method of Fluxions is a book by Isaac Newton. The book was completed in 1671, and published in 1736. Fluxion is Newton's term for a derivative.
Publisher: Henry Woodfall
Pages: 339
Publication date: 1736

 

The important point about making the distinction is that the dy and dx in


[math]\frac{{dy}}{{dx}}[/math]

are considered inseparable (in mathematics), and maths teachers take great pains to emphasise this.

Elementary teaching clearly states that

[math]\frac{{dy}}{{dx}}[/math]

should be considered as a single object.

This is partly why some authorities eschew that notation in favour of the f'(x) notation which is clearly not a ratio or fraction of anything.

Treating them as separate entities is a Physicsy thing, as I was accused of earlier.

Differential forms are different again, and mostly used in Physics based subjects.

 

On the other hand the infinitesimals have long been treated as separable, but they are out of fashion as I said, except with Engineers who have kept their flag flying through various changes of approach.

 

 

2 hours ago, 113 said:

That more or less looks like doing calculations in your style, not mine.

 

The whole point I have consistently been making about this subject is that there are several approaches, 'styles' if you wish, and that it is important to be consistent and not mix them up.

 

I am perfectly happy to explain the whys and wherefores of all this, when you are ready.

 

 

 

Edited by studiot

7 hours ago, 113 said:

I am not sure if I can divide by (dx)^2. But I am not sure about you. Maybe you can ignore (dx)^2 before you continue, but not me.

He did not take the limit.

but you said that I am doing calculations in "my style", so that I should not end up with an extra term dx because I am ignoring (dx)^2. So why are you now telling me that I am ending up with the extra term dx? Am I doing calculations in "my style" or am I not?

He did not take the limit.

Perhaps if he had taken the limit.

That more or less looks like doing calculations in your style, not mine.

I had thought that you were attached to the idea that you may "ignore" \(dx^2\) at any time before you finish the calculation. Happy to see that is not the case. You are "ignoring" terms only when doing so will lead you to the correct result?

You have not volunteered much motivation to show that your "definition" of a derivative is somehow more practical than the standard definition. You refer to studiot's calculation, in which he uses a standard limit argument.

How about another, different case to show how your suggestion may be the superior one.

Now let g be the function for which g(x) = x if x is real, and g(x) = 0 if x is not real. Of course in real analysis you only consider domains that are subsets of the set of real numbers; accordingly you would compute a derivative g'(0) = 1 at x = 0, if x = 0 is contained in an open interval of the domain of g. But that would be by the classical definition of derivative. What would your answer be, and how would you arrive at it?

Edited by taeto

57 minutes ago, taeto said:

Now let g be the function for which g(x) = x if x is real, and g(x) = 0 if x is not real. Of course in real analysis you only consider domains that are subsets of the set of real numbers; accordingly you would compute a derivative g'(0) = 1 at x = 0, if x = 0 is contained in an open interval of the domain of g. But that would be by the classical definition of derivative. What would your answer be, and how would you arrive at it?

It is helpful to think about the (3 dimensional) plot of this function of taeto's.

Geometrical interpretations used to be very fashionable and then lost out to algebra.

But they are coming back into fashion, for instance with 'horn functions' .

 

Taeto's example also demonstrates another important matter.

A derivative is a value - the value of the derived function at a particular point.

 

Geometric interpretations are also helpful when dealing with infinitesimals.

You can't really divide by (dx)^2, but you can divide by [math]{\left( {\delta x} \right)^2}[/math], following the rules for the Orders of small quantities.

These, along with geometry, also provide meaning for functions and their derived functions when they have associated units.

For instance, if y is area and x is length, what do you think

[math]\frac{{dy}}{{dx}}[/math]

means, and what are its units?

 


10 hours ago, studiot said:

 

 

 

I most definitely said that dx and dy are not infinitesimals.

What is your definition of an infinitesimal by the way?

As to Newton and his fluxions, he published a book about them.

 

The important point about making the distinction is that the dy and dx in


[math]\frac{{dy}}{{dx}}[/math]

are considered inseparable (in mathematics), and maths teachers take great pains to emphasise this.

 

 

Studiot, I was very confused by your post.

First, yes dx and dy are not infinitesimals. I misread that part of your post.

But you said that "The quantities δx and δy are infinitesimals. (Newton called them fluxions, not infinitesimals)"

I have two problems here. One, what are δx and δy? I looked back through the thread and could not find that notation defined. Clarify please?

Second, Newton called the derivative a fluxion. dx and dy aren't fluxions. The limit of delta-y over delta-x is the fluxion. Of course Newton didn't have the formal concept of limit, but his intuition was pretty close.

Then you tried to argue that Newton wrote a book on fluxions. Um, yeah, he did. What does that have to do with what we're talking about? What we call the derivative, Newton called a fluxion. Neither derivatives nor fluxions are infinitesimal.

Finally, Newton tried several different approaches to clarifying what he meant by (what we now call) the limit of the difference quotient. He did NOT really espouse infinitesimals in the same way Leibniz did. That's the part that is historically arguable -- what Newton thought about infinitesimals. 

To be clear:

* Fluxions are derivatives, not infinitesimals. (And fluents are integrals).

* Newton didn't really use infinitesimals as such in the strong way Leibniz did. 

* Newton wrote books. But fluxions aren't infinitesimals. Nor did Newton think about dy and dx as infinitesimals. Not (as I understand it) in as explicit a way as Leibniz did.

Edited by wtf
