
definition of derivative


113


 

26 minutes ago, wtf said:

Studiot, I was very confused by your post.

First, yes dx and dy are not infinitesimals. I misread that part of your post.

But you said that "The quantities δx and δy are infinitesimals. (Newton called them fluxions, not infinitesimals)"

I have two problems here. One, what are δx and δy? I looked back through the thread and could not find that notation defined. Clarify please?

Second, Newton called the derivative a fluxion. dx and dy aren't fluxions. The limit of delta-y over delta-x is the fluxion. Of course Newton didn't have the formal concept of limit but his intuition was pretty close.

Then you tried to argue that Newton wrote a book on fluxions. Um, yeah, he did. What does that have to do with what we're talking about? What we call the derivative, Newton called a fluxion. Neither derivatives nor fluxions are infinitesimal.

Finally, Newton tried several different approaches to clarifying what he meant by (what we now call) the limit of the difference quotient. He did NOT really espouse infinitesimals in the same way Leibniz did. That's the part that is historically arguable -- what Newton thought about infinitesimals. 

To be clear:

* Fluxions are derivatives, not infinitesimals. (And fluents are integrals).

* Newton didn't really use infinitesimals as such in the strong way Leibniz did. 

* Newton wrote books. But fluxions aren't infinitesimals. Nor did Newton think about dy and dx as infinitesimals. Not (as I understand it) in as explicit a way as Leibniz did.

I have never tried to read Leibniz.

What do you understand by evanescent increments?

https://evanescentincrements.wordpress.com/about/

Quote

Evanescent Increments

What are evanescent increments?

Part of the beauty of Calculus is– I don’t really know.  Not tangible, supremely debatable– Newton named them as a measure of the ratio of fluxions.

 

Maybe I'm wrong about some detail but

I don't know where the attribution of the ratio being the fluxion has come from; my sources seem to clearly indicate that Newton considered [math]{\delta x}[/math] and [math]{\delta y}[/math] as fluxions.

 

 

 


28 minutes ago, studiot said:

 

I have never tried to read Leibniz.

What do you understand by evanescent increments?

https://evanescentincrements.wordpress.com/about/

 

Maybe I'm wrong about some detail but

I don't know where the attribution of the ratio being the fluxion has come from; my sources seem to clearly indicate that Newton considered δx and δy as fluxions.

 

 

 

Studiot you still haven't told me what the notation δy means. I don't know what you mean by that notation. 

Secondly I don't think we could have a sensible conversation about what Newton meant when he wrote something down in Latin that some historian translated as evanescent increments. We don't know what Newton was thinking. He was most likely thinking like a physicist. "It doesn't make mathematical sense but it lets me explain the apple falling on earth and the planets moving in the heavens by the same simple principles. So I'll just use it, and let the mathematicians try to sort it out for the next couple of centuries."

I do know that over his career, he explained his fluxions in several different ways. That shows he was well aware of the logical problem of a lack of rigorous foundation. However, my understanding is simply that whereas Leibniz was "Infinitesimals, dude!", Newton was more like "top and bottom close to zero, ultimate ratio is what I call the fluxion." 

But we're not historians of science. A lot of people have written a lot of books about every detail of Newton's thought. In the end all I want to know is what you mean by curly delta so I can have some idea what you're talking about. 

Um ... just realized this. Do you mean delta-x and delta-y? What's usually marked up as [math]\Delta[/math]x and [math]\Delta[/math]y?

Edited by wtf

2 minutes ago, wtf said:

Studiot you still haven't told me what the notation δy means. I don't know what you mean by that notation. 

I am waiting for your definition of an infinitesimal, that I asked for a while back.

I offered my best one.

2 minutes ago, wtf said:

Secondly I don't think we could have a sensible conversation about what Newton meant when he wrote something down in Latin that some historian translated as evanescent increments.

Going to a grammar school I did Latin. (That was the English definition of a grammar school)

evanescent increments was not translated by 'some historian'.

It was part of a very famous attack on Newton by the Church of his day.

 

7 minutes ago, wtf said:

He was most likely thinking like a physicist.

Yes I believe I said something similar in my first post.

7 minutes ago, wtf said:

Um ... just realized this. Do you mean delta-x and delta-y? What's usually written [math]\Delta[/math]?

I don't know if you mean  [math]\Delta [/math] ?

But this is connected to Newton thinking like a physicist (Which he was in all but name)

William Playfair, the credited inventor of line graphs, pie charts etc., was just being born when Newton had been in his grave twenty-odd years.
Newton and his contemporaries worked from tabulations.
Newton developed an advanced calculus of finite differences, characterised by the use of upper case delta to denote a finite difference.
These were fixed values and most decidedly not infinitesimal; they were (and still are) sometimes quite large in value.

Newton used these to fill in or interpolate gaps in his tables, but I thought you knew all this.

So it is not a great step from big(ish) differences to small differences characterised by lower case delta, and thence to differences as small as desired.

Later mathematicians extended this idea to the 'epsilon - delta' construction you will find in many modern higher level texts on analysis; again I'm sure you already know this.
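The tabular method described above can be sketched in modern terms. Below is a minimal, illustrative Python version of interpolation by forward differences, in the spirit of the Gregory-Newton formula mentioned later in the thread; the function names are made up for the sketch, not taken from any library.

```python
# Illustrative sketch of interpolation from a table by forward differences.
# Assumes equally spaced x-values; function names are invented for this sketch.

def forward_differences(ys):
    """Return the leading forward differences y0, Δy0, Δ²y0, ..."""
    leading = [ys[0]]
    while len(ys) > 1:
        ys = [b - a for a, b in zip(ys, ys[1:])]  # next difference row
        leading.append(ys[0])
    return leading

def gregory_newton(xs, ys, x):
    """Interpolate at x with the Gregory-Newton forward-difference formula."""
    h = xs[1] - xs[0]
    s = (x - xs[0]) / h
    total, coeff = 0.0, 1.0
    for k, d in enumerate(forward_differences(ys)):
        total += coeff * d
        coeff *= (s - k) / (k + 1)  # builds the binomial coefficient C(s, k+1)
    return total

# Tabulated values of y = x^2 at x = 0, 1, 2, 3; interpolate at x = 1.5.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [x * x for x in xs]
print(gregory_newton(xs, ys, 1.5))  # 2.25, exact for a quadratic
```

Note that all the differences here are finite, fixed numbers read off a table; nothing is "infinitesimal".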

 

 


21 minutes ago, studiot said:

I am waiting for your definition of an infinitesimal, that I asked for a while back.

I offered my best one.

Going to a grammar school I did Latin. (That was the English definition of a grammar school)

evanescent increments was not translated by 'some historian'.

It was part of a very famous attack on Newton by the Church of his day.

 

Yes I believe I said something similar in my first post.

I don't know if you mean  Δ ?

But this is connected to Newton thinking like a physicist (Which he was in all but name)

William Playfair, the credited inventor of line graphs, pie charts etc., was just being born when Newton had been in his grave twenty-odd years.
Newton and his contemporaries worked from tabulations.
Newton developed an advanced calculus of finite differences, characterised by the use of upper case delta to denote a finite difference.
These were fixed values and most decidedly not infinitesimal; they were (and still are) sometimes quite large in value.

Newton used these to fill in or interpolate gaps in his tables, but I thought you knew all this.

So it is not a great step from big(ish) differences to small differences characterised by lower case delta, and thence to differences as small as desired.

Later mathematicians extended this idea to the 'epsilon - delta' construction you will find in many modern higher level texts on analysis; again I'm sure you already know this.

 

 

> I don't know if you mean  Δ ?

Studiot what is the curly delta?? I've asked four times. I don't understand your notation.

If "evanescent increments" was written as criticism of Newton, you can hardly use it as evidence of what Newton himself thought.

In fact Newton's description of "ultimate ratio" sounds suspiciously like the modern definition of limit. As [math]\Delta[/math]x and  [math]\Delta[/math]y get "closer and closer" to zero, their ratio  [math]\frac{\Delta y}{\Delta x}[/math] reaches its ultimate ratio. That's the informal way of thinking of a modern limit.
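That informal picture can be sketched numerically (illustrative only, not any poster's method): for f(x) = x³ at x = 2, the difference quotient approaches the derivative 12 while the increment stays strictly positive.

```python
# Illustrative sketch of the "ultimate ratio": the difference quotient for
# f(x) = x^3 at x = 2 approaches the derivative 12 as dx shrinks, while
# dx itself is strictly positive at every stage.

def f(x):
    return x ** 3

x = 2.0
for dx in [1.0, 0.1, 0.01, 0.001]:
    dy = f(x + dx) - f(x)
    print(dx, dy / dx)  # the ratio tends to 12; dx is never zero
```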

This is very different from regarding [math]\Delta[/math]x and [math]\Delta[/math]y as ever being infinitesimal. On the contrary: at any time, they are NOT ZERO. They're strictly positive.

By the way an infinitesimal is a positive quantity that's strictly less than 1/n for every positive integer n. Sometimes it's less than or equal so that zero is regarded as the only real infinitesimal by some authors. 
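One consequence worth spelling out: by the Archimedean property of the reals, no positive real number satisfies that definition. A small illustrative sketch (the helper name is made up):

```python
# Illustrative sketch (helper name invented): the Archimedean property of
# the reals says that for any real eps > 0 there is a positive integer n
# with 1/n < eps -- so no positive *real* number meets the definition of
# an infinitesimal given above.
import math

def archimedean_witness(eps):
    """Return a positive integer n with 1/n < eps."""
    return math.floor(1 / eps) + 1

for eps in [0.5, 0.3, 0.007]:
    n = archimedean_witness(eps)
    assert 1 / n < eps
    print(eps, n)
```

Genuine infinitesimals therefore only live in extended systems such as the hyperreals, not in the standard reals.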

 


15 minutes ago, wtf said:

By the way an infinitesimal is a positive quantity that's strictly less than 1/n for every positive integer n. Sometimes it's less than or equal so that zero is regarded as the only real infinitesimal by some authors. 

So your definition means a fixed value then?

 

That is the only material difference from mine.

 

Since it is fixed, how can an infinitesimal tend to zero or just get smaller and smaller?


4 minutes ago, studiot said:

So your definition means a fixed value then?

 

That is the only material difference from mine.

 

Since it is fixed, how can an infinitesimal tend to zero or just get smaller and smaller?

Why won't you just explain the notational question I asked you several times? I don't get it. 

An infinitesimal doesn't "tend to zero." An infinitesimal is a particular number in the hyperreal number system. Infinitesimals don't get smaller and smaller.

I'm getting the feeling you have an engineering understanding of what limits are. I only say that because you have a far greater grasp on engineering math than I do, but you seem to think that limits are infinitesimals. Am I getting this wrong? 

Limits are not infinitesimals. Limits made infinitesimals unnecessary. Limits caused infinitesimals to be banished from math [till their recent resurgence]. 

If by curly delta you mean capital delta, then delta-x and delta-y are not infinitesimals. Not in Newton's thought and not in modern thought. They're each strictly positive quantities.

ps -- Re your comment that you know Latin. I can't top that. Should I just quit now?

 

Edited by wtf

1 minute ago, wtf said:

Limits caused infinitesimals to be banished from math [till their recent resurgence].

So someone was wrong?

15 minutes ago, wtf said:

If by curly delta you mean capital delta, then delta-x and delta-y are not infinitesimals. Not in Newton's thought and not in modern thought. They're each strictly positive quantities.

Upper case delta, (followed by a referent) is the difference between two fixed values (probably in a table) of that referent.

So     [math]\Delta x[/math] means [math]\left( x_n - x_{n-1} \right)[/math]

all three are fixed or constant.

 

Lower case delta followed by a referent means an arbitrarily small increment in that referent.

To be arbitrarily small it must be a variable.

As a variable it is an increment in the referent, which is also a variable.

So [math]\delta x[/math] is an arbitrarily small increment in the variable x.

 

 

To create the limiting process you refer to it is only necessary to append the following improper numbers to define a derivative

[math] + \infty [/math]

[math] - \infty [/math]

Cantor was the first to operate this way (Math. Annalen 1872, p. 128), a very long time after Newton.

 

 

 


2 hours ago, studiot said:

 

To create the limiting process you refer to it is only necessary to append the following improper numbers to define a derivative

[math] + \infty [/math] and [math] - \infty [/math]

Cantor was the first to operate this way (Math. Annalen 1872, p. 128), a very long time after Newton.

 

 

 

>  So someone was wrong?

Ideas go in and out of fashion. There's no right or wrong about it. 

 

> Upper case delta, (followed by a referent) is the difference between two fixed values (probably in a table) of that referent.

> So [math]\Delta x[/math] means [math](x_n - x_{n-1})[/math]

> all three are fixed or constant.

Yes, [math]\Delta x[/math] and [math]\Delta y[/math] are fixed, finite numbers. If you visualize this as Newton did, as the position of a point moving through space (in this case one-dimensional space), [math]\Delta y[/math] is a number representing a difference in position and [math]\Delta x[/math] is a strictly positive real number representing a difference in time.

Surely I don't need to tell you this! Why are we having this conversation?

>  Lower case delta followed by a referent means an arbitrarily small increment in that referent.

There is no such thing as lowercase delta. You seem to be confusing arbitrarily small (as in a variable) and infinitely small (as in an infinitesimal). 

There is no lower-case delta in standard math. Where did that idea come from? Is that something they use in engineering?

>  To be arbitrarily small it must be a variable.

> As a variable it is an increment in the referent, which is also a variable.

> So δx is an arbitrarily small increment in the variable x.

If that is what you truly believe then you need to read a book on real analysis. You are simply misunderstanding what a function of a real variable is. You do of course understand that concept physically, but not mathematically. There is simply no such thing as an "arbitrarily small increment." That is indeed how freshmen and physicists think about it; but it's not how the math is actually understood by mathematicians. It's the way you express these ideas that makes me wonder if you are just misunderstanding what limits are.

>  To create the limiting process you refer to it is only necessary to append the following improper numbers to define a derivative

I have no idea what that means. And to the extent that I do understand it, it's wrong. "Only necessary to append the following improper numbers?" What is an improper number?

I don't mean to be disputatious with you since we have more views in common than not. But I have noticed that you do bring an engineering orientation; and that's often not the right way to understand limits.

 

Edited by wtf

9 hours ago, wtf said:

Ideas go in and out of fashion. There's no right or wrong about it. 

Glad you agree on this.

9 hours ago, wtf said:

There is no such thing as lowercase delta.

This seems somewhat inflexible considering I have already posted an extract from a well recognised textbook using exactly that notation.

I note that (from a straw poll) American practice most often uses upper-case delta, whilst British and European practice uses lower-case delta for the same entity.

 

I advocate the reservation of the upper case delta for finite differences since these preceded the calculus of real variables in history and are still in important use today.

Hence the use of lower case delta for something entirely different.

 

How would you write the formulae for forward, backward and divided differences, Gregory's formula etc?

In my opinion this sort of thing is quite enough use (and very good and compact use it is) for upper case deltas

[attached image: table of forward, backward and divided difference formulae]
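For reference, those differences are commonly written as follows (a sketch; h is the tabular spacing and the x_i are the tabulated points):

[math]\Delta f(x) = f(x+h) - f(x)[/math]   (forward difference)

[math]\nabla f(x) = f(x) - f(x-h)[/math]   (backward difference)

[math]f[x_0, x_1] = \frac{f(x_1) - f(x_0)}{x_1 - x_0}[/math]   (first divided difference)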

 

 


11 hours ago, studiot said:

 

(comments below). 

> This seems somewhat inflexible considering I have already posted an extract from a well recognised textbook using exactly that notation.

Incorrect information that happens to be published doesn't make it any less incorrect. 

But I note that in the photocopied page you posted, lowercase curly delta does not appear. So what the bloody hell, as they say on your side of the pond?

> I Note that (from a straw poll) American practice most often uses Upper case delta, whilst British and European practice uses lower case delta for the same entity.

Oh. Well that is a perfectly sensible remark. But that doesn't explain ...

> I advocate the reservation of the upper case delta for finite differences since these preceded the calculus of real variables in history and are still in important use today.

> Hence the use of lower case delta for something entirely different.

Can you see that you just contradict yourself? Is curly delta merely a Britishism for what we Yanks notate with uppercase triangle delta? Or is it a separate symbol with a separate meaning? In the former case that's all I need to know, it's perfectly fine. In the latter case, what exactly is the meaning? I claim it has no meaning and the way you are using it is as a physicist or engineer would who can build a bridge or make a quark omelette, but who didn't happen to take a rigorous class in real analysis. It's not wrong, per se, just ignorant, in the literal meaning of the word. It's not a pejorative, it just means you never needed the formal mathematical understanding, and you seem to be lacking it. You haven't engaged with this point.

> How would you write the formulae for forward, backward and divided differences, Gregory's formula etc?

Um, what? I've never heard of forward, backward, and divided differences. Or Gregory's formula. I get that you had kind of an old-fashioned classical education in calculus, which is cool. It's interesting to see what they used to teach. It's not what they teach these days. Nor is your understanding of limits correct. I don't know if it's because you were taught wrong, or because perhaps you just learned wrong. But limits aren't infinitesimals, nor are functions that go to zero infinitesimals. 

> In my opinion this sort of thing is quite enough use (and very good and compact use it is) for upper case deltas

tl;dr: Now you're back to the Yank/Brit divide again? Which is it? Is lowercase curly delta Britspeak for uppercase triangle delta, a FIXED nonnegative (in the case of y) or strictly positive (in the case of x) difference in position or y-value? Or is it a brand new symbol that indicates that you think limits are infinitesimals?

And Studiot, the page you uploaded doesn't contain any curly deltas. It took up all that screen space yet failed to make any discernible point.

ps -- I looked at your picture again. You said it illustrates a use for the uppercase triangle deltas. But the illustration you posted is about the method of FINITE DIFFERENCES. It's not continuous calculus at all. It has nothing to do with what we're talking about. 

pps -- Are you perhaps making some sort of finitist argument? That the method of finite differences is sufficient for all of calculus? That's one way to interpret why you would post an entirely off-topic page. We are talking about standard calculus based on the standard model of the real numbers. If you're talking about something else, please let me know. 

Edited by wtf

wtf

 

I know there have been some bad storms in America recently, so I can only assume that you have been affected by the same type of Kansas tornado that affected Dorothy.

I can't see that your last tirade of unreasoning outpouring has done this thread any good.

So I can only suggest we postpone any further discussion until you return to your normal self.

 

 


4 hours ago, studiot said:

wtf

 

I know there have been some bad storms in America recently, so I can only assume that you have been affected by the same type of Kansas tornado that affected Dorothy.

I can't see that your last tirade of unreasoning outpouring has done this thread any good.

So I can only suggest we postpone any further discussion until you return to your normal self.

 

 

I don't get this at all. I gave a very reasoned response to what you wrote. You posted a textbook page on the method of finite differences that didn't mention your lowercase delta. And you contradicted yourself by first saying the upper and lower case deltas are Yank versus Brit usage; then you said it means something different, but you still haven't clearly defined it. 

I stand by what I wrote and regard it as a perfectly civil communication. I have no idea what you are objecting to. Other than debunking your vague, contradictory, and incorrect ideas.

The weather in San Diego has been lovely recently. It's almost always lovely here.

ps -- Wow. I'm at a loss. You gave two different explanations for upper/lower case delta that contradicted each other. You posted a page that you claimed supported your point, but (1) it didn't contain lowercase curly delta; and (2) it was about the method of finite differences, and not standard calculus. In addition you've shown ongoing confusion about the nature of limits. I've pointed out your errors in a respectful manner.

pps -- Maybe you're upset that I noted that you're strong on engineering math and weak on abstract math but don't realize the latter. I've been reading your stuff for several years on two message boards and that's been my observation.

 

 

Edited by wtf

On 10/22/2018 at 6:27 PM, taeto said:

I had thought that you were attached to the idea that you may "ignore" \(dx^2\) at any time before you finish the calculation. Happy to see that is not the case. You are "ignoring" terms only when doing so will lead you to the correct result?

No, I am not ignoring terms in order to get to correct result. I am ignoring terms that are equal to 0.

The result of \(\frac{(dx)^2}{(dx)^2}\) depends on the order: if the simplification is done first then the result is different than if \((dx)^2 = 0\) is used:

\(\frac{(dx)^2}{(dx)^2} = \frac{dx}{dx} \cdot \frac{dx}{dx} = \frac{dx}{dx}\) <------ simplification done first

 

 

\(\frac{(dx)^2}{(dx)^2} = \frac{0}{0}\) <------ \((dx)^2 = 0\) is used

So how can you be certain that \(\frac{dx}{dx} =  \frac{0}{0}\) ?

It could be that \(\frac{dx}{dx} = 1\) and it does not need to be equal to \(\frac{0}{0}\). After all, dx can be non-zero, so why should \(\frac{dx}{dx} = \frac{0}{0}\)?

 

Quote

You have not volunteered much motivation to show that your "definition" of a derivative is somehow more practical than the standard definition. You refer to studiot's calculation, in which he uses a standard limit argument.

No, he does not use a standard limit argument; he does not use limits at all. I don't see why this very fact is so hard for you to admit. Is it because what he has shown supports what I have written? And you decided that I am doing calculations in "my style", and it would look impossible to you how studiot is also doing calculations in "my style"? You just can't admit that I am not developing my own mathematics from scratch. All I am trying to do is understand calculus. I use calculus, I don't invent my own calculus.

Quote

How about another different case to show how your suggestion may be the superior one.

Now let g be the function for which g(x) = x if x is real, and g(x) = 0 if x is not real. Of course in real analysis you only consider domains that are subsets of the set of real numbers; accordingly you would compute a derivative g'(0) = 1 at x = 0, if x = 0 is contained in an open interval of the domain of g. But that would be by the classical definition of derivative. What would your answer be, and how would you arrive at it?

I am not sure of what you want me to do.

 

 

Edited by 113

1 hour ago, 113 said:

The result of \(\frac{(dx)^2}{(dx)^2}\) depends on the order: if the simplification is done first then the result is different than if \((dx)^2 = 0\) is used:

\(\frac{(dx)^2}{(dx)^2} = \frac{dx}{dx} \cdot \frac{dx}{dx} = \frac{dx}{dx}\) <------ simplification done first

\(\frac{(dx)^2}{(dx)^2} = \frac{0}{0}\) <------ \((dx)^2 = 0\) is used

So how can you be certain that \(\frac{dx}{dx} = \frac{0}{0}\)?

It could be that \(\frac{dx}{dx} = 1\) and it does not need to be equal to \(\frac{0}{0}\). After all, dx can be non-zero, so why should \(\frac{dx}{dx} = \frac{0}{0}\)?

I still have no idea what you mean by your \(dx\). It is used in various roles in Calculus, Analysis and Differential Geometry, and none of them agree with what you say about it.

Now you state \((dx)^2=0.\) When you use an equality \(=\) sign, do you actually mean that the things on either side are identical? And your original post assumes that it makes sense to divide by \(dx\), right? So then if you take \((dx)^2=0\) and divide by \(dx\) on both sides, does dividing the same quantity by the same quantity produce different results depending on whether the quantity is on the LHS or the RHS of the equality sign? If you carry out this division step, do you see why it is confusing at least to some people when you insist that \(dx>0?\)

1 hour ago, 113 said:

No, he does not use a standard limit argument; he does not use limits at all. I don't see why this very fact is so hard for you to admit. Is it because what he has shown supports what I have written? And you decided that I am doing calculations in "my style", and it would look impossible to you how studiot is also doing calculations in "my style"? You just can't admit that I am not developing my own mathematics from scratch. All I am trying to do is understand calculus. I use calculus, I don't invent my own calculus.

When you try to understand calculus, you should become familiar with the standard limit argument and be able to recognize that studiot applied it.

It looks like you are making up your own stuff. Have you ever seen identities like \(dx = 1/\infty\) or expressions like \(x+dx\) in any text which seriously teaches calculus?

Anyway, you are obviously not developing any "mathematics", seeing that you only work with undefined concepts.

1 hour ago, 113 said:

I am not sure of what you want me to do.

Show how to calculate the derivative \(g'(0)\) of the function \(g\) defined by \(g(x)=x\) if \(x\) is a real number, and \(g(x)=0\) if \(x\) is not a real number.

Edited by taeto

7 hours ago, 113 said:

I am not developing my own mathematics from scratch. All I am trying to do is understand calculus. I use calculus, I don't invent my own calculus.

Ah, progress at last.

"I use calculus"

So you are perhaps hoping to 'streamline' the presentation of derivatives?

 

Given this I should not have simply substituted dx for h as suggested in your original post.

"All that I am trying to do" is pitch my response at the appropriate level.

 

When someone is first starting calculus there is a big difficulty of presentation, because they will also be fresh to the other subjects included in analysis, i.e. coordinate geometry, sequences and series, convergence and so on.

 

So you have to give a less-than-rigorous explanation, which is leaky around the edges yet facilitates progress.

This is what teachers do.

Then they revisit the subject applying more rigour, perhaps several times in all.

The other part of the introduction provides a way to establish some basic derivatives to work with and some basic combining rules for more complicated derivatives.
That is the purpose of the definitions you are employing.

If you don't have a means of calculating some derivatives, there is no point in studying them at all.

Most calculus courses then proceed to concentrate on differentiation of more and more complicated (algebraic) expressions, by manipulating the formulae developed from 'first principles', before attacking the more difficult underlying theory.

This is why I asked you where you fit into this developing process of understanding calculus.

 

Now one notation for the derivative is dy/dx, but this is not a fraction like 1/5 or 237/390 or whatever; it is a definite number, if it exists at all.
It is not a variable for a given value of x (and y).

We follow the simple idea of approaching something as closely as we please, developed in the introduction to sequences and series that accompanies the simple introduction to calculus.

So we can easily conceive of a real fraction being a ratio of two (small) quantities that are variables.
Because they are variables we can allow them to vary in a controlled manner.
This controlled manner forces the ratio of these variables to approach, ever more closely, the actual value of the derivative at x.
 

Which brings us to your observation (again correct)

7 hours ago, 113 said:

The result of \(\frac{(dx)^2}{(dx)^2}\) depends on the order: if the simplification is done first then the result is different than if \((dx)^2 = 0\) is used:

\(\frac{(dx)^2}{(dx)^2} = \frac{dx}{dx} \cdot \frac{dx}{dx} = \frac{dx}{dx}\) <------ simplification done first

\(\frac{(dx)^2}{(dx)^2} = \frac{0}{0}\) <------ \((dx)^2 = 0\) is used

So how can you be certain that \(\frac{dx}{dx} = \frac{0}{0}\)?

It could be that \(\frac{dx}{dx} = 1\) and it does not need to be equal to \(\frac{0}{0}\). After all, dx can be non-zero, so why should \(\frac{dx}{dx} = \frac{0}{0}\)?

 

I am not sure what you mean by 'order', but I will give a formal definition in a moment.

First I will say (as I did before) there are difficulties lying in wait down the line if you substitute dx for h.
These become more apparent when you tackle partial differentiation and what is called the 'total derivative', and also before that what Americans call the chain rule and we call function of a function. (I don't know if you know about these because you haven't said.)

I will also stick with Leibniz's dy/dx notation since it is easier down that line.

Finally I call these small quantities the infinitesimals  [math]{\delta x}[/math]  and    [math]{\delta y}[/math] for reasons which will become apparent shortly.

 

The ratio of two small quantities which are both indefinitely small may be one of

1) Finite

2) Indefinitely small

3) Indefinitely large

This leads to the notion of 'the order of small quantities'.

1) Two variables, p and q, each of which tends to a limit of zero by itself, are said to be indefinitely small quantities, or infinitesimals, of the same order if the ratio q/p is finite.

2) If this ratio tends to zero (becomes indefinitely smaller and smaller as  p and q become smaller and smaller),  q is said to be an infinitesimal of an order higher than p.

3) If this ratio tends to infinity (becomes indefinitely larger and larger as p and q become smaller and smaller), q is said to be an infinitesimal of lower order than p.

4) If the ratio q/p2 is finite then q is said to be an infinitesimal of the second order, if p is taken as an infinitesimal of the first order. And so on.

Examples of this are:

If r is the radius of a (very small) sphere, so that r may be considered an infinitesimal, then

The surface area of that sphere will be an infinitesimal of the second order

The volume of that sphere will be an infinitesimal of the third order.
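Numerically (an illustrative sketch, not part of the thread): with p = r as the first-order infinitesimal, area/r² and volume/r³ stay finite as r shrinks, which is exactly what makes the area second order and the volume third order.

```python
# Illustrative sketch of the sphere example: area/r^2 and volume/r^3 stay
# constant (finite) as r shrinks, while the lower-power ratios tend to 0,
# marking the area as second order and the volume as third order in r.
import math

for r in [0.1, 0.01, 0.001]:
    area = 4 * math.pi * r ** 2          # surface area of the sphere
    volume = (4 / 3) * math.pi * r ** 3  # volume of the sphere
    print(r, area / r ** 2, volume / r ** 3)  # both ratios stay constant
    print(r, area / r, volume / r ** 2)       # these ratios tend to 0
```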

 

 

OK so back to calculus

Remembering that dy/dx is a fixed number which we wish our formula to approach, or get ever closer to,

Expressing the (small) change in y as a function of a (small) change in x


[math]\delta y = \frac{{dy}}{{dx}}\delta x + k\delta x[/math]

where k is some quantity that tends to zero as  [math]\delta x[/math] tends to zero.

Now k,   [math]\delta x[/math]  and    [math]\delta y[/math]  are all infinitesimals operating under the rules I have just given.

If

[math]\delta x[/math]  and    [math]\delta y[/math] 

are first order infinitesimals, then the product

k [math]\delta x[/math]

Is an infinitesimal of higher order

So


[math]\delta y \approx \frac{{dy}}{{dx}}\delta x[/math]

or


[math]\frac{{\delta y}}{{\delta x}}[/math]    approaches [math]\frac{{dy}}{{dx}}[/math]

more and more closely the smaller [math]\delta x[/math] becomes.
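The decomposition can be checked numerically for a concrete function, say f(x) = x² at x = 3 (an illustrative sketch): there δy = 2x·δx + (δx)², so k equals δx itself and vanishes with it.

```python
# Illustrative check of delta_y = (dy/dx)*delta_x + k*delta_x for
# f(x) = x^2 at x = 3: delta_y = 2x*delta_x + (delta_x)^2, so the
# residual k equals delta_x and shrinks with it.

def f(x):
    return x * x

x = 3.0
dydx = 2 * x  # the known derivative of x^2 at x = 3
for delta_x in [0.1, 0.01, 0.001]:
    delta_y = f(x + delta_x) - f(x)
    k = delta_y / delta_x - dydx          # residual from the decomposition
    print(delta_x, delta_y / delta_x, k)  # ratio -> 6, k shrinks like delta_x
```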

 

This is the full presentation I did not have time for the other night, so I told you where you could find it.

It also shows why I should have followed my own injunction to distinguish between dx and delta_x, and so should not have substituted dx for h, which is another symbol used instead of delta_x.

 

Edited by studiot
clear empty space

On 10/24/2018 at 10:29 AM, taeto said:

I still have no idea what you mean by your dx . It is used in various roles in Calculus, Analysis and Differential Geometry, and none of them agree with what you say about it.

I have explained from the very beginning what I mean by dx:

An introduction of infinity brings a duality into the definition of an infinitesimal, meaning that we have to deal with objects that are both zero and non-zero at the same time.

From the very beginning you evaded answering my question " is 1/∞  zero or non-zero ?"

That's why you are stuck at asking what I mean by dx. I have explained everything exactly. It seems to me that you are ignoring what I have written. Also those sources you mention don't deal with the duality I mentioned,  they simply ignore it. That's why it may look as though they don't agree with what I have written. But the truth is they don't even deal with it.

Quote

Now you state (dx)2=0. When you use an equality = sign, do you actually mean that the things on either side are identical? And your original post assumes that it makes sense to divide by dx , right? So then if you take (dx)2=0 and divide by dx on both sides, does dividing the same quantity by the same quantity produce different results depending on whether the quantity is on the LHS or the RHS of the equality sign? If you carry out this division step, do you see why it is confusing at least to some people when you insist that dx>0?

It may look as though I am dividing by 0. But again the result depends on the order in which the calculation is done:

take (dx)² = 0 and divide by dx on both sides

\(\frac{(dx)^2}{dx} = \frac{0}{dx}\)

which is the same as \(\frac{0}{dx} = \frac{0}{dx}\), and looks valid because it is 0 = 0

 

On the other hand, if the simplification is done first

\(\frac{(dx)^2}{dx} = \frac{0}{dx}\)

\(dx = 0 \)

There is confusion only if you ignore the duality that I mentioned; I don't simply insist that dx > 0. The confusion arises if you ignore that dx is both zero and non-zero at the same time. You are trying to force, or define, dx to be either 0 or non-zero.

Quote

When you try to understand calculus, you should become familiar with the standard limit argument and be able to recognize that studiot applied it.

no, he did not apply limits, he just stated that he can ignore (dx)² if dx is small enough, exactly what I have done.

Quote

It looks like you are making up your own stuff. Have you ever seen identities like dx = 1/∞ or expressions like x + dx in any text which seriously teaches calculus?

no, I am not making up my own stuff. I have seen identities like 1/∞ = 0 in many texts which seriously teach calculus. Also I have seen that many of them ignore the duality I mentioned, but not all of them. So I did not make up the duality myself, I am dealing with it.

Quote

Anyway, you are obviously not developing any "mathematics", seeing that you only work with undefined concepts.

no, I don't work with such concepts. The reason is that you can't define, or force, dx to be either 0 or non-zero, because it is both. You are ignoring the problem.

 

 

Edited by 113

6 hours ago, 113 said:

I have explained from the very beginning what I mean by dx:

An introduction of infinity brings a duality into the definition of an infinitesimal, meaning that we have to deal with objects that are both zero and non-zero at the same time.

That tells me that it does not make sense to introduce infinity. Now answer this: who introduced infinity into this thread, you or I?

6 hours ago, 113 said:

From the very beginning you evaded answering my question " is 1/∞  zero or non-zero ?"

That is a stupid and dishonest statement. There are lots of different uses of \(\infty\) in mathematics, and I had to ask which one you mean to address, which I did. That is not evasion. You on the other hand have evaded this question entirely. 

6 hours ago, 113 said:

That's why you are stuck at asking what I mean by dx. I have explained everything exactly. It seems to me that you are ignoring what I have written. Also those sources you mention don't deal with the duality I mentioned, they simply ignore it. That's why it may look as though they don't agree with what I have written. But the truth is they don't even deal with it.

You explained \(dx\) as something having to do with \(\infty\), but you did not explain what you mean by \(\infty\).

The sources do not mention the "duality", because they do not know about it. How would they have learned about it until your posts here?

6 hours ago, 113 said:

There is confusion only if you ignore the duality that I mentioned, I don't simply just insist that dx > 0. The confusion arises if you ignore that dx is both zero and non-zero at the same time. You are trying to force, or define, dx to be either 0 or non-zero.

You are completely dishonest. You are the one who told us all that \(dx\) is not zero.

6 hours ago, 113 said:

no, he did not apply limits, he just stated that he can ignore (dx)² if dx is small enough, exactly what I have done.

You just applied a limit argument right there.
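The standard limit reading can be sketched numerically (a Python illustration of the usual argument, using exact rationals; not an endorsement of the "duality"): δx is an ordinary nonzero number h, so h²/h = h holds exactly with no division by zero, and the limit h → 0 is taken only afterwards. Simplifying before or after the limit gives the same answer, 0, so no contradiction arises.

```python
from fractions import Fraction

# In the standard reading delta_x is an ordinary nonzero number h,
# so h**2 / h = h holds exactly, with no division by zero anywhere;
# the limit h -> 0 is taken only afterwards, and the ratio tends to 0.
for h in (Fraction(1, 10), Fraction(1, 100), Fraction(1, 1000)):
    assert h ** 2 / h == h  # exact simplification for every nonzero h
    print(h, h ** 2 / h)    # this value shrinks toward 0 with h
```

At no point is h both zero and non-zero: it is non-zero throughout the calculation, and 0 appears only as the limit.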

6 hours ago, 113 said:

no, I am not making up my own stuff. I have seen identities like 1/∞ = 0 in many texts which seriously teach calculus. 

I can verify that. It means that you are making up your own stuff, because you seem convinced that \(1/\infty\) is not zero.

6 hours ago, 113 said:

no, I don't work with such concepts. The reason is that you can't define, or force, dx to be either 0 or non-zero, because it is both. You are ignoring the problem.

Which problem? There is no problem in calculus. The problem is to somehow explain things to you so you may grasp it.

Edited by taeto
