Infinitesimals and limits are the same thing


dasnulium


To gain true understanding of a subject it can help to study its origins and how its theory and practice changed over the years – and the mathematical field of calculus is no exception. But calculus students who do read accounts of its history encounter something strange – the claim that the theory which underpinned the subject for long after its creation was wrong and that it was corrected several hundred years later, in spite of the fact that the original theory never produced erroneous results. I argue here that both this characterization of the original theory and this interpretation of the paradigm shift to its successor are false. Infinitesimals, used properly, were never unrigorous and the supposed rigor of limit theory does not imply greater correctness, but rather the (usually unnecessary) exposition of hidden deductive steps. Furthermore those steps can, if set out, constitute a proof that original infinitesimals work in accordance with limit theory – contrary to the common opinion that the two approaches represent irreconcilable philosophical positions. This proof, demonstrating that we can adopt a unified paradigm for calculus, is to my knowledge novel although its logic may have been employed in another context. I also claim that non-standard analysis (the most famous previous attempt at unification) only partially clarified the situation because the type of infinitesimals it uses are critically different from original infinitesimals.

See here for the paper: http://vixra.org/abs/1901.0134. Comments welcome!


> the claim that the theory which underpinned the subject for long after its creation was wrong

I didn't click on the link, but your error is that there was no theory underlying Newton's calculus until the late 19th century finally nailed it down. Newton himself understood perfectly well that he couldn't put his fluxions on a logically rigorous foundation and made several unsuccessful attempts. It took another 200 years to nail it down with the modern theory of limits.

If you claim there was a rigorous theory of infinitesimals before nonstandard analysis, please reference it here. It would be news to me and I know a little about this subject.

NSA is claimed by its proponents to offer some pedagogical advantages (which 40 years of practice since Keisler's book have failed to demonstrate) but nobody claims there are any theoretical benefits, since NSA is a model of the same first-order axioms as standard analysis. 

ps -- I gave your paper a quick skim. One thing that jumped out is that you say that "for some reason" infinitesimals went away and limits came into favor after 1900. Well duh, that's because nobody could make infinitesimals rigorous and the theory of limits DID make calculus and analysis rigorous. You might say that limits replaced infinitesimals for the same reason round wheels replaced square ones. They work better. And NSA is like Stan Wagon's square-wheeled bicycle. It proves that you can do it, but that doesn't mean we all should. 

https://www.math.hmc.edu/funfacts/ffiles/10001.2-3-8.shtml

That's my take on this subject; clearly yours differs, but I don't think you have made your case.

Edited by wtf

5 hours ago, dasnulium said:

To gain true understanding of a subject it can help to study its origins and how its theory and practice changed over the years – and the mathematical field of calculus is no exception. [...]

Compared to the usual rambling rants against the establishment, this paper was clean and tight, with good references.

All in all, a worthwhile addition to the knowledge base on this subject and a reminder that no subject is static and that fashions come and go, even in Mathematics.

A pleasure to read - I did not know there was/is a movement to re-examine the history of this subject in the light of modern thinking (nor, it seems, did wtf), and I can personally vouch for the value and veracity of your opening sentence in most disciplines.

My library goes back to the earlier period you speak of, so I will be comparing some of the texts from that time.

So welcome and +1 for introducing an interesting subject.


I think you have either used standard calculus or assumed your conclusion without realizing it in your paper, by assuming that "r" (which you should define more explicitly than "the two sets of terms as a ratio") can always be expressed in the form \(\frac{\pm b\varepsilon^2 \pm c\varepsilon^3 \pm \cdots}{\pm a\varepsilon}\).

More generally: let's say we have the function \(f\) with \(f(x) = 0\) if \(x\) is neither infinitesimal nor 0, and \(f(x) = 1\) if \(x\) is 0 or infinitesimal (in other words, if \(|x|\) is smaller than any positive rational number). What is \(\lim_{x \to 0} f(x)\)? (More on this after an answer.)


12 hours ago, dasnulium said:

..., in spite of the fact that the original theory never produced erroneous results.

I have read somewhere that Cauchy tried to use the "original theory" in an argument, but ended up with a wrong result. I will look for the reference, though maybe someone knows already?!  


1 hour ago, taeto said:

I have read somewhere that Cauchy tried to use the "original theory" in an argument, but ended up with a wrong result. I will look for the reference, though maybe someone knows already?!  

Is there an "original theory?" This would be new to me and of great interest. My understanding is that Newton could not logically explain the limit of the difference quotient, since if the numerator and denominator are nonzero, the ratio is not the derivative (what Newton called the fluxion). And if they're both zero, then the expression 0/0 is undefined. So Newton could explain the world with his theory, but he could not properly ground it in logic. He understood this himself and tried over the course of his career to provide a better explanation, without success.

Fast forward 200 years and the usual suspects Weierstrass, Cauchy, et al. finally rigorized analysis. The crowning piece was set theory; and in the first half of the 20th century the whole of math was reconceptualized in terms of set theory. This overarching intellectual project is known as the arithmetization of analysis. https://www.encyclopediaofmath.org/index.php/Arithmetization_of_analysis

(The Wiki article is wretched; the one I linked is much better.)

Now OP suggests that there was actually a rigorous theory based on infinitesimals that got unfairly pushed aside by the limit concept. [Am I characterizing OP's position correctly?] I am asking, what is that theory? I've never heard of it and would be greatly interested to know if there's a suppressed history out there.

I'll also add that in modern times we have nonstandard analysis, which does finally rigorize infinitesimals; and smooth infinitesimal analysis (SIA), which is an approach to differential geometry that uses infinitesimals. But neither of these theories are the "suppressed" theory, if such there be.

Have I got the outline right? What is this suppressed theory? Who first wrote it down, who suppressed it, and why haven't I ever heard of it? 

Edited by wtf

12 minutes ago, wtf said:

Is there an "original theory?" This would be new to me and of great interest. My understanding is that Newton could not logically explain the limit of the difference quotient, since if the numerator and denominator are nonzero, the ratio is not the derivative (what Newton called the fluxion). And if they're both zero, then the expression 0/0 is undefined. So Newton could explain the world with his theory, but he could not properly ground it in logic. He understood this himself and tried over the course of his career to provide a better explanation, without success.

Fast forward 200 years and the usual suspects Weierstrass, Cauchy, et. al. finally rigorized analysis. The crowning piece was set theory; and in the first half of the 20th century the whole of math was reconceptualized in terms of set theory. This overarching intellectual project is known as the arithmetization of analysis. https://www.encyclopediaofmath.org/index.php/Arithmetization_of_analysis

(The Wiki article is wretched, the one I linked is much better). 

Now OP suggests that there was actually a rigorous theory based on infinitesimals that got unfairly pushed aside by the limit concept. [Am I characterizing OP's position correctly?] I am asking, what is that theory? I've never heard of it and would be greatly interested to know if there's a suppressed history out there.

I'll also add that in modern times we have nonstandard analysis, which does finally rigorize infinitesimals; and smooth infinitesimal analysis (SIA), which is an approach to differential geometry that uses infinitesimals. But neither of these theories are the "suppressed" theory, if such there be.

Have I got the outline right? What is this suppressed theory? Who first wrote it down, who suppressed it, and why haven't I ever heard of it? 

 

You have references to this?

In discussing the OP paper (the purpose of this thread) I note a sparsity of references to Newton and his input compared to that of Leibniz and the continentals.
This bias towards one or the other European originator is common in articles.
It is also common to entirely fail to mention Seki.


33 minutes ago, studiot said:

 


> You have references to this?

In addition to the link on the arithmetization of analysis that I gave above, for contemporaneous criticism of Newton's calculus see Berkeley's famous The Analyst, A DISCOURSE Addressed to an Infidel MATHEMATICIAN. WHEREIN It is examined whether the Object, Principles, and Inferences of the modern Analysis are more distinctly conceived, or more evidently deduced, than Religious Mysteries and Points of Faith, https://en.wikipedia.org/wiki/The_Analyst.

For a good history of the 19th century rigorization efforts see for example Judith Grabiner's The Origins of Cauchy's Rigorous Calculus, or Carl Boyer's The History of the Calculus and Its Conceptual Development (both of which I own),  or any of the many other histories of the math of that era.

https://www.amazon.com/History-Calculus-Conceptual-Development-Mathematics/dp/0486605094/ref=pd_lpo_sbs_14_t_0?_encoding=UTF8&psc=1&refRID=BN7J3SR7H6NX77YD4531

https://www.amazon.com/Origins-Cauchys-Rigorous-Calculus-Mathematics/dp/0486438155

> This bias towards one or the other european originator is common in articles.

I happen to know a lot more about Newton than I do about Leibniz, but again, if there is a secret, suppressed, rigorous theory of infinitesimals, surely some kind soul would throw me a link, if only to show me the error of my ways, yes?

> It is also common to entirely fail to mention Seki.

A Google search did not turn up a relevant reference among the many disambiguations. 

Edited by wtf

23 minutes ago, wtf said:

Thanks for the useful link!

Regarding an "original theory", such a one would have a theorem like \( dy = \frac{dy}{dx}dx,\) which means that you can do arithmetic with an "infinitesimal" \(dx\), assumed nonzero. In contemporary mathematics the same expression is still a theorem, but it stands for something entirely different; both \(x\) and \(y\) are functions that have differentials \(dx\) and \(dy\)
respectively, with \(\frac{dy}{dx}\) being their derivative. None of the latter functions represent anything "infinitely small", indeed the range of either differential can easily be unbounded.
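For instance (just to make the contrast concrete): for \(y = x^2\) the modern statement reads \(dy = 2x\,dx\), where \(dx\) is an ordinary real variable (the differential of the identity function), so at a given \(x\) the differential \(dy\) is simply the linear map \(h \mapsto 2xh\). Nothing infinitely small is involved, and its values can be arbitrarily large.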

In that sense the "theories" somehow should not be considered comparable, because they speak of completely different things. On the other hand, they might be, at least partially, "isomorphic", by being able to show theorems that have an identical outward appearance. 

I am definitely interested in any small nuances which would make a proposed "original theory" make a different prediction than what we would expect today.  


In reply to wtf - I'll try to address your points. Firstly, you say "there was no theory underlying Newton's calculus till the late 19th century". The reason I focus on Leibniz more than Newton is not only because his notation stuck, but because his version provoked more debate about the underlying theory. In particular, L'Hôpital's seminal textbook makes it clear that microlinearity together with nilsquare infinitesimals were the main principles of the subject. Is that theory rigorous? Well, I also point out that if you increment the variable of a polynomial by such an infinitesimal you get (by applying the binomial theorem) a linear equation for the gradient in terms of the increment. This is probably why any controversies were ignored until the late 19th century - so why did it become a problem then? I offer some theories about this, the main one being that the law of excluded middle (LEM) became very rigid and the idea of something being indefinitely small became unacceptable - although personally I see indefinite precision as qualitatively different from both practical precision and equality.

Secondly, you say that "nobody could make infinitesimals rigorous", but the proof section of the paper contains just such a 'rigorization' - the text of the paper is meant to put the proof in a proper context. It may seem incredible that this wasn't done before, but here we are. So why did it take so long? The simplest answer is that from the early 20th century academia split: 'mainstream' academic mathematics was on one side, while physics, engineering and constructive mathematics were on the other. The former camp didn't want nilsquare infinitesimals because they're too focused on polynomials, while the latter camp kept on using them (e.g. Roger Penrose's book The Road to Reality uses them repeatedly) without really explaining why. This divide was written off as philosophical and further questions were often ignored - as John L. Bell put it (from memory), "this [nilpotency] is an intrinsic property, not dependent on comparisons with other quantities".

Therefore, if you do want to see whether nilsquares accord with limit theory, you need to consider what is neglected (the polynomial terms containing higher powers of the increment) and compare that to what is being kept, which is the first-power term in the increment. Without question the natural way to do this is by taking a proportion, and if you do that you get an indefinitely small ratio (as demonstrated), which, if you accept this take on LEM, allows such higher-power expressions to be neglected.
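To make the polynomial point concrete (the cubic is just an arbitrary example): with a nilsquare increment \(\varepsilon\), i.e. \(\varepsilon^2 = 0\), the binomial theorem gives

\[(x + \varepsilon)^3 = x^3 + 3x^2\varepsilon + 3x\varepsilon^2 + \varepsilon^3 = x^3 + 3x^2\varepsilon,\]

so the change in \(x^3\) is exactly \(3x^2\varepsilon\) - linear in the increment, with the gradient \(3x^2\) appearing as its coefficient.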

Edited by dasnulium

22 hours ago, wtf said:

> It is also common to entirely fail to mention Seki.

A Google search did not turn up a relevant reference among the many disambiguations. 

Takakazu Seki, pioneer Japanese mathematician, accountant and chief of the National Bureau of Supply; b. early 1640s, Edo or Huzioka; d. 1708.

16 hours ago, dasnulium said:

The reason I focus on Leibniz more than Newton is not only because his notation stuck, but because his version provoked more debate about the underlying theory.

We had a long discussion about this subject in a recent thread.

https://www.scienceforums.net/topic/116421-definition-of-derivative/?page=3

 

Many of the references I referred to in my first post here appeared there.
You do not seem to have heard of at least some of these.


On 1/14/2019 at 4:50 PM, dasnulium said:

In reply to wtf - I'll try to address your points ...

Thank you for your detailed reply. Looks like I'll have to work through your paper and probably end up learning a few things. As I've mentioned, I'm more familiar with Newton and not at all with Leibniz, so I evidently have some gaps in my knowledge.

Your focus on nilsquare infinitesimals and denial of LEM reminds me of smooth infinitesimal analysis - is this related to your ideas?

https://en.wikipedia.org/wiki/Smooth_infinitesimal_analysis

8 hours ago, studiot said:

Takakazu Seki, pioneer Japanese mathematician, accountant and chief of the National Bureau of Supply; b. early 1640s, Edo or Huzioka; d. 1708.

 

Cool, I will check them out. The Wiki entry for Seki is very interesting. Thanks for the info.

Edited by wtf

wtf: Yes, it is related to SIA - one of my main influences was John L. Bell. I was very annoyed after reading one of his books, because the simpler proofs offered by SIA, which he uses, had not been available when I was at school. Of course, they're only simpler for polynomials, but that's an important class of functions. Another very important class of functions is the mechanical functions, which Descartes excludes from consideration in a quote in the paper. But even if they can't be analyzed normally they can be analyzed numerically, i.e. with finite differences, and if you take Granville's approach (as in the last reference) as the best example of how limits can be applied, then finite differences and limits work in a very similar way. This is of course much easier with computers - for a tangent you get a list of numbers converging on a value. Since this approach does also work for polynomials we can say that limits are a more general theory - Leibniz implies this in a quote I give, where he gives what to me seems like the 19th century limit definition. Note that before computers were invented there wasn't much incentive to think about the broader theory, and angst about the continuum didn't 'boil over' until two hundred years after Leibniz and Newton.

Bell has written much about the continuum - constructivism holds that saying something is not unequal to something (e.g. zero) does not imply that it is actually equal to it. In pure mathematics, where we can just give a variable a value (as opposed to real life), the only way that that condition can be met is if something is smaller than any value you give - which is the essence of the limit criterion and also a description of indefinite smallness, which is why I think they're the same thing and say that LEM has been over-applied.

Anyway, when I said Leibniz provoked more debate I meant constructive debate - although not by much, since Leibniz and Nieuwentijt never reconciled their methods. If they had I probably wouldn't be writing this, but my job requires very obsessive thinking so the paper was a natural development.
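As a minimal numerical sketch of that "list of numbers converging on a value" (the function, point and step sizes below are arbitrary choices, purely for illustration, not taken from Granville):

import math

def difference_quotient(f, x, h):
    """Forward difference quotient (f(x + h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

# Finite-difference slopes of sin at x = 1 for shrinking step sizes h;
# the quotients converge on the exact derivative cos(1) ~= 0.540302.
x = 1.0
for h in [10.0 ** -k for k in range(1, 7)]:
    print(f"h = {h:.0e}   quotient = {difference_quotient(math.sin, x, h):.6f}")

print(f"exact derivative cos(1) = {math.cos(x):.6f}")

Each smaller h gives a quotient closer to the exact value, which is exactly the convergence that the limit criterion formalizes.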

Edited by dasnulium

3 hours ago, dasnulium said:

wtf: Yes, it is related to SIA 

I don't follow your point at all then. Are you claiming that SIA relates to anything that was happening in the 17th century? Lawvere's paper on synthetic differential geometry came out in 1998, and SIA is based on category theory, a subject that didn't come into existence till the 1950s.

Edited by wtf

7 hours ago, dasnulium said:

the only way that that condition can be met is if something is smaller than any value you give

I don't agree with this. Examples from statistics come to mind.

Whilst I liked the style and presentation of your paper, I don't endorse everything in it.


ps -- I might as well add this since it's on my mind and OP's not around at the moment. You don't need the axiom of choice to define the real numbers. That was an error in the paper, although I don't think anything else depends on it.

Edited by wtf

  • 3 weeks later...
10 hours ago, dasnulium said:

"You will never come up with a well-order [sic] of the reals and neither will anyone else. It's a consequence of the Axiom of Choice (AC), so it is inherently nonconstructive." wtf

That's as true today as when I wrote it a few weeks ago. But you surely don't need choice to define the reals. See any modern textbook on real analysis for a construction of the reals using only the axioms of ZF (e.g. via Dedekind cuts or equivalence classes of Cauchy sequences of rationals, neither of which invokes choice).

I'm not sure why you took my correction of a minor and inconsequential error in your paper and doubled down with a demonstrably false claim. It seems like digging the hole deeper where a simple "Thanks for the clarification" would be appropriate.

Edited by wtf

14 hours ago, dasnulium said:

wtf: Could you quote the line from the paper which is 'demonstrably false'.

"The third possibility is that the crisis was a side-effect of the introduction of Georg Cantor’s theory of transfinite numbers. The theory depends on the Axiom of Choice, which implies LEM for the continuum ..."

Sorry, you didn't claim the reals require choice; you claimed Cantor's theory of transfinite numbers does. Equally wrong. And what does implying "LEM for the continuum" mean?


6 hours ago, dasnulium said:

That was the part where I try to discern why a foundational crisis even happened in mathematics, which is not the main point of the paper, so I won't get into a debate about that. But you might like this guy's take on it: https://www.quora.com/profile/Eric-Freiling

I'm taking another run at your paper. I just read the intro. Some of this is sinking in. I agree with your point that infinitesimals in the hyperreals are not nilpotent and hence aren't quite the right model for the powers of epsilon that go away. Am I getting that right? I think you are clarifying the distinction between an approach like SIA and the nonstandard analysis model. I think you have a good point.

Now what I am not too sure about is what you are saying about the status of the infinitesimal approach. I always thought it was a search for rigor; but I think you're saying they already had rigor and got unfairly demoted. Am I understanding this right? My point earlier was that SIA is very recent and quite modern in the sense of being based on category theory. They did not have that point of view in the 18th century. As far as I know. Is that the case you're trying to make?

 

Edited by wtf

@dasnulium,

Can you please explain this passage?

"Mathematicians could however always claim that they were not assuming that the so-called law of excluded middle (LEM) applies to the continuum, and that nilpotency is a corollary of this. But as the supporters of LEM gained influence in the late nineteenth century this position became less tenable; ..."

* What does it mean that LEM does or doesn't "apply to the continuum?" That makes no sense to me. LEM applies or doesn't apply to propositions. 

* How is LEM or its denial a corollary of nilpotency?

* The supporters of LEM gained influence in the 19th century? Are you making the claim that 17th and 18th century mathematics was a hotbed of LEM denial? That flies in the face of the written history, doesn't it? My understanding is that denial of LEM came into math via Brouwer in the early 20th century, and not before then; and that it's making a contemporary resurgence due to the computational viewpoint. But to say that the supporters of LEM gained influence in the 19th century doesn't seem right. My understanding is that LEM had universal acceptance in math until Brouwer.

Would appreciate clarity on these points, thanks.

 

Edited by wtf

This topic is now closed to further replies.