Everything posted by dasnulium

  1. Swansont, against my wishes, decided to merge my previous comment with the 2019 post rather than letting it stand as a new post. He then declined to respond to my complaint about this. Consequently, I won't be engaging any further with this site.
  2. This is Version 2 of Next to Nothing – a Single Paradigm, an essay and proof I first posted in January 2019 and linked from here. Version 2 can be found here: https://drive.google.com/file/d/1k6gtN1k_WfRVyAUqwhjGXo10U3HWVpoA/view Preface: Since posting Next to Nothing – a Single Paradigm over two and a half years ago I have been able to compose a more complete version of the paper. Here I address the main criticism of the original, elaborate on one of its themes, and offer some thoughts on a related matter. Improvements have also been made throughout the original text (while retaining the general narrative) and many of the references are new. The proof itself remains unchanged. The additional Continuation section follows on from the main text.
  3. wtf: I did think carefully about word choice for the thesis; in particular, I would never use the term 'infinitely small' to describe an infinitesimal. 'Indefinitely small' does not mean smaller than 'every' positive number; it means smaller than any positive number to which you can assign a value (this is similar to Kant's ideas about the infinite, as discussed by Bell). I don't use the word 'arbitrary' because in numerical analysis (unlike in regular calculus) it means something different from 'indefinite' - namely, that the minimum value of the increment may be arbitrary (alternatively, depending on the functions concerned, it may have to meet certain criteria). Note that the term 'indefinitely small' is meaningless for numerical analysis, for obvious reasons. uncool: Bell's take on the 'blip' function doesn't distinguish between non-zero and zero/infinitesimal, but simply between non-zero and zero - so I don't see the purpose of your question; maybe my answer to wtf will help.
  4. The point of the paper is that there's a connection that may have been overlooked, maybe for this reason: he's ruling out the idea that the standard part/nilsquare operation is somehow unsafe. A limit in calculus is what you get when the increment of x becomes indefinitely small (i.e. infinitesimal), so I think they are two ways of looking at the same thing. You also mention statistics; I know there have been efforts to found statistics on a non-constructive basis, which I would avoid, so I can't comment on that. Replying to uncool: the function you describe is explicitly discontinuous, so I would simply assume calculus doesn't apply to it. Bell talks about it on page 5 here: https://pdfs.semanticscholar.org/e226/af69111bcba4aff8318f2b479dd6c3202325.pdf Clarification to my last comment: the equation s = h√(1 + y'²) applies to the finite difference and is therefore true for any value of the increment (in this context 'proportional' is the wrong word for the RHS); see the derivation sketched below. Does it apply to infinitesimal increments? I said that "Note that SIA seems to work better than NSA for this", but you can get it to work in NSA too. First rearrange to s/h = √(1 + y'²), then transition to the infinitesimal by changing s/h to ds/dh (yielding the well-known equation) and taking the standard part of y'. This may be taken to mean that ds/dh becomes 0/0, but it couldn't subsequently be neglected, because it would be indeterminate, not zero.
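     To make the Pythagoras step explicit, here is a sketch of the derivation (my own reconstruction, writing Δy = f(x+h) − f(x) for the finite difference and y' for the difference quotient Δy/h):

       \[
         s = \sqrt{h^{2} + (\Delta y)^{2}} = \sqrt{h^{2} + (y'h)^{2}} = h\sqrt{1 + y'^{2}},
         \qquad \text{hence} \qquad \frac{s}{h} = \sqrt{1 + y'^{2}}.
       \]

     The first equation holds for any finite increment h; the question above is whether the second survives the transition from s/h to ds/dh.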
  5. To elaborate on "the 'error' for polynomials is the sum of the higher power incremental terms": when we take the derivative of y = f(x), x is said to be the independent variable. However, differentiation assumes an arbitrary x value and a variable increment, and the latter is the quantity actually varying. If we take the finite difference quotient of a polynomial, varying the increment (for a given x value) results in a gradient that changes with the secant; but if we take the regular derivative of that polynomial we can't vary the increment, because those terms have been neglected (and cancelled) - and since an indefinitely small increment implies an indefinitely small secant length*, which is by definition part of the tangent, this gives us the gradient of the tangent (a worked example is sketched below). *By Pythagoras the secant length is proportional to the increment, thus s = h√(1 + y'²), although the equation is not linear; so simply by reducing the increment a smaller secant length can be found. The derivative may of course increase to counteract this, but it would eventually have to become vertical to nullify it, at which point calculus no longer applies. Note that SIA seems to work better than NSA for this line of reasoning, because taking the standard part dispenses with all incremental terms, even those of the first power, but this would cause the first RHS term to be set to zero. This may indicate that SIA bears a closer resemblance to the true nature of calculus than NSA.
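     As a worked example of the point above (my own illustration, using y = x³ at an arbitrary x):

       \[
         \frac{(x+h)^{3} - x^{3}}{h} = 3x^{2} + 3xh + h^{2},
       \]

     where 3xh + h² is the 'error': it changes with the increment h (the gradient varies with the secant), and it vanishes as h becomes indefinitely small, leaving the gradient of the tangent, 3x².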
  6. "How is LEM or its denial a corollary of nilpotency?" Because an increment which produces a noticeable 'error' (defined relatively) is by definition not infinitesimal and therefore LEM applies. The 'error' for polynomials is the sum of the higher power incremental terms. LEM here means non-negligible, not separate in some ideal sense (i.e. axiomatically). That's why I question the justification for LEM in the paper. I can't talk about the background further because the main point of the paper is to show that limits and original infinitesimals are technically equivalent at least, which is very simple. As Klein said "With these elementary presentations there remains always the suspicion that by neglecting successive small quantities we have finally accumulated a noticeable error, even if each is permissable singly. But I do not need to show how we can rule this out, for these questions are so thoroughly elementary that each of you can think them through when you feel so inclined." Elementary Mathematics - From an Advanced Standpoint, Felix Klein, 1908, p190 (NB that is a paraphrased translation). Very few people ever do feel so inclined though. For more of the philosophical background The Continuous and the Infinitesimal by John L Bell is the best guide. Over and out!
  7. That was the part where I try to discern why a foundational crisis even happened in mathematics, which is not the main point of the paper, so I won't get into a debate about that. But you might like this guy's take on it: https://www.quora.com/profile/Eric-Freiling
  8. wtf: Could you quote the line from the paper which is 'demonstrably false'?
  9. "You will never come up with a well-order [sic] of the reals and neither will anyone else. It's a consequence of the Axiom of Choice (AC), so it is inherently nonconstructive." wtf
  10. wtf: Yes, it is related to SIA - one of my main influences was John L Bell. I was very annoyed after reading one of his books, because the simpler proofs offered by SIA which he uses had not been available when I was at school. Of course, they're only simpler for polynomials, but that's an important class of functions. Another very important class is the mechanical functions, which Descartes excludes from consideration in a quote in the paper. But even if they can't be analyzed normally they can be analyzed numerically, i.e. with finite differences, and if you take Granville's approach (as in the last reference) as the best example of how limits can be applied, then finite differences and limits work in a very similar way. This is of course much easier with computers - for a tangent you get a list of numbers converging on a value (see the sketch below). Since this approach also works for polynomials, we can say that limits are a more general theory - Leibniz implies this in a quote I give, where he states what seems to me like the 19th century limit definition. Note that before computers were invented there wasn't much incentive to think about the broader theory, and angst about the continuum didn't 'boil over' until two hundred years after Leibniz and Newton. Bell has written much about the continuum - constructivism holds that saying something is not unequal to something (e.g. zero) does not imply that it is actually equal to it. In pure mathematics, where we can just give a variable a value (as opposed to real life), the only way that condition can be met is if something is smaller than any value you give - which is the essence of the limit criterion and also a description of indefinite smallness, which is why I think they're the same thing and say that LEM has been over-applied. Anyway, when I said Leibniz provoked more debate I meant constructive debate - although not by much, since Leibniz and Nieuwentijt never reconciled their methods. If they had, I probably wouldn't be writing this, but my job requires very obsessive thinking, so the paper was a natural development.
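      A minimal sketch of that computation in Python (my own illustration; the function and the point are arbitrary choices, not from the paper):

        # Finite-difference quotients converging on the gradient of a tangent.
        def difference_quotient(f, x, h):
            # Gradient of the secant through (x, f(x)) and (x + h, f(x + h)).
            return (f(x + h) - f(x)) / h

        f = lambda x: x**3    # a polynomial; the tangent gradient at x is 3*x**2
        x = 2.0               # so the quotients should converge on 12

        for k in range(1, 8):
            h = 10.0 ** -k
            print(f"h = {h:.0e}   quotient = {difference_quotient(f, x, h):.10f}")

      Running this prints 12.61, 12.0601, 12.006001, ... - the 'list of numbers converging on a value' (here 12) described above.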
  11. In reply to wtf - I'll try to address your points. Firstly, you say "there was no theory underlying Newton's calculus till the late 19th century". The reason I focus on Leibniz more than Newton is not only because his notation stuck, but because his version provoked more debate about the underlying theory. In particular, L'Hopital's seminal textbook makes it clear that microlinearity together with nilsquare infinitesimals were the main principles of the subject. Is that theory rigorous? Well, I also point out that if you increment the variable of a polynomial by such an infinitesimal you get (by applying the binomial theorem) a linear equation for the gradient in terms of the increment (sketched below). This is probably why any controversies were ignored until the late 19th century - so why did it become a problem then? I offer some theories about this, the main one being that LEM became very rigid and the idea of something being indefinitely small became unacceptable - although personally I see indefinite precision as qualitatively different from both practical precision and equality. Secondly, you say that "nobody could make infinitesimals rigorous", but the proof section of the paper contains just such a 'rigorization' - the text of the paper is meant to put the proof in a proper context. It may seem incredible that this wasn't done before, but here we are. So why did it take so long? The simplest answer is that from the early 20th century academia split: 'mainstream' academic mathematics was on one side, while physics, engineering and constructive mathematics were on the other. The former camp didn't want nilsquare infinitesimals because they're too focused on polynomials, while the latter camp kept on using them (e.g. Roger Penrose's book The Road to Reality uses them repeatedly) without really explaining why. This divide was written off as philosophical, and further questions were often ignored - as John L Bell put it (from memory), "this [nilpotency] is an intrinsic property, not dependent on comparisons with other quantities". Therefore, if you do want to see whether nilsquares accord with limit theory, you need to consider what is neglected (the polynomial expressions with the higher power incremental terms as their subject) and compare that to what is being kept, which is the first power incremental term. Without question the natural way to do this is by taking a proportion, and if you do that you get an indefinitely small ratio (as demonstrated), which, if you accept this take on LEM, allows such higher power expressions to be neglected.
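      To make the binomial step and the proportion explicit, here is a sketch (my own, for y = xⁿ): with a nilsquare infinitesimal ε (so ε² = 0),

        \[
          (x+\varepsilon)^{n} - x^{n} = n x^{n-1}\varepsilon + \binom{n}{2} x^{n-2}\varepsilon^{2} + \cdots = n x^{n-1}\varepsilon,
        \]

      since every term after the first contains a factor of ε² = 0, giving the gradient n xⁿ⁻¹ exactly. For a finite increment h, the proportion of what is neglected to what is kept is

        \[
          \frac{\binom{n}{2} x^{n-2} h^{2} + \cdots + h^{n}}{n x^{n-1} h} = \frac{\binom{n}{2} x^{n-2} h + \cdots + h^{n-1}}{n x^{n-1}},
        \]

      which becomes indefinitely small as h does - the indefinitely small ratio referred to above.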
  12. To gain true understanding of a subject it can help to study its origins and how its theory and practice changed over the years – and the mathematical field of calculus is no exception. But calculus students who do read accounts of its history encounter something strange: the claim that the theory which underpinned the subject for long after its creation was wrong, and that it was corrected several hundred years later, in spite of the fact that the original theory never produced erroneous results. I argue here that both this characterization of the original theory and this interpretation of the paradigm shift to its successor are false. Infinitesimals, used properly, were never unrigorous, and the supposed rigor of limit theory does not imply greater correctness, but rather the (usually unnecessary) exposition of hidden deductive steps. Furthermore, those steps can, if set out, constitute a proof that original infinitesimals work in accordance with limit theory – contrary to the common opinion that the two approaches represent irreconcilable philosophical positions. This proof, demonstrating that we can adopt a unified paradigm for calculus, is to my knowledge novel, although its logic may have been employed in another context. I also claim that non-standard analysis (the most famous previous attempt at unification) only partially clarified the situation, because the type of infinitesimals it uses is critically different from original infinitesimals. See here for the paper: http://vixra.org/abs/1901.0134. Comments welcome!