Everything posted by wtf
-
Notation study
I haven't followed this thread so I don't know the context of your paste of an excerpt from Keisler's 1977 or so book on NSA. What you wrote here is perfectly true, since both the hyperreals and the standard reals are models of the first-order theory of the reals.

But in order to construct the hyperreals, you need a gadget called a non-principal ultrafilter on the natural numbers. Such a thing exists only in the presence of a weak form of the axiom of choice. So the logical principles needed to build the hyperreals exceed those needed to build the standard reals. Secondly, the hyperreals do not satisfy the least upper bound property, because they are non-Archimedean. Third, Keisler's book is not about research, since the hyperreals were first constructed by Hewitt in 1948 and nonstandard analysis was developed by Robinson in the 1960s. Keisler's intent was to write an NSA-based textbook for freshman calculus. It's telling that in the 44 years since then, no other similar books have been written; and calculus is still overwhelmingly taught in the traditional manner based on limits. There are occasional NSA-based calculus courses given, and studies show that by and large, students come away just as confused about NSA-based calculus as they do from traditional calculus.

So we see that (1) NSA offers no pedagogical advantages (else more schools would have adopted it since 1977 and more texts would have been written); (2) NSA requires a strictly stronger logical foundation than the standard reals, namely a weak form of the axiom of choice; and (3) the hyperreals lack the fundamental defining property of the standard reals, namely the least upper bound property.

As I say, I'm not sure what your point is in pasting this excerpt, so I can't comment on that. I'm just mentioning some context for NSA that you should know about if you care about NSA or wish to make some point based on it.

Some light background reading of interest:
https://en.wikipedia.org/wiki/Transfer_principle
https://en.wikipedia.org/wiki/Ultrafilter
https://math.stackexchange.com/questions/1838272/why-do-we-need-ultrafilter-for-construction-of-hyperreal-numbers
https://en.wikipedia.org/wiki/Least-upper-bound_property
https://en.wikipedia.org/wiki/Archimedean_property
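To spell out the construction I'm referring to (this is the standard ultrapower presentation, not anything specific to Keisler's book): fix a non-principal ultrafilter $\mathcal{U}$ on $\mathbb{N}$ and set

$$ {}^{*}\mathbb{R} \;=\; \mathbb{R}^{\mathbb{N}} / \mathcal{U}, \qquad (a_n) \sim (b_n) \iff \{\, n : a_n = b_n \,\} \in \mathcal{U}. $$

The class of the sequence (1, 1/2, 1/3, ...) is then a positive infinitesimal, and it is exactly the existence of $\mathcal{U}$ that requires the weak choice principle mentioned above.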
-
First Post on Primes
Darn, I'm busted.
-
Any Good Lecture Series on Complex Analysis?
Coursera has a course. It's all video, so you can sign up and watch any time you like; there's no schedule as far as I know. https://www.coursera.org/learn/complex-analysis
-
Do points lie on tangent lines "only?"
Nothing personal, I just happened to run across this exact issue on some other forum a day or two ago, so it was fresh in my mind.
-
Do points lie on tangent lines "only?"
I wish to refine this statement because it's a common point of confusion. You have no proof that "At every point on the curve of the function, you can draw a tangent line, such that 1 point ( only ) is common to both," nor do you have a rigorous definition of what a tangent line is. Rather, we have an INTUITION about what a tangent line is.

In order to make the notion rigorous, we DEFINE the tangent line at a point to be the straight line passing through that point with slope equal to the derivative at that point, if the derivative exists. That is, the slope of the tangent line is NOT "equivalent" to the derivative; rather, it's DEFINED that way. The idea is to make precise the intuitive idea of the tangent line at a point.

If you think (as students often do) that the derivative is "the same" as the slope of the tangent line, that's a misunderstanding of what's going on. There is no tangent line, formally, until we define it via the derivative. Then (for example) we can make rigorous the intuitively clear observation that the graph of |x| has no tangent line at 0. Otherwise, we could have no proof, since without the derivative we have only an intuitive but not a rigorous notion of tangent line.
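To make the definition completely explicit (this is the standard formulation, nothing beyond what I said above): if f is differentiable at a, the tangent line to the graph of f at the point (a, f(a)) is DEFINED to be the line

$$ y = f(a) + f'(a)\,(x - a). $$

For f(x) = |x| at a = 0, the difference quotients from the right approach +1 and those from the left approach -1, so f'(0) does not exist, and by this definition there simply is no tangent line there.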
-
First Post on Primes
I have not given this any detailed thought since weeks ago and, to the minimal extent I've thought about this recently, I agree that you're right. The totient function gives the same answer as the inclusion/exclusion principle for this problem.
-
First Post on Primes
I think you're right that inclusion-exclusion and Euler totient give the same result for your problem.
-
First Post on Primes
If they give different answers then clearly at least one (or possibly both) is inaccurate. But my statement is just a guess and not based on thinking about the problem much. You're probably right.
-
First Post on Primes
A few weeks ago I had the problem space mapped into my brain for an evening. Too busy to remap it at the moment, so probably won't be able to get back to this. Off the top of my head I don't think the totient function will always give the same answer as inclusion/exclusion, but I could be wrong and probably am. Sorry I can't offer more assistance.
-
First Post on Primes
@Tinacity, ps -- Wait DUH! We forgot 11/49. I don't know why we both got confused about this.

I solved the problem. The trick is that if n is divisible by one of 2, 3, or 5, so is 60 - n. So the pairs (n, 60-n) where both elements are relatively prime to 2, 3, and 5 correspond exactly to the numbers n between 1 and 30 with the same property. So the solution is to do inclusion/exclusion on 30 to determine how many numbers are not divisible by 2, 3, or 5; and that's the number of pairs. In the case of 60 there are exactly 8 pairs: 1/59, 7/53, 11/49, 13/47, 17/43, 19/41, 23/37, and 29/31. That's eight.

You can now write a program to do inclusion/exclusion on your original number N, or half of 2N if you think of it that way (that is, 2x3x5 = 30, multiply by 2 to get 60, then do inclusion/exclusion on 30). The "sum to 60" is a red herring, an aspect of the problem that adds confusion but doesn't change the problem. The number of pairs that sum to 60 where each element of the pair is not divisible by 2, 3, or 5 is exactly equal to the number of numbers between 1 and 30 not divisible by 2, 3, or 5. And this result generalizes under the conditions of your problem.

To do inclusion/exclusion on N = 2*3*5*7*11*13*17*19*23 you take:
- The sum of all 8-fold products of its prime factors (that is, every combination of 8 factors at a time);
- Minus the sum of all 7-fold products;
- Plus the sum of all 6-fold products;
and so on with alternating signs, down to the empty product, which is 1.

You then subtract the final sum from N, and that's the number of pairs where both elements are relatively prime. I believe that's it, but if I messed up I hope someone will jump in.
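Since I said "write a program," here is a rough sketch in Python of the count I described. I haven't seen the actual problem statement, so treat it as illustrative; the function names are mine, and the brute-force check is only there to confirm the small case.

```python
from itertools import combinations
from math import prod

def count_coprime(primes):
    """Inclusion/exclusion count of how many n in 1..N are divisible by
    none of the given primes, where N is the product of the primes."""
    N = prod(primes)
    divisible = 0
    for k in range(1, len(primes) + 1):
        sign = (-1) ** (k + 1)            # + for odd-size subsets, - for even
        for combo in combinations(primes, k):
            divisible += sign * (N // prod(combo))
    return N - divisible

def count_pairs_brute(primes):
    """Brute-force count of unordered pairs (n, 2N - n) in which neither
    element is divisible by any of the given primes."""
    N = prod(primes)
    def ok(m):
        return all(m % p for p in primes)
    return sum(1 for n in range(1, N + 1) if ok(n) and ok(2 * N - n))

small = (2, 3, 5)
print(count_coprime(small), count_pairs_brute(small))   # both print 8

big = (2, 3, 5, 7, 11, 13, 17, 19, 23)
print(count_coprime(big))   # the inclusion/exclusion count for the big N
```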
-
First Post on Primes
Is this a matter of being concerned that someone will steal your idea? If you post your actual problem perhaps someone can offer some help.

For what it's worth, here's what I did with inclusion/exclusion. Suppose we want to know how many numbers from 1 through 30 are not divisible by any of 2, 3, or 5. We calculate how many ARE divisible by at least one of them as follows:

1/2 x 30 = 15
1/3 x 30 = 10
1/5 x 30 = 6

That adds up to 31. Now for the double counts, which must be subtracted:

1/6 x 30 = 5
1/10 x 30 = 3
1/15 x 30 = 2

That adds up to 10, to be subtracted. Now we must ADD back the triple counts: namely, 1/30 x 30 = 1. So we have 31 - 10 + 1 = 22. Therefore there are 30 - 22 = 8 numbers NOT divisible by any of 2, 3, or 5. Indeed we can count them by hand: 1, 7, 11, 13, 17, 19, 23, and 29. Eight as calculated.

Now the problem is that we have not accounted for the pairs 29/31, or 23/37, etc., because the larger numbers of the pair are out of our range. So if you figure out how to account for the "sum to 60" aspect of the problem, you'll be able to work this out. Do feel free to give more information about your actual problem, or not, as you see fit.

Then again, when Hilbert offered to help Einstein with general relativity, Einstein at first welcomed his offer; but then realized that Hilbert was trying to solve the problem first and take credit. So maybe you're right not to give too much away! LOL.

ps -- Wait DUH! We forgot 11/49. I don't know why we both got confused about this. I solved the problem. The trick is that if n is divisible by one of 2, 3, or 5, so is 60 - n. So the pairs (n, 60-n) where both elements are relatively prime to 2, 3, and 5 correspond exactly to the numbers n between 1 and 30 with the same property. So the solution is to do inclusion/exclusion on 30 to determine how many numbers are not divisible by 2, 3, or 5; and that's the number of pairs. In the case of 60 there are exactly 8 pairs: 1/59, 7/53, 11/49, 13/47, 17/43, 19/41, 23/37, and 29/31. That's eight. You can now write a program to do inclusion/exclusion on your original number N, or half of 2N if you think of it that way (that is, 2x3x5 = 30, multiply by 2 to get 60, then do inclusion/exclusion on 30). The "sum to 60" is a red herring, an aspect of the problem that adds confusion but doesn't change the problem. I believe that's it, but if I messed up I hope someone will jump in.
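If anyone wants to sanity-check that arithmetic by machine, here's a throwaway brute-force count in Python. It covers only the 1-through-30 example above, not the actual problem, which I still haven't seen.

```python
# Numbers 1..30 not divisible by any of 2, 3, 5, counted and listed.
survivors = [n for n in range(1, 31) if n % 2 and n % 3 and n % 5]
print(len(survivors), survivors)
# Prints: 8 [1, 7, 11, 13, 17, 19, 23, 29], matching 30 - (31 - 10 + 1).
```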
-
First Post on Primes
If order matters then you have 14 + 16 and 16 + 14 and likewise for all the other pairs except for 15 + 15, which is its own reverse. Do you mean order doesn't count, rather than that it does?

What happened to 1/59? I had a couple of mistakes in my own list, I had 11/49 which shouldn't be there, and I forgot 7/53. So there are 7 such pairs, 14 if order matters. I assume by your example that order DOESN'T matter. Still we have 7 pairs: 1/59, 7/53, 13/47, 17/43, 19/41, 23/37, and 29/31. That's seven. I think we have them all.

The question comes down to taking 60 and asking, out of the first 30 positive integers, how many are divisible by at least one of 2, 3, or 5. Half are divisible by 2; 1/3 are divisible by 3, but we counted the ones divisible by 6 twice; and 1/5 are divisible by 5, but we counted the 10's and the 15's twice. But then we subtracted the ones divisible by 30 once too much, so we have to add it back in.

I believe you attack this kind of problem with the inclusion/exclusion formula. In fact the first example here shows how to count how many numbers from 1 to 100 are divisible by at least one of 2, 3, or 5, our exact problem here. https://en.wikipedia.org/wiki/Inclusion–exclusion_principle

I jotted down some numbers but I was off-by-one somewhere, which doesn't surprise me.
-
First Post on Primes
Can you show your work in detail? I still don't understand the basic question. 2*3*5 = 30. Multiply that by 2 and you get 60. Please explain the rest because I totally do not understand what we are doing here. Where did the -2's come from? That hasn't been part of your exposition.

ps -- Ok I totally don't get this. Pairs that sum to 60 and have no divisors among 2, 3, 5: I get

1, 59
11, 49
13, 47
17, 43

That's already four, eight if you distinguish order, and there are plenty more. So please explain clearly what you are doing. Others are

19, 41
23, 37
29, 31

That's a total of 7, times 2 to account for order as you said earlier, so there are 14 pairs that satisfy your requirement, not 3. Where do these -2's come from? In one example earlier you had 17-2 as a factor but that's not prime. Maybe it's just me but I do not understand what is being calculated. Can you work out a complete example, a simple one? Apologies if I'm being dense and this is obvious to everyone but me. Those of you who wrote programs to solve this problem, what problem are you solving? Am I just missing something that's obvious to everyone else?
-
First Post on Primes
Have you tried this out by hand for a simpler case, say N = 2 x 3 x 5 = 30? Then dropping the first and last gives you 3. Can you make your idea come out with that example? Is there something special about the case you're presenting?
-
First Post on Primes
Haven't followed the thread but this seems a little ambiguous. Let's take a simpler example, N = 2 x 3 x 5 x 7 = 210. Now you want to consider the pairs of numbers that sum to 210, such as (1, 209), (2, 208), ..., (209, 1). Do you care about order? Is (1, 209) the same as (209, 1) or different? Not a big issue, just a factor of 2, but nice to know what the intended interpretation is.

Now "... where neither is divisible by any of the primes which make its product?" was confusing to me. What is "its" in this context? Do you mean that since 209 = 11x19, and neither 11 nor 19 is one of 2, 3, 5, or 7, we count (1, 209) and (209, 1) as satisfying your condition? And you want to count the number of such pairs? Just want to make sure I'm understanding the question. Apologies to all if this has already been covered in the thread.
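Under the reading I'm guessing at (which may well not be the intended problem), here's a quick brute-force count for 210 in Python. It reports the ordered and unordered pair counts separately, since that's exactly the ambiguity I'm asking about.

```python
# Brute force for the reading I'm asking about: pairs (a, b) with a + b = 210
# and neither a nor b divisible by 2, 3, 5, or 7. May not be the intended problem.
primes = (2, 3, 5, 7)
total = 210

def good(m):
    return all(m % p for p in primes)

ordered = [(a, total - a) for a in range(1, total) if good(a) and good(total - a)]
unordered = {frozenset(pair) for pair in ordered}
print(len(ordered), len(unordered))   # prints 48 and 24 on this reading
```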
-
Infinitesimals and limits are the same thing
> it means smaller than any positive number to which you can assign a value Can you give an example or an explanation of a positive number to which you can't assign a value? I can't imagine what that could possibly mean. Like 14. I can assign the value 14 to it. What does it mean to assign a value to a number? Isn't the value of a number the number itself? I cannot understand this remark at all.
-
Infinitesimals and limits are the same thing
No, that is exactly wrong. A limit is what you get when the increment is ARBITRARILY small. The increment is always strictly positive but can be taken as close as you like to zero. I now see clearly the source of your confusion. You don't know what a limit is. You have a freshman calculus understanding at best. If you would take the trouble to learn the actual definition of a limit, you would see that no infinitesimals are involved.
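For reference, here is the actual definition, the standard epsilon-delta formulation found in any real analysis text:

$$ \lim_{x \to a} f(x) = L \quad\Longleftrightarrow\quad \forall \varepsilon > 0\ \exists \delta > 0\ \forall x\ \bigl( 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon \bigr). $$

Every quantity appearing here is an ordinary real number. Nothing is ever zero-but-not-zero; the condition is simply required to hold for every positive epsilon, however small.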
-
Infinitesimals and limits are the same thing
Can you say more about second order logic in this context? My understanding is that nonstandard analysis is an alternative model of the FIRST order theory of the real numbers. If you go to second order logic you can express the completeness property (every nonempty subset of the reals that is bounded above has a least upper bound). And any ordered field containing infinitesimals is necessarily INCOMPLETE in that sense. So second order logic would seem to preclude infinitesimals entirely. I'm not an expert, but I would appreciate context.
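For concreteness, the property I have in mind, written so the second-order quantification over subsets is visible:

$$ \forall S \subseteq \mathbb{R}\;\Bigl[\bigl(S \neq \varnothing \;\wedge\; \exists b\,\forall s \in S\,(s \le b)\bigr) \;\Longrightarrow\; \exists u\,\bigl(\forall s \in S\,(s \le u) \;\wedge\; \forall v\,(\forall s \in S\,(s \le v) \Rightarrow u \le v)\bigr)\Bigr]. $$

The first-order transfer principle only covers statements about individual reals, not statements quantifying over arbitrary subsets like this one, which is why the hyperreals can satisfy every first-order sentence true of the reals and still fail the least upper bound property.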
-
Infinitesimals and limits are the same thing
There's a pdf of Bell online.
-
Infinitesimals and limits are the same thing
Hardly bears on the history of the limit concept and whether smooth infinitesimal analysis was prefigured in the 17th century.
-
Infinitesimals and limits are the same thing
I think the OP is not around, but I read through the paper a couple of times and have some thoughts. There are two things going on in the paper.

One, the OP is making the point that there are striking similarities between infinitesimals as they were used in 17th century math and the nilsquare infinitesimals of smooth infinitesimal analysis (SIA). This point of view says that, say, if we went back to the 17th century but knew all about category theory and differential geometry and SIA, we could easily show them how to logically found their subject. They were close in spirit. Ok. That might well be, and I don't agree or disagree, not really knowing enough about SIA and knowing nothing about Leibniz (being more a Newton fan). So for sake of discussion I'll grant the OP that point.

But the other thing that's going on is that the OP seems to feel that the history itself supports the idea that they somehow understood this, or that they had a rigorous theory of infinitesimals that was shoved aside by the theory of limits in an act more political than mathematical. That's the second thesis of the paper as I understand it. But the OP presents no historical evidence, none at all, that there was any kind of rigorous theory of infinitesimals floating around at the time.

On the contrary, the history is that Newton himself well understood the problem of rigorously defining the limit of the difference quotient. As the 18th century got going, people noticed that the lack of foundational rigor was causing them trouble. They tried to fix the problem. In the first half of the 19th century they got calculus right, and in the second half of the 19th and the first quarter of the 20th, they drilled it all the way down to the empty set and the axioms of ZFC. That is the history as it is written, and there isn't any alternate history that I'm aware of. If there were, I would be most interested to learn about it. The OP makes a historical claim, but doesn't provide any historical evidence. That bothers me.

So to sum up:

* From our modern category-theoretic, non-LEM, SIA perspective, all of which is math developed only in recent decades, we can reframe 17th century infinitesimals in modern rigorous terms. I accept that point for sake of discussion, though I have some doubts and questions.

* But on the historical point, you are just wrong till you show some evidence. The historical record is that the old guys KNEW their theory wasn't rigorous, and that as time went by this caused more and more PROBLEMS, which they eventually SOLVED. They never had a rigorous theory and they never thought they had a rigorous theory. But if they did I'd love the references.
-
Infinitesimals and limits are the same thing
@dasnulium, Can you please explain this passage? "Mathematicians could however always claim that they were not assuming that the so-called law of excluded middle (LEM) applies to the continuum, and that nilpotency is a corollary of this. But as the supporters of LEM gained influence in the late nineteenth century this position became less tenable; ..."

* What does it mean that LEM does or doesn't "apply to the continuum"? That makes no sense to me. LEM applies or doesn't apply to propositions.

* How is nilpotency a corollary of not assuming LEM?

* The supporters of LEM gained influence in the 19th century? Are you making the claim that 17th and 18th century mathematics was a hotbed of LEM denial? That flies in the face of the written history, doesn't it? My understanding is that denial of LEM came into math via Brouwer in the early 20th century, and not before then; and that it's making a contemporary resurgence due to the computational viewpoint. But to say that the supporters of LEM gained influence in the 19th century doesn't seem right. My understanding is that LEM had universal acceptance in math until Brouwer.

Would appreciate clarity on these points, thanks.
-
Infinitesimals and limits are the same thing
I'm taking another run at your paper. I just read the intro. Some of this is sinking in. I agree with your point that infinitesimals in the hyperreals are not nilpotent, hence aren't quite the right model for the powers of epsilon that go away. Am I getting that? I think you are clarifying the distinction between an approach like SIA and the nonstandard analysis model. I think you have a good point.

Now what I am not too sure about is what you are saying about the status of the infinitesimal approach. I always thought it was a search for rigor; but I think you're saying they already had rigor and got unfairly demoted. Am I understanding this right? My point earlier was that SIA is very recent and quite modern in the sense of being based on category theory. They did not have that point of view in the 18th century. As far as I know. Is that the case you're trying to make?
-
Infinitesimals and limits are the same thing
"The third possibility is that the crisis was a side-effect of the introduction of Georg Cantor’s theory of transfinite numbers. The theory depends on the Axiom of Choice, which implies LEM for the continuum ..." Sorry you didn't claim the reals require choice, you claimed Cantor's theory of transfinite numbers does. Equally wrong. And what does implying "LEM for the continuum" mean?
-
Infinitesimals and limits are the same thing
That's as true today as when I wrote it a few weeks ago. But you surely don't need choice to define the reals. See any modern textbook on real analysis for a construction of the reals using only the axioms of ZF. I'm not sure why you took my correction of a minor and inconsequential error in your paper and doubled down with a demonstrably false claim. It seems like digging the hole deeper when a simple "Thanks for the clarification" would be appropriate.
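To be concrete about "only the axioms of ZF": in the standard Dedekind cut construction, a real number is simply a set of rationals

$$ r \subsetneq \mathbb{Q}, \qquad r \neq \varnothing, \qquad (p \in r \wedge q < p) \Rightarrow q \in r, \qquad r \text{ has no greatest element}, $$

with order given by inclusion and the arithmetic operations defined by explicit set-theoretic formulas. No choice principle appears anywhere in the construction.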