Posts posted by 113


Normally limits are used instead of infinitesimals, but is it possible to calculate limits using infinitesimals?
For example:
\[ \lim_{x \to 0} \frac{x - \sin(x)}{x^3} \]
This is usually solved by applying L'Hôpital's rule three times, and the answer is 1/6:
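Assuming the intended numerator is x − sin(x) (the version whose limit is +1/6; with sin(x) − x the limit is −1/6), a numeric plug-in with a small finite x, rather than an infinitesimal, gives a quick sanity check (the helper q and the sample point 1e-3 are just illustrative choices):

```python
import math

def q(x):
    # difference quotient (x - sin x) / x**3; its limit as x -> 0 is 1/6
    return (x - math.sin(x)) / x**3

print(q(1e-3))  # close to 0.1666...
```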
0 
Every point of a number line is assumed to correspond to a real number.
https://en.wikipedia.org/wiki/Number_line
Is it possible to find points corresponding to infinitesimals on a number line? I mean finding an infinitesimal between two neighbouring points (between two real numbers).
I am assuming that every point is surrounded by a neighbourhood. I got this idea of neighbouring points from John L. Bell's book A Primer of Infinitesimal Analysis (2008).
On page 6, he mentions the concept of ‘infinitesimal neighbourhood of 0’. But I think he would not consider his infinitesimals as points because on page 3 he writes
that "Since an infinitesimal in the sense just described is a part of the continuum from which it has been extracted, it follows that it cannot be a point:
to emphasize this we shall call such infinitesimals nonpunctiform."
0 
On 9/2/2019 at 12:10 AM, studiot said:Finally you should beware of the use of the following symbols
Δx; δx; dx and D(x)
These are used somewhat inconsistently to my way of thinking
The limit definition of the derivative in my previous post contains only the symbols h (corresponding to Δx) and dx. There is no δx.
It seems to me that the introduction of "differential calculus" gives rise to the symbol δx. Then there appear to be two representations:
f'(x) = dy/dx
f'(x) = δy/δx
I think it is possible that the usage of δy/δx was chosen to escape a problem arising in real-number calculus: the problem of 0/0.
0 
On 9/2/2019 at 12:10 AM, studiot said:Well Bell is aware of the problem, as were Lawvere, Bishop and the authors of my extract, Hellman and Shapiro.
As far as I am aware all use a scheme (with variations) that runs as follows
Postulate an 'extended' set R (usually extended with infinitesimals of some sort)
Postulate a suitable set of axioms to deduce the properties that are inheritable in R alone
Show that the usual rules of calculus work in the extended set
Use the axioms to transfer this to calculus in R.
I am beginning to suspect that calculus is not based on real numbers. Look at the definition of the derivative:
\[\frac{dy}{dx} = \lim_{h\to 0}\frac{f(x+h) - f(x)}{h}\]
where h is finite.
What is dy/dx? An infinitesimal ratio? A ratio of two infinitesimals dy and dx ? It seems to me that we are not dealing with real numbers anymore if dy and dx
are not real numbers.
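The quotient inside that limit is, for every finite h ≠ 0, an ordinary ratio of real numbers; only the limiting value enters the derivative. A small numeric illustration (the function t² and the point x = 3 are arbitrary choices for the example):

```python
def diff_quotient(f, x, h):
    # (f(x+h) - f(x)) / h: a ratio of real numbers for every finite h != 0
    return (f(x + h) - f(x)) / h

f = lambda t: t * t
for h in (1e-1, 1e-3, 1e-6):
    # for f(t) = t**2 the quotient is exactly 2x + h, so it tends to 6 at x = 3
    print(h, diff_quotient(f, 3.0, h))
```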
0 
5 hours ago, studiot said:I don't know since you haven't told me.
But whether you use (x+dx) or (x+2x) or (x+anything that is not a real number) is irrelevant until you justify what arithmetic you are using that permits this and defines what the result is.
Do you consider (x+dx) to be a real number or what?
I don't know. Maybe the answer can be found in the book I am studying.
John L. Bell is defining the ‘derivative’ of an arbitrary given function f : R → R.
For fixed x in R, define the function g: Δ → R by g(ε) = f(x + ε) so that
f(x + ε) = f(x) + εf'(x)
is the fundamental equation of the differential calculus in S for arbitrary x in R and ε in Δ. (Δ may be considered an infinitesimal neighbourhood or microneighbourhood of 0.)
Also he states the Microaffineness Axiom:
For any map f: Δ → R there exist unique a, b ∈ R such that f(ε) = a + bε
for all ε ∈ Δ.
He draws a conclusion: Our single most important underlying assumption will be: in S, all curves determined by functions from R to R satisfy the Principle of Microstraightness. The Principle of Microaffineness may be construed as asserting that, in S, the microneighbourhood Δ can be subjected only to translations and rotations, i.e. behaves as if it were an infinitesimal ‘rigid rod’. Δ may also be thought of as a generic tangent vector because Microaffineness entails that it can be ‘brought into coincidence’ with the tangent to any curve at any point on it. Since we will shortly show that Δ does not reduce to a single point, it will be, so to speak, ‘large enough’ to have a slope but ‘too small’ to bend.
Quote: You are still missing the points of the questions I am asking. Since it is your proposition it is for you to state clearly the system of algebra (and its rules) in which you are working. SIA has problems of its own. Here is an extract from a recent paper. Mathematicians have struggled with the philosophic problem of dealing with this for centuries, but no one has yet come up with a watertight answer, or a better one than the limit process, which has many other uses to boot.
I don't have all the answers to your questions; I am only studying the subject. Don't expect me to have all the answers if no one else has been able to find them. I am looking for them in the books. I am not developing my own system of algebra.
0 
52 minutes ago, studiot said:Now you have the sum (x + 2dx) which begs two questions
How is addition defined in your working?
Do you think the sum (x + 2dx) is using some different system of algebra than, for example, the sum (x + dx) ?
Quote: How is multiplication defined in your working?
In other words which system of algebra (arithmetic) are you working?
This is necessary since you cannot be using the standard axioms of arithmetic.
I did not invent my own system of arithmetic. I am currently learning what John L. Bell has written in his book. I think you should ask the same question about what axioms of arithmetic are used in an infinitesimal approach: dx is a nilsquare infinitesimal, meaning (dx)² = 0 is true, but dx = 0 need not be true at the same time. So how is multiplication defined here? What axioms of arithmetic are being used? Maybe they are to be found in John L. Bell's book; he writes: "As we show in this book, within smooth infinitesimal analysis the basic calculus and differential geometry can be developed along traditional ‘infinitesimal’ lines – with full rigour – using straightforward calculations with infinitesimals in place of the limit concept. And in the 1970s startling new developments in the mathematical discipline of category theory led to the creation of smooth infinitesimal analysis, a rigorous axiomatic theory of nilsquare and nonpunctiform infinitesimals."
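The nilsquare multiplication rule can at least be mimicked computationally with dual numbers, a classical construction in which a formal symbol ε satisfies ε² = 0. This is only a sketch of the arithmetic, not Bell's smooth infinitesimal analysis itself (SIA also relies on intuitionistic logic); the class name Dual and the helper derivative are my own illustrative choices:

```python
class Dual:
    """Numbers of the form a + b*eps, where eps**2 == 0 (nilsquare)."""
    def __init__(self, a, b=0.0):
        self.a = a  # real part
        self.b = b  # coefficient of the infinitesimal eps

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, because eps**2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    __rmul__ = __mul__

def derivative(f, x):
    """Read f'(x) off the eps-coefficient of f(x + eps)."""
    return f(Dual(x, 1.0)).b

print(derivative(lambda t: t * t, 3.0))  # 6.0, matching f(x + eps) = f(x) + eps*f'(x)
```

Note that division by a dual number with zero real part is deliberately not defined: a nilsquare element has no inverse, which echoes Bell's remark quoted above.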
0 
8 hours ago, studiot said:I was in two minds about continuing this discussion.
However, although you belatedly (second post) announced that you are considering dx as an 'infinitesimal', you still haven't clarified your first opening post.
What type of algebra are you using that allows you to write
f(x + 2dx) ?
\[ f'(x) = \frac{f(x+dx) - f(x)}{dx}\]
\[ f'(x + dx) = \frac{f(x + 2dx) - f(x + dx)}{dx}\]
\[ f''(x) = \frac{df'(x)}{dx} = \frac{f'(x+dx) - f'(x)}{dx}\]
from which, after a calculation (I skip writing out this lengthy LaTeX now; you may try it yourself), it is possible to get the result in my first post, the definition of the second derivative:
\[ f''(x) = \frac{f(x+2dx) - 2f(x + dx) + f(x)}{(dx)^2}\]
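For a concrete check of that second-difference formula, take f(x) = x². Expanding gives f(x+2h) − 2f(x+h) + f(x) = 2h² for any step h, so the quotient equals 2, matching f''(x) = 2 (here h is a small finite number standing in for dx):

```python
def second_forward(f, x, h):
    # (f(x + 2h) - 2*f(x + h) + f(x)) / h**2
    return (f(x + 2*h) - 2*f(x + h) + f(x)) / h**2

f = lambda t: t * t
print(second_forward(f, 1.0, 1e-3))  # approximately 2.0
```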
0 
On 8/29/2019 at 7:32 PM, 113 said:There is a book available, even for free download, A Primer of Infinitesimal Analysis by John L. Bell. It is possibly what I am looking for. The book says that: "A remarkable recent development in mathematics is the refounding, on a rigorous basis, of the idea of infinitesimal quantity, a notion which, before being supplanted in the nineteenth century by the limit concept, played a seminal role within the calculus and mathematical analysis." (direct quote)
Also an interesting note from the book: "A final remark: The theory of infinitesimals presented here should not be confused with that known as nonstandard analysis, invented by Abraham Robinson in the 1960s. The infinitesimals figuring in his formulation are ‘invertible’ (arising, in fact, as the ‘reciprocals’ of infinitely large quantities), while those with which we shall be concerned, being nilpotent, cannot possess inverses." (direct quote)
The question is: why can't John L. Bell's nilpotent infinitesimals possess inverses?
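One standard answer, valid for nilpotents in any commutative ring (so presumably the argument Bell has in mind): suppose ε² = 0 and ε had an inverse ε⁻¹. Then

\[ 1 = (\varepsilon \varepsilon^{-1})^2 = \varepsilon^2 (\varepsilon^{-1})^2 = 0 \cdot (\varepsilon^{-1})^2 = 0, \]

a contradiction. So a nilsquare infinitesimal cannot possess an inverse.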
0 
10 minutes ago, studiot said:Please answer this question directly.
Otherwise I can see no point continuing the conversation.
I am not here to talk about those subjects. There are already enough books about them available. In the beginning, in my second post, I said that I am dealing with an infinitesimal approach: dx is a nilsquare infinitesimal, meaning (dx)² = 0 is true, but dx = 0 need not be true at the same time.
0 
15 minutes ago, studiot said:Finite differences are not infinitesimals.
In my first post dx is an infinitesimal
15 minutes ago, studiot said:Further have you considered the second analytic derivative in your example, which is a constant ?
yes, I have obtained f''(x) = 2 using the definition in my first post
0 
18 minutes ago, studiot said:This is a purely analytical example, so why are you wanting to use a definition from an article entitled
just to see how useful an infinitesimal approach is
0 
1 hour ago, studiot said:It would help if you answered this question.
Let's choose an example \[ f(x) = x^2 \]
using the definition in my first post, obtain the second derivative of f(x)
0 
On 8/8/2019 at 12:38 AM, mathematic said:Because it is nonsense.
it is possible to use division by zero:
0 
On 5/28/2019 at 11:46 PM, mathematic said:dx etc. are symbols used for derivatives and integrals. They are not supposed to have numeric values.
What's wrong with division by zero ?
0 
On 5/24/2019 at 12:37 AM, studiot said:I wouldn't recommend it since the dx² in
\( \frac{d^2y}{dx^2} \)
means something slightly different than h², i.e. not the same as (dx)² would be.
dx is a nilsquare infinitesimal, meaning (dx)² = 0 is true, but dx = 0 need not be true at the same time.
https://en.wikipedia.org/wiki/Smooth_infinitesimal_analysis
A problem seems to arise because there appears to be a division by zero in that case.
0 
Is it possible to define the second derivative of f(x) in this way:
\[ f''(x) = \frac{f(x+2dx) - 2f(x + dx) + f(x)}{(dx)^2} \]
I am using a finite difference approximation called "second-order forward" from the link; I use dx instead of h:
https://en.wikipedia.org/wiki/Finite_difference#Higher-order_differences
0 
On 10/24/2018 at 10:29 AM, taeto said:I still have no idea what you mean by your dx . It is used in various roles in Calculus, Analysis and Differential Geometry, and none of them agree with what you say about it.
I have explained from the very beginning what I mean by dx:
An introduction of infinity brings a duality into the definition of an infinitesimal, meaning that we have to deal with objects that are both zero and nonzero at the same time.
From the very beginning you evaded answering my question: "is 1/∞ zero or nonzero?"
That's why you are stuck at asking what I mean by dx. I have explained everything exactly. It seems to me that you are ignoring what I have written. Also those sources you mention don't deal with the duality I mentioned, they simply ignore it. That's why it may look as though they don't agree with what I have written. But the truth is they don't even deal with it.
Quote: Now you state (dx)² = 0. When you use an equality = sign, do you actually mean that the things on either side are identical? And your original post assumes that it makes sense to divide by dx, right? So then if you take (dx)² = 0 and divide by dx on both sides, does dividing the same quantity by the same quantity produce different results depending on whether the quantity is on the LHS or the RHS of the equality sign? If you carry out this division step, do you see why it is confusing, at least to some people, when you insist that dx > 0?
It may look as though I am dividing by 0. But again, the result depends on the order in which the calculation is done:
take (dx)² = 0 and divide by dx on both sides:
\(\frac{(dx)^2}{dx} = \frac{0}{dx}\)
which is the same as \(\frac{0}{dx} = \frac{0}{dx}\); this looks valid because it is 0 = 0.
On the other hand, if the simplification is done first:
\(\frac{(dx)^2}{dx} = \frac{0}{dx}\)
\(dx = 0 \)
There is confusion only if you ignore the duality that I mentioned, I don't simply just insist that dx > 0. The confusion arises if you ignore that dx is both zero and nonzero at the same time. You are trying to force, or define, dx to be either 0 or nonzero.
Quote: When you try to understand calculus, you should become familiar with the standard limit argument and be able to recognize that studiot applied it.
No, he did not apply limits; he just stated that he can ignore (dx)² if dx is small enough, exactly what I have done.
Quote: It looks like you are making up your own stuff. Have you ever seen identities like dx = 1/∞ or expressions like x + dx in any text which seriously teaches calculus?
No, I am not making up my own stuff. I have seen identities like 1/∞ = 0 in many texts which seriously teach calculus. I have also seen that many of them ignore the duality I mentioned, but not all of them. So I did not make up the duality myself; I am dealing with it.
Quote: Anyway, you are obviously not developing any "mathematics", seeing that you only work with undefined concepts.
No, I don't work with such concepts. The reason is that you can't define, or force, dx to be either 0 or nonzero, because it is both. You are ignoring the problem.
0 
On 10/22/2018 at 6:27 PM, taeto said:I had thought that you were attached to the idea that you may "ignore" (dx)²
at any time before you finish the calculation. Happy to see that is not the case.
You are "ignoring" terms only when doing so will lead you to the correct result?
No, I am not ignoring terms in order to get the correct result. I am ignoring terms that are equal to 0.
The result of \(\frac{(dx)^2}{(dx)^2}\) depends on the order: if the simplification is done first, the result is different than if (dx)² = 0 is used:
\(\frac{(dx)^2}{(dx)^2} = \frac{dx}{dx} \cdot \frac{dx}{dx} = \frac{dx}{dx}\) ← simplification done first
\(\frac{(dx)^2}{(dx)^2} = \frac{0}{0}\) ← (dx)² = 0 is used
So how can you be certain that \(\frac{dx}{dx} = \frac{0}{0}\) ?
It could be that \(\frac{dx}{dx} = 1\); it does not need to be equal to \(\frac{0}{0}\), precisely because dx can be nonzero. So why should \(\frac{dx}{dx} = \frac{0}{0}\)?
Quote: You have not volunteered much motivation to show that your "definition" of a derivative is somehow more practical than the standard definition. You refer to studiot's calculation, in which he uses a standard limit argument.
No, he does not use a standard limit argument; he does not use limits at all. I don't see why this very fact is so hard for you to admit. Is it because what he has shown supports what I have written? You decided that I am doing calculations in "my style", so how could it look possible to you that studiot is also doing calculations in "my style"? You just can't admit that I am not developing my own mathematics from scratch. All that I am trying to do is understand calculus. I use calculus; I don't invent my own calculus.
Quote: How about another different case to show how your suggestion may be the superior one. Now let g be the function for which g(x) = x if x is real, and g(x) = 0 if x is not real. Of course in real analysis you only consider domains that are subsets of the set of real numbers; accordingly you would compute a derivative of g'(0) = 1 at x = 0, if x = 0 is contained in an open interval of the domain of g. But that would be by the classical definition of derivative. What would your answer be, and how would you arrive at it?
I am not sure of what you want me to do.
0 
20 hours ago, taeto said:Are you sure that you can divide by (dx)² ? Do we not have to ignore (dx)² before we continue?
I am not sure if I can divide by (dx)². But I am not sure about you. Maybe you can ignore (dx)² before you continue, but not me.
Quote: This is precisely the reason why we can see that studiot was doing the classical computation. When he divides his version of (dx)² by dx he will end up with a term dx, the limit of which is 0,
He did not take the limit.
Quote: meaning that as an additive term it can be "ignored", that is, already considered to add 0 to the result. However, when you divide (dx)² by dx you get dx, a term which you insist on being nonzero, hence it cannot be ignored in the same sense.
But you said that I am doing calculations in "my style", so that I should not end up with an extra term dx, because I am ignoring (dx)². So why are you now telling me that I am ending up with the extra term dx? Am I doing calculations in "my style" or am I not?
Quote: Apparently studiot used "ignoring" to mean that we can replace additive terms which have limit equal to 0 by 0 itself when we take a limit.
He did not take the limit.
Quote: This is a very precise notion.
Perhaps if he had taken the limit.
Quote: But you seem to have a different idea of what it means to "ignore". I.e. if we do calculations your style, do we get \(1 = \frac{dx}{dx} = \frac{(dx)^2}{(dx)^2} = \frac{0}{0}\) because we can "ignore" (dx)² ?
That more or less looks like doing calculations in your style, not mine.
0 
On 10/16/2018 at 5:26 PM, taeto said:Again: to say "dx = 1/∞" does not make sense unless you explain it. Why actually do you think that 1/∞² can be "ignored", and 1/∞ cannot be ignored? They look pretty much similar to me.
You are telling me that dx and (dx)² are the same. In that case we would have \(\frac{dx}{(dx)^2} = 1 \).
Instead, what we have is \(\frac{dx}{(dx)^2} = \frac{1}{dx} = \infty \).
Quote: What studiot did was this. He replaced the h in the usual definition by your dx, and then did what one does when using the classical definition, namely calculate the expression and calculate the limit as h→0 (respectively dx→0). The (dx)² bit means that when you divide h² by h, then you get h, and h→0, i.e. it "can be ignored".
He did not calculate the limit; he just stated that "If dx is a very small quantity then (dx)² is insignificant and may be ignored". When I asked him to tell exactly how small dx must be so that (dx)² can be ignored, he told me to look for an answer in a book.
Quote: However, it seems that you cannot ignore the extra term dx, and it will remain a part of the answer. And you tell us that it is not zero. According to your thinking, the derivative is not zero.
No, the extra term dx will not remain part of the answer, precisely because of what studiot said: "If dx is a very small quantity then (dx)² is insignificant and may be ignored". It would remain part of the answer only if it did not satisfy what studiot said, that is, if it were not small enough. All that I am asking is for him to explain exactly how small it must be. I have said that dx is exactly 1/∞, meaning it is infinitely small; only in that case can (dx)² be ignored.
0 
16 hours ago, uncool said:Technically, it does. You can define the formal "number" infinity, adjoin it to the real numbers, and use the resulting field. The problem with using this for calculus is that you then have to define your functions to include those new points. What is sin(1/infinity)? What is e^(1/infinity)?
Using \( dx = 1/\infty \)
\( \sin(x) = x - x^3/3! + x^5/5! - ... \)
\( \sin(dx) = dx - {(dx)}^3/3! + {(dx)}^5/5! - ... = dx \)
\( e^x = 1 + x + x^2/2! + x^3/3! + ...\)
\( e^{dx} = 1 + dx + {(dx)}^2/2! + {(dx)}^3/3! + ... = 1 + dx\)
because \( {(dx)}^2 \) and higher powers of dx are equal to 0
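Those truncations can be probed with a small finite number in place of dx (a float stand-in, since 1/∞ is not representable): the discarded tails are of order dx³ and dx² respectively, far smaller than dx itself. The step 1e-6 below is an arbitrary illustrative choice:

```python
import math

h = 1e-6  # small finite stand-in for dx
print(math.sin(h) - h)        # about -h**3/6: negligible next to h
print(math.exp(h) - (1 + h))  # about h**2/2
```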
43 minutes ago, taeto said:Great. So we will get to see how you use your formula to calculate the derivative at x = 0 of the function f given by f(x) = x² for every x ∈ R, like I asked before? Why not show us how it actually works? Show how it is meaningful.
studiot already showed how to do it. Since he said that my terminology is not exact, I asked how small dx must be so that (dx)² can be ignored, but he said that I must look for an answer in some book. I have said that dx = 1/∞ exactly, so that (dx)² can be ignored. Am I more exact than his book?
0 
On 10/7/2018 at 11:10 AM, taeto said:If your math book suggests to compute the derivative of y as the limit of the difference quotient of f , without saying first that y=f(x) , then it makes sense for you to throw it in the recycling bin.
Maybe it can.
Quote: But this is analysis, a particular branch of math that deals with functions of real and complex numbers. If you ask whether it makes sense to have a real number d with the property 0 < d ≤ x for all real numbers x, then the answer is that such a d cannot exist, because, e.g., d/2 satisfies 0 < d/2 < d in R, which contradicts that d has the required property.
1/∞ is not a real number. Does it make sense to have 1/∞ with the property 0 ≤ 1/∞ < x for all real numbers x ?
On 10/7/2018 at 11:10 AM, taeto said:If this is not the property that you would want an "infinitely small" number to have, then what is the property that you are thinking of? Will you evade to answer that question?
I am not the one who is evading answering questions.
On 10/7/2018 at 11:10 AM, taeto said:Just writing up some string of symbols like 1/∞ doesn't always point to something that makes any sense. I ask again, what does it mean to you? Apparently you can make sense of it, since you keep going on about it. Why don't you answer the question: "what does it mean?"?
I already told you what it means: it is neither zero nor nonzero, because it is both. You evaded answering my question.
On 10/7/2018 at 11:10 AM, taeto said:Now it looks like you use "finite" to mean "nonzero".
no, I don't use finite to mean nonzero. 1/∞ is not finite.
On 10/7/2018 at 11:10 AM, taeto said:In the context of analysis, distance is given by Euclidean metric, and then the only distance that is not "finite" is identically 0. You end up with f(x+dx)−f(x)=0 always, independently of f and x .
You are ignoring an infinitely small distance dx, and it is not necessarily 0. Otherwise you end up with f'(x) = dy/dx = 0/0.
On 10/7/2018 at 11:10 AM, taeto said:The most positive I can say would be something like just forget about the actual meaning of dx .
That would not be a very good idea.
On 10/7/2018 at 11:10 AM, taeto said:In ordinary usage, the functions x and dx belong to different species; they do not allow being composed together by the binary operation of addition. It is something like trying to add a scalar to a 2-dimensional vector.
Are you trying to tell me that I can't perform the addition x + dx?
On 10/7/2018 at 11:10 AM, taeto said:The most you could do is to use your expression as a notational shorthand, which substitutes dx for h and removes the need to write the lim symbol.
I don't know why it should be only a notational shorthand without meaning. It sounds like forgetting, or ignoring, the problem.
On 10/7/2018 at 11:10 AM, taeto said:Maybe you can think of any advantages in doing so.
no, I can't think of any advantages in doing so.
0 
8 hours ago, studiot said:It helps to finish your sentences.
Alright:
How exact is your "If dx is a very small quantity then (dx)² is insignificant and may be ignored so we have"?
I thought that you would understand that my sentence continued below.
Quote: It also helps to answer the questions of others if you want your own answered.
Since I still don't know where you are coming from,
I suggest you read what has been the standard Oxford University text at this level for half a century:
An Introduction to the Infinitesimal Calculus
so does it tell exactly how small dx is?
0
calculating limits using infinitesimals
in Analysis and Calculus
Posted · Edited by 113
That's interesting, thanks for sharing. The main question is not answered though: "How many terms should I grab to be safe for every case? Why doesn't it suffice to take just the 1st nonzero term?"
Then they work with the limit:
\[\lim_{x \to 0} \frac{\tan(x) - \sin(x)}{x^3} \]
The answer is 1/2, using the limit calculator:
https://www.symbolab.com/solver/limit-calculator/\lim_{x\to0}\frac{\left(tan\left(x\right)-sin\left(x\right)\right)}{x^{^3}}
The limit calculator uses L'Hôpital three times and then plugs in the value 0.
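The same kind of numeric plug-in check works here, again with a small finite x rather than an infinitesimal (the sample point 1e-3 is an arbitrary choice):

```python
import math

def q(x):
    # (tan x - sin x) / x**3, whose limit as x -> 0 is 1/2
    return (math.tan(x) - math.sin(x)) / x**3

print(q(1e-3))  # close to 0.5
```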
What I was asking in my first post: is it possible to plug in an infinitesimal value? Is it possible to calculate limits using infinitesimals?