Where "calculus" came from



Posted (edited)

As far as I can discern, when inventing combinatorics I also found the product rule for differentiation and derivatives. Not "Isaac Newton" or some other name that I can't confirm ever existed. As for the scientific method, empirical evidence is based on observation, which I don't rely on as heavily as mathematics for separating fact from fiction.

I know I don't get anything out of sharing an arithmetic that can literally be used to earn a person money for cracking a password combination or solving any type of Rubik's cube. I should be retired for this alone. I only share it because I know some things that are potentially worth a lot more. Just realize that whenever I post one of these, there's an implied expectation somewhere in there for doing so.

[Attached image: aaa.jpg]

Edited by ImplicitDemands

Posted (edited)
2 hours ago, ImplicitDemands said:

As far as I can discern, when inventing combinatorics I also found the product rule for differentiation and derivatives. Not "Isaac Newton" or some other name that I can't confirm ever existed. As for the scientific method, empirical evidence is based on observation, which I don't rely on as heavily as mathematics for separating fact from fiction.

I know I don't get anything out of sharing an arithmetic that can literally be used to earn a person money for cracking a password combination or solving any type of Rubik's cube. I should be retired for this alone. I only share it because I know some things that are potentially worth a lot more. Just realize that whenever I post one of these, there's an implied expectation somewhere in there for doing so.

[Attached image: aaa.jpg]

Leibniz and Newton are thought to have independently developed calculus at around the same time, though I think it was Leibniz who published first. As so often with science and mathematics, the idea was germinating at the time, and trying to determine who got there "first" is rather debatable and of limited value. Both men most certainly existed, though.

I can't make sense of the rest of your post.    

Edited by exchemist

Posted (edited)
8 hours ago, exchemist said:

Leibniz and Newton are thought to have independently developed calculus at around the same time, though I think it was Leibniz who published first. As so often with science and mathematics, the idea was germinating at the time, and trying to determine who got there "first" is rather debatable and of limited value. Both men most certainly existed, though.

I can't make sense of the rest of your post.    

Well that doesn't tell me how they came up with the product rule.

As for my post, I was showing how to break down all the 10-digit passwords below the obvious 10^10 possible combinations so one could work with them. I even made the task more difficult by having the number 1 repeated, meaning there would have been two 1s in any arrangement of 10 different digits.

I had to take the zero out and find the 1-9 combinations. Then, I would have eliminated all the combinations with 9 in them, and I would have put 0 back in to replace the nine. Therefore, when computing for zero you'd have one set of 9^9 combinations where 9 is included and a second set of 9^9 combinations where 0 is included, if 1 weren't repeated twice. So 123456789 {0} is the equivalent of a square exponential; if you did the product rule it would be like base^2, and then doing the process shown and swapping the 9 and the 0 would be like performing the product rule on base^2, that is (2 x base)^(2-1) = 2 x base.

In essence, the difference between the 10^10 combinations of all the digits on your keyboard and the combinations you get when you sum up 123456789 {0} & 123456780 {9} is the same as the difference between base^2 and 2 x base (its derivative).

A Rubik's cube would be treated the same, with 54 digits instead of 10. 

Edited by ImplicitDemands

Posted (edited)
1 hour ago, ImplicitDemands said:

Well that doesn't tell me how they came up with the product rule.

I don't know precisely how Newton or Leibniz obtained the product rule of differential calculus, but it seems rather easy to obtain to me: 


[math]\text{By definition:}[/math]

[math]\dfrac{df(x)}{dx} \buildrel \rm def \over = \displaystyle \lim_{h \to 0} \dfrac{f(x + h) - f(x)}{h}[/math]


[math]\text{Therefore:}[/math]

[math]\dfrac{df(x)g(x)}{dx} = \displaystyle \lim_{h \to 0} \dfrac{f(x + h) g(x + h) - f(x)g(x)}{h}[/math]

[math]= \displaystyle \lim_{h \to 0} \dfrac{f(x + h) g(x + h) - f(x) g(x + h) + f(x) g(x + h) - f(x)g(x)}{h}[/math]

[math]= \displaystyle \lim_{h \to 0} \dfrac{f(x + h) g(x + h) - f(x) g(x + h)}{h} + \displaystyle \lim_{h \to 0}  \dfrac{f(x) g(x + h) - f(x)g(x)}{h}[/math]

[math]= \displaystyle \lim_{h \to 0} \dfrac{f(x + h) g(x) - f(x) g(x)}{h} + \displaystyle \lim_{h \to 0}  \dfrac{f(x) g(x + h) - f(x)g(x)}{h}[/math]

[math]= \left(\displaystyle \lim_{h \to 0} \dfrac{f(x + h) - f(x)}{h}\right) g(x) + f(x) \left(\displaystyle \lim_{h \to 0}  \dfrac{g(x + h) - g(x)}{h}\right)[/math]

[math]= \dfrac{df(x)}{dx} g(x) + f(x) \dfrac{dg(x)}{dx}[/math]
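As a quick numerical sanity check of the same rule, here is a minimal sketch; the sample functions f = x^3 and g = sin x, the point x = 1.3, and the step size are arbitrary illustrative choices, not part of the derivation above:

# Numerical check of the product rule using a small central finite difference.
import math

def f(x):
    return x**3

def g(x):
    return math.sin(x)

def derivative(func, x, h=1e-6):
    # Central-difference approximation of d(func)/dx at x
    return (func(x + h) - func(x - h)) / (2 * h)

x = 1.3
lhs = derivative(lambda t: f(t) * g(t), x)                # d(fg)/dx computed directly
rhs = derivative(f, x) * g(x) + f(x) * derivative(g, x)   # product rule
print(lhs, rhs)  # the two numbers agree to several decimal places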

 

 

Edited by KJW

I was just about to explain how calculus arose out of the limiting value of a function as the change approaches zero when you posted your excellent response, including the product rule, no less.


36 minutes ago, KJW said:

I don't know precisely how Newton or Leibniz obtained the product rule of differential calculus

I didn't find it, but since most of Newton's derivations in the Principia were geometrical, I assume that the product rule's derivation was geometrical as well. It could be as simple as this:

[Attached image]
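Presumably the picture is some version of the usual rectangle argument; a sketch of that reasoning (my paraphrase, not a claim about Newton's actual manuscripts) in the notation of the post above:

[math]\Delta(fg) = (f + \Delta f)(g + \Delta g) - fg = f\,\Delta g + g\,\Delta f + \Delta f\,\Delta g[/math]

[math]\dfrac{d(fg)}{dx} = \displaystyle \lim_{\Delta x \to 0} \dfrac{f\,\Delta g + g\,\Delta f + \Delta f\,\Delta g}{\Delta x} = f\,\dfrac{dg}{dx} + g\,\dfrac{df}{dx}[/math]

since the [math]\Delta f\,\Delta g[/math] term vanishes faster than [math]\Delta x[/math].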


I found this article titled "Calculus Before Newton and Leibniz - An in-depth article on the history of calculus". Here is the introductory section of the article:

 

The Development of Calculus

History has a way of focusing credit for any invention or discovery on one or two individuals in one time and place. The truth is not as neat. When we give the impression that Newton and Leibniz created calculus out of whole cloth, we do our students a disservice. Newton and Leibniz were brilliant, but even they weren’t capable of inventing or discovering calculus.

The body of mathematics we know as calculus developed over many centuries in many different parts of the world, not just western Europe but also ancient Greece, the Middle East, India, China, and Japan. Newton and Leibniz drew on a vast body of knowledge about topics in both differential and integral calculus. The subject would continue to evolve and develop long after their deaths. What marks Newton and Leibniz is that they were the first to state, understand, and effectively use the Fundamental Theorem of Calculus. No two people have moved our understanding of calculus as far or as fast. But the problems that we study in calculus—areas and volumes, related rates, position/velocity/acceleration, infinite series, differential equations—had been solved before Newton or Leibniz was born.

It took some 1,250 years to move from the integral of a quadratic to that of a fourth-degree polynomial. But awareness of this struggle can be a useful reminder for us. The grand sweeping results that solve so many problems so easily (integration of a polynomial being a prime example) hide a long conceptual struggle. When we jump too fast to the magical algorithm and fail to acknowledge the effort that went into its creation, we risk dragging our students past that conceptual understanding.

This article explores the history of calculus before Newton and Leibniz: the people, problems, and places that are part of the rich story of calculus.

 

 


Posted (edited)

In my experience, the content in the OP was learned before the power rule, which is the basis of both the product rule and derivatives.

1 hour ago, KJW said:

integration

I found that through limits prior to learning (a/(b+1))^(b+1). You'd think, looking at the power rule a^b; f(a) = (ba)^(b-1), that an integral would be f(a) = (a/b)^(b+1), so where does that a/(b+1) come from? Before learning integration I learned R = r+1 to work on a mis-written derivation problem:

[Attached image: 27.jpg]

The diameter of the garden was actually 8"; the radius of 4" is in the bottom-left picture there. Anyway, the problem was admittedly mis-written. I learned why the integral adds one to the dividend, which had the same value as the exponent, from an error in the construction of a word problem.

If I had to guess I'd assume I was in a simulation where a learning computer feeds off what I figure out to create more arithmetic. https://en.wikipedia.org/wiki/Intelligence_amplification

Edited by ImplicitDemands

On 5/10/2024 at 10:28 AM, ImplicitDemands said:

I had to take the zero out and find the 1-9 combinations. Then, I would have eliminated all the combinations with 9 in them, and I would have put 0 back in to replace the nine. Therefore, when computing for zero you'd have one set of 9^9 combinations where 9 is included and a second set of 9^9 combinations where 0 is included, if 1 weren't repeated twice. So 123456789 {0} is the equivalent of a square exponential; if you did the product rule it would be like base^2, and then doing the process shown and swapping the 9 and the 0 would be like performing the product rule on base^2, that is (2 x base)^(2-1) = 2 x base.

In essence, the difference between the 10^10 combinations of all the digits on your keyboard and the combinations you get when you sum up 1123456789 {0} & 1123456780 {9} is the same as the difference between base^2 and 2 x base (its derivative).

Let's take 4 unique digits for example: 1234

(1123, 2113, 2311, 1231, 1132, 3112, 3211, 1321) {4}

+

(1124, 2114, 2411, 1241, 1142, 4112, 4211, 1421) {3}

= 16 different combinations. 

4^4 unique combinations has a derivative of 16^3; 16^3/4^4 = 16


I mean the power rule has to be a derivative of the combinatorics above, where there is only one repeating number that repeats once; in that combinatorics you have to leave one number out to find the number of combinations and then swap it for one other number. For 4 digits you could have 4 repeating once, twice, thrice, or all numbers could be 4, and the same for 3, 2, 1; altogether, including the 4 x 24 value, you have a total of 4^4 possible configurations. But when only one of those repeats once, it's only 16 possible configurations. The derivative 16^3 = 16 x 4^4.

You can clearly see that the formula where the power rule comes into play is log_n(x), where x = c times d, the total number of possible configurations is d = a^b, and n = a times b. Finally, c = the total number of possible configurations with one repeating number that repeats once, as shown above. From that you can recognize the power rule a^b = (a times b)^(b-1), and its integral a^b = (a/(b+1))^(b+1), which has nothing to do with the economic optimization problem I had written out about the garden other than it using the power rule one time.

So if you add up all those functions in the OP picture you should get 10^18/10^10 = 10^8, log100(10^18) = 9; (10^10)' = 100^9

 

Edited by ImplicitDemands

3 hours ago, ImplicitDemands said:

I mean the power rule has to be a derivative of the combinatorics above, where there is only one repeating number that repeats once; in that combinatorics you have to leave one number out to find the number of combinations and then swap it for one other number. For 4 digits you could have 4 repeating once, twice, thrice, or all numbers could be 4, and the same for 3, 2, 1; altogether, including the 4 x 24 value, you have a total of 4^4 possible configurations. But when only one of those repeats once, it's only 16 possible configurations. The derivative 16^3 = 16 x 4^4.

You can clearly see that the formula where the power rule comes into play is log_n(x), where x = c times d, the total number of possible configurations is d = a^b, and n = a times b. Finally, c = the total number of possible configurations with one repeating number that repeats once, as shown above. From that you can recognize the power rule a^b = (a times b)^(b-1), and its integral a^b = (a/(b+1))^(b+1), which has nothing to do with the economic optimization problem I had written out about the garden other than it using the power rule one time.

So if you add up all those functions in the OP picture you should get 10^18/10^10 = 10^8, log100(10^18) = 9; (10^10)' = 100^9

 

Obviously the OP was wrong about calculating the configurations, but given 10 x ((25 x 10!)/4!), 1123456789 {0} is like 1/100th of the possible combinations with a repeating number, so in theory there's plenty of room left for the rest of the repeating configurations.

And this is why the derivative of an exponent maximizes its dimensions. 

Edited by ImplicitDemands

 

[Attached image: eee.jpg]

Obviously if anyone tried 5 digits they would have seen why it is an incompatible way to prove the power rule.

6's log links to 2.

Let's try it for the next lower even number, 2 digits. 11 {2} has a value of 1 configuration; x2 is 2.

((2x2)^(2-1))/2^2 = 1. 

If I did 8 digits (64^7/8^8=262144), I'm betting that since log64(262144)=3 you wouldn't have to do it with 8 again either, because 8's log links to 3. If you had 3 digits

113

131

311 = 3 and (9^2)/(3^3)=3

Let me see if I didn't do 4 wrong:

1123 {4}

1. 1123

2. 1132

3. 1231

4. 1321

5. 2311

6. 3211

7. 3112

8. 2113

9. 1213

10. 1312

11. 3121

12. 2131

1124 {3}

13. 1124

etc...

12 x 2 = 24, so yeah, it wouldn't work like I said; with 1 repeating it had 4 extra configurations in it. And even if I hadn't multiplied by 2, 16^3/4^4 = 16, so 4^4 x 12 wouldn't do it.
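For what it's worth, these counts are easy to check by brute force. A minimal Python sketch (my own check, not part of the argument above) that counts distinct arrangements of a digit string, reproducing the 1, 3, and 12 found earlier in the thread:

# Brute-force count of distinct arrangements of a digit string.
from itertools import permutations

def distinct_arrangements(digits):
    return len(set(permutations(digits)))

print(distinct_arrangements("11"))    # 1  (the 2-digit case, 11 {2})
print(distinct_arrangements("113"))   # 3  (113, 131, 311)
print(distinct_arrangements("1123"))  # 12 (matches the list above)
print(distinct_arrangements("1124"))  # 12, giving 12 + 12 = 24 in total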

It seems that Newton could have come up with the power rule by using one repeating digit in a combinatorics equation where the logarithms link, like 3 and 8, or 2 and 6. 

Edited by ImplicitDemands

100^9/10^10 = 100000000 -> log100(100000000) = 4

Yeah I'm betting this method wouldn't work for 10 digits since it links to 4 which didn't work. Maybe 12 digits?

144^11/12^12 = 61917364224 -> log144(61917364224) = 5

Nope!

14? 196^13/14^14 = 56693912375296 -> log196(56693912375296) = 6

✓!

So 2, 3, 6, 8, & 14 all check out for this method of placing one repeating digit into a combinatorics problem which I'm betting is where the power rule came from. 
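The arithmetic in those ratios is straightforward to verify in a couple of lines; a quick sketch of my own that only checks the quoted numbers, not the interpretation placed on them:

# Check the quoted values of (n^2)^(n-1) / n^n and their logs to base n^2.
import math

for n in (6, 8, 10, 12, 14):
    ratio = (n * n) ** (n - 1) // n ** n      # e.g. 196^13 / 14^14 for n = 14
    print(n, ratio, math.log(ratio, n * n))   # e.g. 14 56693912375296 6.0 (up to rounding)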

 

Edited by ImplicitDemands

On 5/11/2024 at 5:28 AM, ImplicitDemands said:

I found that through limits prior to learning (a/(b+1))^(b+1). You'd think, looking at the power rule a^b; f(a) = (ba)^(b-1), that an integral would be f(a) = (a/b)^(b+1), so where does that a/(b+1) come from?

One thing about the integral of [math]x^n[/math] that I find interesting is the case of [math]n = -1[/math]:


[math]\displaystyle \int x^n\, dx = \begin{cases}\ \dfrac{x^{n+1}}{n+1} + C & \text{if } n \neq -1 \\ \\ \ \log(x) + C &\text{if } n = -1 \end{cases}[/math]


Note that:

[math]\displaystyle \int x^{-1 + \varepsilon} \ dx = \dfrac{x^{\varepsilon}}{\varepsilon} + C[/math]

for all [math]\varepsilon \neq 0[/math] regardless of how small [math]\varepsilon[/math] is.

Furthermore, note that [math]x^{-1 - \varepsilon}[/math] can be deformed to [math]x^{-1 + \varepsilon}[/math] without discontinuity at [math]x^{-1}[/math]. Therefore, one would expect that:

[math]\displaystyle \int x^{-1 - \varepsilon} \, dx[/math]

can be deformed to:

[math]\displaystyle \int x^{-1 + \varepsilon} \ dx[/math] 

without discontinuity at:

[math]\displaystyle \int x^{-1} \ dx[/math]

even though the above formula seems to indicate that this is not the case.

But let's consider the definite integral:

[math]\displaystyle \lim_{\varepsilon \to 0}  \displaystyle \int_{1}^{x} u^{-1 + \varepsilon} \ du[/math]

[math]= \displaystyle \lim_{\varepsilon \to 0} \dfrac{x^{\varepsilon} - 1}{\varepsilon}[/math]

[math]= \log(x)[/math]

Thus, it can be seen that the definite integral of [math]x^{-1 + \varepsilon}[/math] is continuous with respect to [math]\varepsilon[/math] at [math]x^{-1}[/math].
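A quick numerical illustration of that continuity (a minimal sketch; the sample point and the values of epsilon are arbitrary):

# The definite integral of u^(-1+eps) from 1 to x equals (x^eps - 1)/eps;
# it approaches log(x) as eps shrinks.
import math

x = 2.5  # arbitrary sample point
for eps in (0.1, 0.01, 0.001, 1e-6):
    print(eps, (x**eps - 1) / eps)
print(math.log(x))  # the values above converge to this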


Interestingly, this notion can be extended to the definite integral of [math]\log(x)[/math] as follows:

[math]\displaystyle \int_{1}^{x} \log(v) \ dv[/math]

[math]= x \log(x) - x + 1[/math]


And:

[math]\displaystyle \lim_{\varepsilon \to 0} \displaystyle \int_{1}^{x} \displaystyle \int_{1}^{v} u^{-1 + \varepsilon} \ du \ dv[/math]

[math]= \displaystyle \lim_{\varepsilon \to 0} \displaystyle \int_{1}^{x} \dfrac{v^{\varepsilon} - 1}{\varepsilon} \ dv[/math]

[math]= \displaystyle \lim_{\varepsilon \to 0} \dfrac{x^{\varepsilon + 1}}{\varepsilon (\varepsilon + 1)} - \dfrac{x}{\varepsilon} - \dfrac{1}{\varepsilon (\varepsilon + 1)} + \dfrac{1}{\varepsilon}[/math]

[math]= \displaystyle \lim_{\varepsilon \to 0} \dfrac{x^{\varepsilon + 1}}{\varepsilon (\varepsilon + 1)} - \dfrac{x (\varepsilon + 1)}{\varepsilon (\varepsilon + 1)} - \dfrac{1}{\varepsilon (\varepsilon + 1)} + \dfrac{(\varepsilon + 1)}{\varepsilon (\varepsilon + 1)}[/math]

[math]= \displaystyle \lim_{\varepsilon \to 0} \dfrac{x^{\varepsilon + 1}}{\varepsilon (\varepsilon + 1)} - \dfrac{x \varepsilon}{\varepsilon (\varepsilon + 1)} - \dfrac{x}{\varepsilon (\varepsilon + 1)} - \dfrac{1}{\varepsilon (\varepsilon + 1)} + \dfrac{\varepsilon}{\varepsilon (\varepsilon + 1)} + \dfrac{1}{\varepsilon (\varepsilon + 1)}[/math]

[math]= \displaystyle \lim_{\varepsilon \to 0} \dfrac{x^{\varepsilon + 1}}{\varepsilon (\varepsilon + 1)} - \dfrac{x \varepsilon}{\varepsilon (\varepsilon + 1)} - \dfrac{x}{\varepsilon (\varepsilon + 1)} + \dfrac{\varepsilon}{\varepsilon (\varepsilon + 1)}[/math]

[math]= \displaystyle \lim_{\varepsilon \to 0} \dfrac{x^{\varepsilon + 1}}{\varepsilon} - \dfrac{x \varepsilon}{\varepsilon} - \dfrac{x}{\varepsilon} + \dfrac{\varepsilon}{\varepsilon}[/math]

[math]= \displaystyle \lim_{\varepsilon \to 0} \dfrac{x^{\varepsilon + 1}}{\varepsilon} - x - \dfrac{x}{\varepsilon} + 1[/math]

[math]= x \Big(\displaystyle \lim_{\varepsilon \to 0} \dfrac{x^{\varepsilon} - 1}{\varepsilon}\Big) - x + 1[/math]

[math]= x \log(x) - x + 1[/math]
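The same kind of numerical check can be applied to this limit (again a sketch with arbitrary sample values):

# Check that x^(eps+1)/(eps*(eps+1)) - x/eps - 1/(eps*(eps+1)) + 1/eps
# approaches x*log(x) - x + 1 for small eps.
import math

x, eps = 2.5, 1e-5  # arbitrary sample point and a small eps
approx = x**(eps + 1) / (eps * (eps + 1)) - x / eps - 1 / (eps * (eps + 1)) + 1 / eps
print(approx, x * math.log(x) - x + 1)  # the two values nearly coincide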


However, if one starts with [math]x^{\varepsilon}[/math] and forms the derivative:

[math]\displaystyle \lim_{\varepsilon \to 0} \dfrac{dx^{\varepsilon}}{dx}[/math]

[math]= \displaystyle \lim_{\varepsilon \to 0} \varepsilon x^{\varepsilon - 1}[/math]

[math]= 0[/math]

If we consider [math]\varepsilon[/math] to be small but not infinitesimal, then for the integral, we start with [math]x^{\varepsilon - 1}[/math] and end with [math]\dfrac{x^{\varepsilon}}{\varepsilon}[/math], whereas for the derivative, we start with [math]x^{\varepsilon}[/math] and end with [math]\varepsilon x^{\varepsilon - 1}[/math]. That is, the derivative is smaller than the integral by a factor of [math]\varepsilon[/math], becoming zero in the limit. Thus, although repeated integration starting from [math]x^{\varepsilon - 1}[/math] can use the power function integration formula, the resulting sequence of functions is distinct from power functions obtained by starting from, for example, [math]x^0[/math].
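A short numerical illustration of that factor of [math]\varepsilon[/math] (sample values are arbitrary):

# As eps shrinks: eps * x^(eps-1) (the derivative of x^eps) tends to 0,
# while x^eps / eps (from integrating x^(eps-1)) grows without bound.
x = 2.5
for eps in (0.1, 0.01, 0.001):
    print(eps, eps * x**(eps - 1), x**eps / eps)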

 

 

Edited by KJW

8 hours ago, KJW said:

One thing about the integral of [math]x^n[/math] that I find interesting is the case of [math]n = -1[/math]:

[...]

If we consider [math]\varepsilon[/math] to be small but not infinitesimal, then for the integral, we start with [math]x^{\varepsilon - 1}[/math] and end with [math]\dfrac{x^{\varepsilon}}{\varepsilon}[/math], whereas for the derivative, we start with [math]x^{\varepsilon}[/math] and end with [math]\varepsilon x^{\varepsilon - 1}[/math]. That is, the derivative is smaller than the integral by a factor of [math]\varepsilon[/math], becoming zero in the limit. Thus, although repeated integration starting from [math]x^{\varepsilon - 1}[/math] can use the power function integration formula, the resulting sequence of functions is distinct from power functions obtained by starting from, for example, [math]x^0[/math].

 

 

I misspoke. The bottom line is there is no difference between the integral and the derivative: the derivative of a^b multiplies by b and subtracts one from the exponent, while the integral doesn't change the original b value that gets subtracted in the derivation, which is why it has to divide by b+1. Bottom line, I falsely claimed it had something to do with limits. Really it is a product of combinatorics with one repeated digit.

In my previous post I wrote that it doesn't work for the combinatorial of 4 where you have one digit repeat; I believe this is because 4 is a square. It worked for 2, 3, and 6. But I also believe it should work for 5, 8, 12 and 14 digits because none of those are squares. It probably wouldn't work for 9. The bottom line is that the number of configurations where there is one repeated digit is exactly equal to the dividend of the derivative of a number of digits and that same number of digits, if that number is not a square; this has been repeatedly proved in this topic. It is so uncanny that, just before taking calculus, I picked one repeated digit to start examining password combinations using math, and that's what worked with the power rule (without which there would be no calculus), that I'm seriously questioning the nature of my reality.

 

Edited by ImplicitDemands
