
Calculus Fun Challenge


JustStuit


In order to promote the interest, welfare, and enjoyment of calculus, I propose we start a random calculus problem thread. Just answer the previous question and provide a new one, however easy or hard you want (hopefully someone can answer it). Have fun :D

 

I'll start with

 

[math]\int{\sec{x}^3dx}[/math]

(I think this one is fun and different)


[math]\int{\sec^3{x}\,dx}=F[/math]

[math]u=\sec(x)[/math]

[math]du=\sec(x)\tan(x)\,dx[/math]

[math]dv=\sec^2(x)\,dx[/math]

[math]v=\tan(x)[/math]

[math]F=\sec(x)\tan(x)-\int{\tan^2(x)\sec(x)\,dx}[/math]

Using the identity [math]\tan^2(x)=\sec^2(x)-1[/math]:

[math]F=\sec(x)\tan(x)-\int{\sec^3(x)\,dx}+\int{\sec(x)\,dx}[/math]

[math]F=\sec(x)\tan(x)-F+\ln|\sec(x)+\tan(x)|[/math]

[math]2F=\sec(x)\tan(x)+\ln|\sec(x)+\tan(x)|[/math]

[math]F=\frac{\sec(x)\tan(x)}{2}+\frac{\ln|\sec(x)+\tan(x)|}{2}+C[/math]
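As a sanity check of the result (not part of the original working): the short Python sketch below compares a numerical derivative of the antiderivative above against sec^3(x) at a few arbitrary test points.

import math

def F(x):
    # antiderivative found above: (sec x tan x + ln|sec x + tan x|) / 2
    sec, tan = 1 / math.cos(x), math.tan(x)
    return (sec * tan + math.log(abs(sec + tan))) / 2

def integrand(x):
    return 1 / math.cos(x)**3   # sec^3(x)

h = 1e-6
for x in (0.3, 0.7, 1.1):       # arbitrary points inside (-pi/2, pi/2)
    numeric = (F(x + h) - F(x - h)) / (2 * h)   # central-difference derivative
    print(x, numeric, integrand(x))             # the two values should agree closely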

 

Thank you very much for giving me the hint. I had completely forgotten how to do this.


Two comments:

 

1. is that (sec x)^3 or sec(x^3)?

2. if this is what people call calculus then I agree with insane_alien.

 

Here's an interesting calculus problem:

 

work out the sum

 

[math] \sum_{n=1,3,5,\ldots} \frac{ (-1)^{(n-1)/2}}{n^2}[/math]

 

You will need to learn about Fourier series. (Why is this interesting? Because it is surprising that you can work out this sum at all, but you can. Similarly, you could try proving that the sum of 1/n^2 is whatever fraction of pi it is supposed to be.)
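If you just want a numerical value to compare an answer against, here is a quick partial-sum sketch in Python (the truncation point is arbitrary):

# partial sums of 1 - 1/9 + 1/25 - 1/49 + ...  (odd n, alternating signs)
total = 0.0
for k in range(200000):          # truncation chosen arbitrarily
    n = 2 * k + 1                # odd n
    total += (-1)**k / n**2      # (-1)^((n-1)/2) = (-1)^k
print(total)                     # roughly 0.91596...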


Matt, most likely insane_alien will dislike what you call calculus even more :D .

 

Well, let's call the stuff in this thread "entry-level calculus", but could you elaborate on what you think real calculus is?

 

--------------------------------------------------------------------------------------------

 

1. is that (sec x)^3 or sec(x^3)?

I bet the first is meant. Solving that is easy in principle, but tedious.

 

[math]\frac{dx}{\cos^3(x)} = \frac{\cos(x)\,dx}{\cos^4(x)} = \frac{d\sin(x)}{(1-\sin^2(x))^2}[/math]

 

By taking u = sin(x), the problem reduces to integrating [math]1/(1-u^2)^2[/math], which in turn can be written as [math]A/(1+u) + B/(1+u)^2 + C/(1-u) + D/(1-u)^2[/math].
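The coefficients A, B, C, D can be found by hand; they can also be checked symbolically. A minimal sketch, assuming sympy is available:

import sympy as sp

u = sp.symbols('u')
# partial-fraction decomposition of 1/(1 - u^2)^2
print(sp.apart(1 / (1 - u**2)**2, u))
# and the resulting antiderivative in u (with u = sin x)
print(sp.integrate(1 / (1 - u**2)**2, u))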

 

The second one cannot be expressed in terms of elementary functions. Evaluating such integrals requires numerical quadrature algorithms, and then only approximate numerical values can be obtained.

 

----------------------------------------------------------------------------------------------

 

Matt, I have a nice one for you.

 

The totient function phi(n), defined for positive integer arguments, tells how many numbers in the set 1, 2, 3, ..., n-1 are relatively prime to n. Two positive integers x, y are relatively prime if they have no common factor other than 1, i.e. gcd(x, y) = 1.

 

E.g. phi(10) = 4. The numbers 1, 3, 7, 9 are relatively prime to 10.

phi(15) = 8. The numbers 1, 2, 4, 7, 8, 11, 13, 14 are relatively prime to 15.

phi(p) = p - 1 when p is a prime number: all numbers 1 ... p-1 are relatively prime to p.

 

Define the sum S(N): S(N) = phi(1) + phi(2) + phi(3) + ... + phi(N-1) + phi(N)

 

What is the limit of [math]S(N) / N^2[/math] when N goes to infinity? This limit is defined and has a very nice and surprising value.
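If you want to experiment numerically before hunting for the limit, a simple sieve computes phi(n) for all n up to N in one pass. A sketch in Python (the choice of N is arbitrary):

def totient_sieve(N):
    # phi(1..N): for each prime p, multiply its multiples by (1 - 1/p)
    phi = list(range(N + 1))
    for p in range(2, N + 1):
        if phi[p] == p:                  # p is prime (still untouched)
            for m in range(p, N + 1, p):
                phi[m] -= phi[m] // p
    return phi

N = 10**6                                # arbitrary; larger N gets closer to the limit
phi = totient_sieve(N)
S = sum(phi[1:])
print(S / N**2)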


Two comments:

 

1. is that (sec x)^3 or sec(x^3)?

 

Sorry, I meant the first but typed it wrong, I think. It doesn't matter too much though.

Also, I've only completed AP Calc AB (I'm still in high school), so my calculus will be at a low level. I didn't mean to restrict the thread, though; post problems of any level (maybe I'll learn something).


Juststuit:

 

The difference is very important, juststuit. I imagine that when the integrand is sec(x^3), the integral is impossible to express in elementary functions.

 

Woelen. I haven't seen that question before but the answer is e. It is always e. Or pi. Or possibly log(2), or something. (The limit seems to be 0.307...something, though I've not bothered to figure out what that is or why at the moment).

 

Real calc, since someone asked, would at least need the definition that f is continuous if and only if f^-1 sends open sets to open sets.


Woelen. I haven't seen that question before but the answer is e. It is always e. Or pi. Or possibly log(2), or something. (The limit seems to be 0.307...something, though I've not bothered to figure out what that is or why at the moment).

Matt, the limit is approximately 0.30396355. I did the computation for N = 10^9, which took hours and hours on my old 300 MHz laptop ;).

 

I have more questions of this kind, but they are not at all easy to answer. I posted one of them more than a year ago, but got no response :-(.

 

http://www.scienceforums.net/showthread.php?t=12017

 

In that thread I am interested in the curve's equation, but I also noticed that the leftmost zero (on the real axis, for an odd truncation), divided by N, has a limiting value as well (approximately -0.76).

 

These are of course just mathematical recreations, with no real practical application.

 

-------------------------------------

 

I have now responded to Matt, but of course others are invited as well :).


Matt, the limit is approximately 0.30396355. I did the computation for N = 10^9, which took hours and hours on my old 300 MHz laptop ;).

 

Why would you bother with the computation? At any rate, if my memory of error terms serves*, S(N)/N^2 should be within 1/N of the actual limit (the error is O(1/N), I think 1 worked as a constant, but I'm not positive).

 

 

*Edit: my memory of error terms failed me. O(log(N)/N) is easy enough to prove; I had thought the log could be removed with more work. It can be reduced a little bit, but apparently not to something less than log log log(N), so a constant is right out.

 

 

see http://www.mai.liu.se/~halun/complex/taylor/


Why would you bother with the computation?

Just out of curiosity. The totient function is a very irregular function, and I wondered what its sum would be for large N. The prime numbers are distributed with density 1/log(p), and this of course affects the sum of phi(n). I'll see if I can find an analytic expression for that sum; I'm quite sure there is something, but it is not as simple as Matt suggests: it is not linear in pi, e, log(2), sqrt(2), or any other well-known number.

 

Shmoe, that link you provided about the truncated exp(x) series is great. I have been playing around with this a lot and also did numerical computations, finding a polynomial approximation of the curve on which the roots reside, but what you gave is really great. Do you also have the proof mentioned on that site? The person who proved this must be a great mathematician.


Isn't 0.30396355.... a very nice value :rolleyes: ? It is surprising to me, because I expected S(N) to be sub-quadratic in N for large N. So many values of phi(N) are MUCH smaller than N.

 

But I must admit that now, when I do some analysis on this function, it becomes less surprising to me. There are many values of phi(n) which are much smaller than n, but the majority of them are of order O(n). You know that the sum 1+2+3+...+N is of order N^2, so it becomes less surprising that S(N) is also of order N^2, but with a coefficient less than 0.5.


Just out of curiosity. The totient function is a very irregular function, and I wondered what its sum would be for large N. The prime numbers are distributed with density 1/log(p), and this of course affects the sum of phi(n). I'll see if I can find an analytic expression for that sum; I'm quite sure there is something, but it is not as simple as Matt suggests: it is not linear in pi, e, log(2), sqrt(2), or any other well-known number.

 

Ahh, so you don't have the answer. I had thought you did, for the same reason matt did, when you said "This limit is defined and has a very nice and surprising value". You can compute values as high as you like; that does not show the limit exists! And no, I wouldn't consider some as-yet-unidentified, randomish-looking string of digits 'nice' in any way.

 

From your last post... phi(n) is O(n); you trivially have phi(n) < n.

 

To do this problem, use [math]\phi(n)=n\sum_{d|n}\frac{\mu(d)}{d}[/math] where mu(n) is the mobius function (the sum is over divisors d of n). Change order of summation. You'll also need to know that [math]1/\zeta(s)=\sum_{n=1}^{\infty}\mu(n)n^{-s}[/math] when the real part of s>1.
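As a brute-force sanity check of the identity in that hint (not part of the original post; the range tested is arbitrary):

from math import gcd

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def mu(n):
    # Mobius function by trial division: 0 if a square of a prime divides n,
    # otherwise (-1)^(number of distinct prime factors)
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

# check phi(n) = n * sum over d|n of mu(d)/d for small n
for n in range(1, 31):
    rhs = n * sum(mu(d) / d for d in divisors(n))
    assert round(rhs) == phi(n)
print("identity verified for n = 1..30")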

 

The limit is indeed a 'nice' value, as matt was guessing (expressible in terms of naturally occurring, well-known constants), and it's not unrelated to the question he posed earlier (another reason I thought you might have known the answer!).

 

 

 

I don't have the proof about the Taylor polynomials of e^x, but you should be able to find it with his name and the year given. Did you try MathSciNet?


For very large n, phi(n) behaves more and more like n. I was misled by the first hundred or so values of phi(n). Among these, there are values which are MUCH smaller than n, and they are quite numerous. The lowest value is obtained when n is a product of all the different small primes.

 

For far larger n, however, the numbers which are products of all the different small primes become very sparse. So actually phi(n) on average is of order O(n), and it becomes less and less surprising that S(N) is of order N^2. What still remains, of course, is the actual value. It is still interesting to see whether this is just some weird constant without meaning, or some nice expression of well-known numbers. If the latter is the case, then there is some interesting underlying structure. I have done no actual research on this yet; I just did some computations out of curiosity. I'll delve more into this and the story will continue...


For very large n, phi(n) behaves more and more like n. I was misled by the first hundred or so values of phi(n). Among these, there are values which are MUCH smaller than n, and they are quite numerous. The lowest value is obtained when n is a product of all the different small primes.

 

For far larger n, however, the numbers which are products of all the different small primes become very sparse. So actually phi(n) on average is of order O(n), and it becomes less and less surprising that S(N) is of order N^2. What still remains, of course, is the actual value.

 

You are using O(n) in a non-standard way, or at least not one I'm familiar with, if you think that saying phi(n) is on average of order O(n) is anything but a trivial statement. I guess you mean that phi(n) is "close" to n for "lots" of values of n? This, or a precise statement of it, is essentially equivalent to proving your limit is nonzero.

 

We can find the 'average' value for phi(n)/n (average in an asymptotic kind of sense). I'll give some details once you've resolved your limit.

 

 

It is still interesting to see whether this is just some weird constant without meaning, or some nice expression of well-known numbers. If the latter is the case, then there is some interesting underlying structure. I have done no actual research on this yet; I just did some computations out of curiosity. I'll delve more into this and the story will continue...

 

 

I'm telling you it is the latter case! Everything you need is in my last post. I can fill out the details if you like, but I'll wait until you ask for them.


Shmoe,

 

With your first hint, I found the following.

 

[math]\phi(n) = \sum_{d|n}\mu(d)\,(n/d)[/math]

 

I write this as [math]\phi(n) = \sum_{dm = n} m\,\mu(d)[/math]

 

My sum S(N) can be written as [math]S(N) = \sum_{n=1}^N\sum_{dm = n} m\,\mu(d)[/math]

 

Now, I write this out for each n

 

mu(1)

mu(2) + 2mu(1)

mu(3) + 3mu(1)

mu(4) + 2mu(2) + 4mu(1)

mu(5) + 5mu(1)

mu(6) + 2mu(3) + 3mu(2) + 6mu(1)

...

...

 

This can be written as a sum of sums

 

[math]S(N) = \mu(1)\sum_{k=1}^{N}k + \mu(2)\sum_{k=1}^{N/2}k + \mu(3)\sum_{k=1}^{N/3}k + \ldots + \mu(N)[/math]

 

In fact, the upper limits of the sums are not N/2, N/3, etc., but the floor of these. This is a little quirk in my proof; for now I assume it only affects the order of the error term, and for very large N the effect will be negligible. I admit this is a point that needs additional refinement, but for now I want to go on with the main outline of the computation.
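In fact, the rearrangement is exact as long as the floors are kept; only replacing floor(N/q) by N/q introduces an error. A quick check in Python (not part of the original derivation; N is arbitrary):

def totient_sieve(N):
    phi = list(range(N + 1))
    for p in range(2, N + 1):
        if phi[p] == p:                          # p is prime
            for m in range(p, N + 1, p):
                phi[m] -= phi[m] // p
    return phi

def mu(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

N = 500                                          # arbitrary
S = sum(totient_sieve(N)[1:])
T = lambda m: m * (m + 1) // 2                   # 1 + 2 + ... + m
rearranged = sum(mu(q) * T(N // q) for q in range(1, N + 1))
print(S, rearranged)                             # identical when the floors are kept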

 

For very large N: [math]\sum_{k=1}^{N/q}k = (1/q^2)\sum_{k=1}^{N}k + \epsilon(N,q)[/math]

Here eps(N,q) is some error term depending on N and q. For now, I'll leave out the error analysis.

 

Now, S(N) can be written as

 

[math]S(N) = \left(\sum_{q=1}^{N}\frac{\mu(q)}{q^2}\right)\left(\sum_{k=1}^{N}k\right) + \text{error term}[/math]

 

Using your second hint about the summation of mu(n)/n^s: the first sum is 1/zeta(2) plus some error term depending on N.

The second sum is simply 0.5*N² + O(N).

 

For N very large, S(N) will be of the form (1/zeta(2))*0.5*N²

 

So, the nice value for the limit of S(N)/N² is 1/(2*zeta(2))

 

Of course, I understand that this is not the full proof. The error analysis is left out, but writing it all out here would require a very lengthy post. This is just the main outline.

 

--------------------------------------------------------------------

 

EDIT: The value is even nicer: zeta(2) = pi²/6, so the limit of S(N)/N² is 3/pi².

This value also perfectly matches the computation I mentioned in a previous post (the first 8 digits are the same).
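For the record, the closed form can be compared directly with the numerical value quoted earlier:

import math
print(3 / math.pi**2)    # 0.3039635509..., matching the computed 0.30396355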

 

WOW, this is really cool! Shmoe, you have taught me a lot of new and interesting things. This opens up a whole new area to study and learn about. Finding this "proof" took me almost 4 hours, plus some waking hours in the night :D, but it feels good that I could solve this problem with my limited knowledge of this stuff.


Good, I'm glad this had a decent answer. It ought to have done. The reason I was disappointed earlier was that I thought you had this nice proof in your head and were going to share it with us. Thank God shmoe was around to point you in the right direction straight away. I was thinking about asking you to consider that phi(p^n) = p^n - p^{n-1} and that phi is multiplicative, and then trying to rewrite the sum, which would have led nowhere fast.


That's essentially it, woelen; here's how you can deal with the error terms. I'll use [x] to denote the floor of a real number x. Note that [x] is always within 1 of x, so [x] = x + O(1). This gives [x]^2 = x^2 + O(x) and [x] = O(x).

 

[math]\sum_{k\leq N}\phi(k)=\sum_{k\leq N}k\sum_{dm=k}\frac{\mu(d)}{d}[/math]

 

This is the sum over all d and m where d*m <= N. Changing the order of summation and using the formula for the sum of the first n integers gives (we can pull the d out of the inner sum):

 

[math]=\sum_{d\leq N}\frac{\mu(d)}{d}\sum_{m\leq [N/d]}md=\sum_{d\leq N}\frac{\mu(d)}{d}d\left(\frac{[N/d]^2}{2}+\frac{[N/d]}{2}\right)[/math]

 

replacing [N/d] with N/d gives:

 

[math]=\frac{N^2}{2}\sum_{d\leq N}\frac{\mu(d)}{d^2}+O\left(\ \sum_{d\leq N}\frac{N}{d}\right)=\frac{N^2}{2}\sum_{d\leq N}\frac{\mu(d)}{d^2}+O\left(N\log{N}\right)[/math]

 

since [math]\sum_{d\leq N}d^{-1}=O(\log{N})[/math] (just bound the sum by the corresponding integral). Also bounding by an integral, you can show [math]\sum_{d> N}\mu(d)d^{-2}=O(N^{-1})[/math], which gives [math]\sum_{d\leq N}\mu(d)d^{-2}=\zeta(2)^{-1}+O(N^{-1})[/math]. Inserting this into the above, we get:

 

[math]\sum_{k\leq N}\phi(k)=\frac{N^2}{2\zeta(2)}+O\left(N+N\log{N}\right)=\frac{N^2}{2\zeta(2)}+O\left(N\log{N}\right)=\frac{3}{\pi^2}N^2+O(N\log{N})[/math]

 

which proves your limit. Now, if P(N) is the probability that two randomly selected integers less than or equal to N are relatively prime, what is the limit of P(N) as N-> infinity?
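A direct (exact, not Monte Carlo) count gives an experimental feel for that question; a small Python sketch, with N chosen arbitrarily:

from math import gcd

N = 2000                                             # arbitrary
coprime_pairs = sum(1 for n in range(1, N + 1)
                      for m in range(1, N + 1)
                      if gcd(n, m) == 1)
print(coprime_pairs / N**2)                          # this is P(N)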

 

You can likewise show that [math]\sum_{k\leq N}\frac{\phi(k)}{k}=\frac{6}{\pi^2}N+O(\log{N})[/math] using the same approach (this will be slightly simpler), or you can use partial summation to get from one result to the other (partial summation is the 'integration by parts' of summation; it's actually integration by parts for Riemann-Stieltjes integrals). This leads to the conclusion that the 'average value' of phi(n)/n is 6/pi^2.
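A quick numerical illustration of that average (a sketch, not part of the proof; N is arbitrary and the naive phi is slow but simple):

from math import gcd, pi

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

N = 5000                                       # arbitrary
avg = sum(phi(k) / k for k in range(1, N + 1)) / N
print(avg, 6 / pi**2)                          # the average approaches 6/pi^2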


Shmoe, thanks again for your explanation. The error analysis was still quite cumbersome for me and involved a lot of work. Your explanation is compact and easy to follow.

 

Now, if P(N) is the probability that two randomly selected integers less than or equal to N are relatively prime, what is the limit of P(N) as N-> infinity?

This does not seem a difficult question to me, but if I am missing something, please let me know. Take two functions A(N) and B(N). A selects all numbers less than or equal to N and counts the selected numbers; B selects a fraction of all numbers less than or equal to N, for every N.

 

The function A(N) is equal to N.

The function B(N) is equal to b*N, with 0 < b < 1.

 

Now, if we take A(1) + A(2) + ... + A(N), we arrive at ½N² + O(N).

If we take B(1) + B(2) + ... + B(N), we arrive at b*½N² + O(N).

The probability that a number less than N is selected equals 1 for function A; for function B it equals b*½N² / ½N², which equals b.

 

Now take B(N) equal to phi(N).

 

The sum now equals (3/pi²)*N² . The probability is equal to

(3/pi²)*N² / ½N² = 6/pi².

 

I did not check my answer numerically by taking a lot of numbers, but I have strong evidence that this is the correct answer. The answer somehow surprises me, however. Intuitively I would have expected this chance to be much lower. So, more than 60% of all pairs of randomly selected very large numbers are relatively prime to each other!

 

Actually, I have answered a slightly different question. You asked about picking two random numbers less than N. I answered the problem: given a number N, what is the chance that a random number less than N is relatively prime to N?

In my opinion these questions are equivalent. Suppose I pick two numbers N1, N2, both less than N, at random. These numbers are still of order N. Now take the larger of the two and determine the chance that the other one is relatively prime to it. This works because N1 and N2 are still of order N, and we may still assume the limiting situation for N --> infinity.


You've answered something that isn't quite equivalent.

 

(phi(1) + ... + phi(N)) / (1 + ... + N)

 

is the probability that a pair selected from A={(n,m)| 1<=m<=n<=N} has gcd(n,m)=1.

 

This isn't the same as selecting from the set B = {(n,m) | 1<=m<=N, 1<=n<=N}, as I originally asked for. The difference is the diagonal, the points where n = m. The map f from B to A that orders the pair (e.g. f(1,2) = (2,1)) is 1-1 on the diagonal but 2-1 everywhere else (f(1,2) = f(2,1)). Essentially, you're giving the diagonal elements more weight, and your probability of a relatively prime pair is smaller for any N.

 

Since there are only N elements on the diagonal and N^2 choices overall, this doesn't affect the limit though.

 

 

About the appearance of 1/zeta(2) in the solution: this is no accident, of course. Using the relation of phi with the Mobius function you can prove, for the real part of s > 2:

 

[math]\sum_{n=1}^\infty\frac{\phi(n)}{n^s}=\frac{\zeta(s-1)}{\zeta(s)}[/math]

 

If you look up Perron's formula, you will find an analytic approach to approximating the summatory function of the coefficients of a Dirichlet series like this one.
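The Dirichlet series identity is easy to check numerically at a single point, say s = 3. A rough sketch (truncation points are arbitrary):

from math import gcd

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def zeta(s, terms=100000):
    return sum(n**-s for n in range(1, terms + 1))   # truncated zeta

s = 3
lhs = sum(phi(n) / n**s for n in range(1, 3001))     # truncated Dirichlet series
print(lhs, zeta(s - 1) / zeta(s))                    # both close to ~1.368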


You've answered something that isn't quite equivalent.

Yes, I realized that while giving the answer. I already said in my previous post:

 

Actually, I have answered a slightly different question. You asked about picking two random numbers less than N. I answered the problem: given a number N, what is the chance that a random number less than N is relatively prime to N?

But I also explained that for large N, these have the same limit. I could have expressed it a bit more carefully. They are not equivalent, as I may have suggested in my previous post, but for large N their limit is the same, and hence I considered the question answered.

 

But it is good that you pointed out this non-equivalence explicitly. In other cases, such non-equivalences may indeed result in different answers.

