Q on e^x



The infinite series for [math]e^{x}[/math] is:

[math]

\sum_{n=0}^{\infty}\frac{x^{n}}{n!} = 1 + x + \frac{x^{2}}{2!} + \frac{x^{3}}{3!} + \frac{x^{4}}{4!} + \frac{x^{5}}{5!} + ... + \frac{x^{n}}{n!} + ...

[/math]

 

Taking the derivative of each term of the series one by one, we get

 

[math]

\frac{d}{dx}\sum_{n=0}^{\infty}\frac{x^{n}}{n!} = 0 + 1 + 2\cdot\frac{x^{1}}{2!} + 3\cdot\frac{x^{2}}{3!} + 4\cdot\frac{x^{3}}{4!} + 5\cdot\frac{x^{4}}{5!} + ... + n\cdot\frac{x^{n -1}}{n!} + ...

[/math]

 

Because [math] \frac{n}{n!} = \frac{1}{(n-1)!}[/math] for [math]n \geq 1[/math], the above simplifies to

 

[math]

\sum_{n=0}^{\infty}\frac{x^{n}}{n!} = 1 + x + \frac{x^{2}}{2!} + \frac{x^{3}}{3!} + \frac{x^{4}}{4!} + ... + \frac{x^{n}}{n!} + ...

[/math]

 

So, as you can see, [math]e^{x} = \frac{d}{dx}e^{x}[/math].

 

Well, crap... for some reason my LaTeX isn't working...


Since his LaTeX isn't working, I will fill in the missing steps.

 

[math] e^x \equiv \sum_{n=0}^{n=\infty} \frac{x^n}{n!} =1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+...+\frac{x^n}{n!}+... [/math]

 

Now, differentiate everything with respect to x, like so:

 

[math] \frac{d(e^x)}{dx} \equiv \frac{d}{dx} \sum_{n=0}^{n=\infty} \frac{x^n}{n!} =\frac{d}{dx}(1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+...+\frac{x^n}{n!}+...) [/math]

 

Which leads to this:

 

[math] \sum_{n=0}^{n=\infty}\frac{d}{dx} ( \frac{x^n}{n!} ) =1 +\frac{2x}{2!}+\frac{3x^2}{3!}+...+\frac{nx^{n-1}}{n!}+... [/math]

 

Which leads to this (the n = 0 term vanishes, so the sum effectively starts at n = 1):

 

[math] \sum_{n=1}^{n=\infty} n\frac{x^{n-1}}{n!} = 1 +\frac{x}{1!}+\frac{x^2}{2!}+...+\frac{x^{n-1}}{(n-1)!}+... [/math]

 

Which leads to:

 

[math] \sum_{n=1}^{n=\infty} \frac{x^{n-1}}{(n-1)!} = 1+x+\frac{x^2}{2!}+...+\frac{x^{n-1}}{(n-1)!}+... [/math]

 

As you can see, the infinite series on the RHS is the original series, with the general term written as x^(n-1)/(n-1)! rather than x^n/n!... but the series is the same.
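To make the term-by-term differentiation concrete, here is a quick numerical sanity check (a Python sketch of my own; series and series_derivative are just illustrative names): the derivative of the truncated series gives back the series itself, and both agree with exp(x).

 

[code]
import math

def series(x, terms=30):
    # Partial sum of sum_{n>=0} x^n / n!
    return sum(x**n / math.factorial(n) for n in range(terms))

def series_derivative(x, terms=30):
    # Term-by-term derivative: sum_{n>=1} n * x^(n-1) / n!
    return sum(n * x**(n - 1) / math.factorial(n) for n in range(1, terms))

x = 1.7
print(series(x))             # ~5.4739...
print(series_derivative(x))  # same value: the series is its own derivative
print(math.exp(x))           # ~5.4739...
[/code]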

 

Kind regards

Link to comment
Share on other sites

By using the definition of e^x (as a series) and considering the sequence of partial sums, how can you prove that f(x) = e^x is differentiable on R and that (e^x)' = e^x?

 

Here is Wolfram on partial sums:

 

Partial sums

 

You can use the concept to move from Riemann sums to the definition of a definite integral, if memory serves me.

 

So, let me see if I can answer your question the way you want it answered...

 

Using partial sums.

 

Suppose you are given the following sequence:

 

[math] a(k) [/math]

 

The first term of the sequence is a(1), the second term of the sequence is a(2), the third term of the sequence is a(3), the nth term of the sequence is a(n), and so on...

 

So let's take a specific example. Suppose that:

 

[math] a(k) = 3k^2 -5k+2 [/math]

 

We can begin to generate the terms of the sequence ourselves.

 

a(1) = 3(1)(1)-5(1)+2=3-5+2=-2+2=0

a(2) = 3(2)(2)-5(2)+2=12-10+2=2+2=4

a(3) = 3(3)(3)-5(3)+2=27-15+2=14

a(4) = 3(4)(4)-5(4)+2=48-20+2=30

 

and so on

 

So, we can represent the sequence as:

 

(0,4,14,30,...)
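As a quick check (an illustrative Python sketch of my own, not part of the original post), these terms can be generated mechanically:

 

[code]
# Generate the first few terms of a(k) = 3k^2 - 5k + 2
def a(k):
    return 3 * k**2 - 5 * k + 2

print([a(k) for k in range(1, 5)])  # [0, 4, 14, 30]
[/code]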

 

And we can use the method of "differencing" to go from the terms of the sequence back to the polynomial generating function, but I will save that for some other time.

 

Definition Of Partial Sum

 

Given a sequence a(k), the sum S_N of the first N terms of the sequence, beginning at a(1) and terminating at a(N), is given by:

 

[math] S_N \equiv \sum_{k=1}^{k=N} a(k) [/math]

 

So let us define e^x as follows:

 

[math] e^x \equiv \sum_{n=0}^{n=\infty} \frac{x^n}{n!} [/math]

 

and investigate the sequence of partial sums, given that a(k) = x^k/k!. Note that this series starts at n = 0, with a(0) = 1, so here the partial sums should begin at the constant term.

 

The first term in the sequence of partial sums is S0.

The second term in the sequence of partial sums is S1.

The third term in the sequence of partial sums is S2.

The fourth term in the sequence of partial sums is S3, and so on.

 

[math] a(k) = \frac{x^k}{k!} [/math]

 

a(0) = 1

a(1) = x

a(2) = x^2/2!

a(3) = x^3/3!

a(4) = x^4/4!

and so on

 

S0 = a(0) = 1

S1 = a(0)+a(1) = 1+x

S2 = a(0)+a(1)+a(2) = 1+x+x^2/2!

S3 = a(0)+a(1)+a(2)+a(3) = 1+x+x^2/2!+x^3/3!

and so on

 

The sequence of partial sums is given by:

(S0,S1,S2,...,SN,...)
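As a quick numerical check (a Python sketch of my own; partial_sum is a hypothetical helper name), the partial sums S_N home in on exp(x) as N grows:

 

[code]
import math

def partial_sum(x, N):
    # S_N = x^0/0! + x^1/1! + ... + x^N/N!
    return sum(x**k / math.factorial(k) for k in range(N + 1))

x = 2.0
for N in (1, 2, 5, 10, 20):
    print(N, partial_sum(x, N))   # 3.0, 5.0, 7.2666..., 7.3889..., ...
print("exp(2) =", math.exp(2.0))  # 7.3890560989...
[/code]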

 

Now, you want to go from considering the sequence of partial sums to first proving that e^x is differentiable, and second, proving that e^x is equal to its own derivative.

 

In order to prove that a function f(x) is differentiable at a point Z, you first have to check that f(Z) is defined. If it is, you then have to check that the difference quotient (f(x) - f(Z))/(x - Z) has a limit as x approaches Z; equivalently, its limits as x approaches Z from the right and from the left must both exist and be equal. (Checking the one-sided limits of f itself only gets you continuity, not differentiability.)
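In symbols, the limit in question is:

 

[math] f'(Z) = \lim_{x \to Z}\frac{f(x)-f(Z)}{x-Z} [/math]

 

and f is differentiable at Z exactly when this limit exists.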

 

Let me check to make sure that's right.

 

 

Here is something on Banach Spaces


By using the definition of e^x (as a series) and considering the sequence of partial sums, how can you prove that f(x) = e^x is differentiable on R and that (e^x)' = e^x?

 

Eh? Proving that a function is differentiable on R by using its Taylor series? Don't you have to assume that the function is differentiable at the center of the series in order to even produce that series? Don't you further have to use the fact that (e^x)' = e^x to generate the series?

 

To prove differentiability, you should be looking at the definition of differentiability. And to calculate the derivative of a function, you should be looking at the definition of the derivative.

 

Once you have established those things, then you can go about talking about Taylor series.


The only one talking about Taylor series here is you. The sum over x^n/n! is in fact the definition of exp(x).

 

That is the Taylor series for exp(x), centered at zero.

 

But OK, I've Googled up the following page:

 

http://encyclopedia.laborlawtalk.com/Definitions_of_the_exponential_function

 

In the calculus books I normally refer to, Characterizations 3 and 4 are normally presented as the "definition" of e^x, and the series is derived from that.


 

 

How did Euler do it?

 

This is a biography:

 

Leonhard Euler

 

If you read down, you will see this:

 

In 1765 Euler published another major work on mechanics, Theoria motus corporum solidorum, in which he decomposed the motion of a solid into a rectilinear motion and a rotational motion. He considered the Euler angles and studied rotational problems which were motivated by the problem of the precession of the equinoxes.

 

 

That was 1765.

 

There is also this comment here:

 

Other work done by Euler on infinite series included the introduction of his famous Euler's constant, in 1735, which he showed to be the limit of

 

[math] \frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{n} - \log_e n [/math]

 

 

 

And the sum appearing there is a partial sum of the harmonic series.

Link to comment
Share on other sites

The expansion for exp(x) uses the fact that it is its own derivative. The natural base is "fudged" so that it's its own derivative. Plug it into the definition of the derivative and you end up with mf(x), where m is the gradient at (0, 1). So in setting m = 1 the only problem remaining is finding the base that has this property, for which you use a Maclaurin expansion (a Taylor series about a = 0) with x = 1.
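To spell that step out (my own filling-in of the algebra the post describes, for a general base a):

 

[math] \frac{d}{dx}a^{x} = \lim_{h \to 0}\frac{a^{x+h}-a^{x}}{h} = a^{x}\lim_{h \to 0}\frac{a^{h}-1}{h} = m\,a^{x} [/math]

 

Here m depends only on the base a, and e is the base for which m = 1.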

Link to comment
Share on other sites

Of course it is the Taylor series when you expand it. The point of a Taylor series is that it often converges to the function you expand (and it does so very well in this case).

Nevertheless, at least one definition (and the one everyone I know uses, as far as I can tell) is the one via the sum. The advantage over definition 3 is that you don't need to already have the log function (which I only know as being defined via the exp function, so you run into a loop there). Using 4 already implies that exp is differentiable, so there's no need to show that anymore. Definition 1 has no real use as far as I can see at the moment.

EDIT: But there's no need to discuss which definition is the best one. Each author will have his/her reasons for presenting it the way he/she does. In the case of my book, for example, that's because it's a book written mainly for mathematicians, and they tend to develop stuff from the very bottom. A book for natural scientists or engineers might well favor introducing the exp function by one of its main uses, namely that it solves linear differential equations.


In this post, I am going to try to figure out what the heck Euler was trying to do by relating the harmonic series to log base e. I will try to answer my own question.

 

Ok so...

 

Harmonic Series

 

First, start off with the finite sum, defined as follows:

 

[math] H(k) = \sum_{n=1}^{n=k} \frac{1}{n} [/math]

 

So, for example, the third harmonic number is:

 

H(3) = 1/1+1/2+1/3=6/6+3/6+2/6=(6+3+2)/6=11/6

 

Now, in the limit as k goes to infinity, we get the harmonic series (which diverges):

 

[math] H(\infty) = \sum_{n=1}^{n=\infty} \frac{1}{n} [/math]
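As an aside tying this back to the Euler's-constant quote above, here is a small Python sketch (my own; H is just an illustrative helper name) showing H(k) - log(k) settling toward 0.5772...:

 

[code]
import math

def H(k):
    # k-th harmonic number: 1/1 + 1/2 + ... + 1/k
    return sum(1.0 / n for n in range(1, k + 1))

print(H(3))  # 1.8333... = 11/6, matching the worked example above
for k in (10, 1000, 100000):
    print(k, H(k) - math.log(k))  # -> 0.5772... (Euler's constant)
[/code]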

 

So why was Euler messing with this?

 

 

Here is an article on John Napier.

 

He died in 1617; Newton was born in 1642, so Napier died 25 years before Newton was born.

 

That article was dumb.

 

 

In introductory calculus texts, e^x is often introduced as follows:

 

[math] e^x = \lim_{n \to \infty} (1+\frac{x}{n})^n [/math]

 

And prior to this, the student is introduced to limits.

 

 

So look at the case where x=1. Then you have:

 

[math] e = \lim_{n \to \infty} (1+\frac{1}{n})^n [/math]

 

And inside the parentheses you can see something resembling a harmonic term, 1/n.
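As a quick numerical illustration (my own sketch), you can watch this limit converge:

 

[code]
# (1 + 1/n)^n creeps up toward e = 2.71828... as n grows
for n in (1, 10, 100, 10000, 1000000):
    print(n, (1 + 1 / n) ** n)
# n=1 gives 2.0; n=1000000 gives ~2.7182804..., close to e
[/code]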

 

What in the world led him to that? It just doesn't pop out of thin air.

 

You need to understand the mathematics of the time.

 

1707-Leonhard Euler-1783

 

1501-Girolamo Cardano-1576

 

The first mention of the square root of -1 in print appears to have been by Girolamo Cardano, in 1545; however, he dismissed such numbers as useless. In 1572, Rafael Bombelli used them in calculations in his work L'algebra.

 

In 1702, Leibniz appears to have mistrusted root(-1), and as late as 1770 Euler argued that root(-2) times root(-3) = root(6).

 

See this document for a source: Geometry and Complex arithmetic

 

1698-Colin Maclaurin-1746

 

Maclaurin's biography

 

The following quote is taken from the article above:

 

Maclaurin appealed to the geometrical methods of the ancient Greeks and to Archimedes' method of exhaustion in attempting to put Newton's calculus on a rigorous footing. It is in the Treatise of fluxions that Maclaurin uses the special case of Taylor's series now named after him and for which he is undoubtedly best remembered today. The Maclaurin series was not an idea discovered independently of the more general result of Taylor for Maclaurin acknowledges Taylor's contribution. Another important result given by Maclaurin, which has not been named after him or any other mathematician, is the important integral test for the convergence of an infinite series. The Treatise of fluxions is not simply a work designed to put the calculus on a rigorous basis, for Maclaurin gave many applications of calculus in the work. For example he investigates the mutual attraction of two ellipsoids of revolution as an application of the methods he gives.

 

1685-Brook Taylor-1731

 

Taylor's biography

 

The following quote is taken from the article above:

 

Taylor added to mathematics a new branch now called the "calculus of finite differences", invented integration by parts, and discovered the celebrated series known as Taylor's expansion. These ideas appear in his book Methodus incrementorum directa et inversa of 1715 referred to above. In fact the first mention by Taylor of a version of what is today called Taylor's Theorem appears in a letter which he wrote to Machin on 26 July 1712. In this letter Taylor explains carefully where he got the idea from.

 

 

OK, that gives you an idea of the state of mathematics between 1500 and 1800. And for those who don't know, Cardano published "Cardan's formula" for solving the cubic equation, but it appears that he stole the idea from Niccolo Tartaglia.

 

But what I am trying to figure out, is where Euler got the idea to take the limit of (1+x/n)^n.

 

That's such an odd thing to just do for no reason at all.

 

Here is a good article on e=2.71828...

 

2.71828...


 

I don't think that is a relevant question at all. I mean, there are a lot of other theorems where one could ask where the hell the idea came from (like Schinzel's Hypothesis H).


I don't know where Euler got the idea, but there are good reasons why one would think of it. Say you have a method for estimating exp(x) when x is small. For example, exp(x) ~ 1 + x when x is small. Now, x/n is small if n is large, and exp(x) = (exp(x/n))^n ~ (1 + x/n)^n if n is large.
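In symbols, that argument (as I read it) is:

 

[math] e^{x} = \left(e^{x/n}\right)^{n} \approx \left(1+\frac{x}{n}\right)^{n} \quad \text{for large } n [/math]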
