
Equations Problems


newton333


I doubt if there is an analytic solution. However, let y = 1/x.

Then you have a simpler-looking problem: solve f(y) = y exp(y) = 1.

f(0) = 0, f(1) = e, and f(y) is increasing.

So, using a numerical method (such as Newton's method), one can approximate the value of y and then get x = 1/y.
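
For anyone who wants to see this numerically, here is a minimal sketch in Mathematica (just one convenient way to do it, using the built-in FindRoot; the name ySol is only for illustration):

(* solve y*Exp[y] == 1 for y, then recover x = 1/y *)
ySol = y /. FindRoot[y Exp[y] == 1, {y, 1}]  (* about 0.567143 *)
1/ySol                                       (* about 1.76322 *)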



You beat me to the punch!!! Oh well, that's what I get for being hardcore and going into exquisite detail (jk). As mathematic suggested, Newton's method can be used to approximate the answer to this problem.

 

Newton's method, also called the Newton-Raphson method, is a root-finding algorithm that uses the first few terms of the Taylor series of a function [math]f\left(x\right)[/math] in the vicinity of a suspected root. Newton's method is also known as Newton's iteration, although that name is sometimes reserved for the application of Newton's method to computing square roots.

...

Unfortunately, this procedure can be unstable near a horizontal asymptote or a local extremum. However, with a good initial choice of the root's position, the algorithm can be applied iteratively to obtain

 

[math]x_{n+1}=x_{n}-\frac{f\left(x_{n}\right)}{f'\left(x_{n}\right)}[/math]
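
To see the iteration in action, here is a minimal Mathematica sketch (the example function is my own choice) with [math]f\left(x\right)=x^{2}-2[/math], whose positive root is [math]\sqrt{2}[/math]:

NestList[# - (#^2 - 2)/(2 #) &, 1.0, 5]
(* {1., 1.5, 1.41667, 1.41422, 1.41421, 1.41421}, rapidly approaching Sqrt[2] *)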

 

Although Newton's method is a root-finding algorithm, it can also be used to approximate the result of many different types of operations (e.g. [math]a[/math]th roots [math]\sqrt[a]{b}[/math], logarithms [math]\text{log}_{\,a}b[/math], nested logarithms [math]\text{nLog}_{\, a}b[/math], inverse trig functions [math]\text{sin}^{-1} x[/math] or [math]\text{cos}^{-1} x[/math], etc.), which is actually pretty cool when you think about how powerful it is when used for numerical analysis.

 

In order to use Newton's method for our purpose, we have to shift our function along the [math]y[/math] axis so that its root / zero corresponds to our [math]b[/math] value; in other words, to solve [math]f\left(x\right)=b[/math] we look for a root of [math]f\left(x\right)-b[/math] (I haven't been to bed in a couple of days, so I'm not sure if I worded that correctly, but the following example should clarify what I mean):

 

[math]x_{n+1}=x_{n}-\frac{f\left(x_{n}\right) - b}{f'\left(x_{n}\right)}[/math]
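
As a quick illustration of this shifted form (a sketch with numbers picked purely for the example), approximating [math]\text{log}_{\,2}8[/math] amounts to taking [math]f\left(x\right)=2^{x}[/math] and [math]b=8[/math], so that [math]f'\left(x\right)=2^{x}\,\text{ln}\,2[/math] and the iteration in Mathematica looks like:

NestList[# - (2^# - 8)/(2^# Log[2]) &, 2.0, 6]
(* converges to 3., which is Log[2, 8] *)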

 

For instance, to use Newton's method to find the [math]a[/math]th root of [math]b[/math], or [math]\sqrt[a]{b}[/math], we simply use the following algorithm:

 

[math]x_{n+1}=x_{n}-\frac{\left(x_{n}\right)^{a}-b}{a\,\left(x_{n}\right)^{a-1}}[/math] where [math]a[/math] and [math]b[/math] are constants, [math]f(x)=x^{a}-b[/math], and [math]f'(x)=a\left(x\right)^{a-1}[/math].
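
For a concrete check (the constants are chosen just for the example), taking [math]a=3[/math] and [math]b=7[/math] approximates [math]\sqrt[3]{7}[/math]:

NestList[# - (#^3 - 7)/(3 #^2) &, 2.0, 5]
(* {2., 1.91667, 1.91294, 1.91293, 1.91293, 1.91293}, and 7^(1/3) is about 1.91293 *)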

 

Because this method should converge to a value, all you have to do is repeat the process until you reach the desired accuracy. It is important to note that this method works for both unary (trig functions, etc.) and binary (addition, subtraction, etc.) operations. For now, let's rework your equation into a more interesting form (again, I'm probably making this more complicated than it should be due to a lack of sleep):

 

[math]e^{1/x}-x=0[/math]

 

Add [math]x[/math] to both sides:

 

[math]e^{1/x}=x[/math]

 

Take the natural log of both sides:

 

[math]\text{ln}\left(e^{1/x}\right)=\text{ln}\left(x\right)[/math]

 

Simplify the result:

 

[math]\frac{1}{x}=\text{ln}\left(x\right)[/math]

 

Multiply both sides by [math]x[/math]:

 

[math]1=x\,\text{ln}\left(x\right)[/math]

 

Let's get rid of that natural logarithm by raising [math]e[/math] to the power of both sides:

 

[math]e^1=e^{x\,\text{ln}\left(x\right)}[/math]

 

Simplify the result:

 

[math]e=x^x[/math]

 

Now that's an interesting result that we can work with. So, let's define our function as

 

[math]f(x)=x^x-b[/math]

 

where [math]b = e \approx 2.718281828459045...[/math] and with [math]x^x[/math] possibly representing a nested exponential with [math]a=2[/math] such that [math]x^{\left \langle 2 \right \rangle} = (x)^x[/math]. Anyway, without delving further into madness, we find that the derivative of [math]f(x)[/math] is

 

[math]f'(x)=x^x\left(1+\text{ln}\,x\right)[/math]
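
(If you'd rather not take my late-night calculus on faith, Mathematica will confirm this in one line:)

D[x^x, x]  (* gives x^x (1 + Log[x]), matching the expression above *)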

 

Substituting all of this into Newton's method, we finally arrive at our algorithm and can now determine the value of [math]x[/math] that satisfies the equation [math]e^{1/x} - x = 0[/math] :

 

[math]x_{n+1}=x_{n}-\frac{\left(x_{n}\right)^{\left(x_{n}\right)}-e}{\left(x_{n}\right)^{\left(x_{n}\right)}\left(1+\text{ln}\,\left(x_{n}\right)\right)}[/math]
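
Typed out directly as a Mathematica one-liner (just a quick sketch, before the more general code below):

NestList[# - (#^# - E)/(#^# (1 + Log[#])) &, 2.0, 6]
(* settles down to about 1.76322 after only a few steps *)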

 

For those who wish to play with Newton's method in Mathematica, feel free to use the following code:

 

NewtonsMethod[f_, {x_, x0_}, n_] := Nest[# - Function[x, f][#] / Derivative[1][Function[x, f]][#]&, x0, n]

 

where f_ is the function, x_ is the variable, x0_ is the initial value to try, and n_ is the number of recursions / iterations applied to the expression. For the given problem

 

NewtonsMethod[x^x - E, {x, 2.0}, 10] yields an answer of 1.7632228343518968... (in Mathematica, the constant e is the built-in symbol E).

 

By substituting this value into the original equation, we can be assured that we have found the correct answer:

 

[math]e^{1/1.7632228343518968}-1.7632228343518968\approx 0[/math]
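
(Mathematica can do this check as well; being floating-point arithmetic, the result is only zero to machine precision:)

E^(1/1.7632228343518968) - 1.7632228343518968
(* essentially zero: any residual is down at machine precision, around 10^-16 *)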

 

Of course, if you have Mathematica you could've just typed:

 

Solve[E^(1/x) - x == 0.0, x] and it would've given you the same answer of {{x [math]\to[/math] 1.76322}}
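
(Alternatively, FindRoot handles the transcendental form directly and returns the same number:)

FindRoot[E^(1/x) == x, {x, 2.0}]  (* {x -> 1.76322} *)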

 

Enjoy!!!

Edited by Daedalus

For transcendental equations like this one, you can often find a simple loop algorithm that runs on a calculator without any programming, which is an advantage over Newton's method and the like.

 

Here:

x = exp(1/x)

 

gives, without much rewriting, two possible loops:

-> Log -> Reciprocal (which, after a quick try, turns out not to converge)

-> Exp -> Reciprocal (which does converge, starting with 2 for instance; a quick sketch follows below)
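
Here is a minimal Mathematica sketch of that converging loop (the same repeated exp and 1/x keystrokes work on a plain calculator):

NestList[Exp[1/#] &, 2.0, 25]
(* oscillates around the root and slowly closes in on 1.76322; the error shrinks by a factor of about 0.567 per step *)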

 

Someone interested in maths could try to prove that at least one of the two directions of the loop must converge. At least, I have seen it work nearly every time, provided the initial value is lucky.

 

More generally, x can appear, after a transformation like exp(), in more than two terms; then you have more than two loop directions, since you choose which term returns x while the other terms are used in the forward direction.

 

Marc Schaefer, aka Enthalpy

