
Log help


sriram


I've never even seen a log table; I knew they existed, but calculators do it all now. It is possible to do some logs mentally; it's easier when you're dealing with whole numbers, obviously. Surely once you know what logs are you can work a few of them out?

 

The basic definition is that if [math]a^{b}=c[/math] then [math]\log_{a}c=b[/math].
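A quick way to see the definition numerically (Python here, purely as an illustration):

```python
import math

# 2**3 == 8, so the log of 8 to base 2 is 3, exactly as the definition says
print(math.log2(8))        # 3.0
print(2 ** math.log2(8))   # back to 8.0
```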


Computing a logarithm of a general number without special properties is quite a tedious job when it needs to be done by hand. If you want to have it done by means of a piece of software, then the task is not that difficult.

 

The way this is done in numerical software (and also in hardware, like math coprocessors) is to take the interval [1, 2). A very good approximating polynomial in x is determined for the log value on this interval. These polynomials are published in tables, but one can also determine them oneself by means of interpolation software. Once you have such a polynomial, it is easy to compute log(x) on that interval.
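As a rough illustration of that idea (a sketch only, not how a real library does it; the names and the node count are my own choices), one can interpolate log at Chebyshev nodes on the interval and evaluate the resulting polynomial in barycentric form:

```python
import math

# Interpolate log at N Chebyshev nodes on [1, 2] and evaluate the
# interpolating polynomial in barycentric form.
N = 16
nodes = [1.5 + 0.5 * math.cos((2 * i + 1) * math.pi / (2 * N)) for i in range(N)]
values = [math.log(x) for x in nodes]

# barycentric weights: w_i = 1 / prod_{j != i} (x_i - x_j)
weights = []
for i, xi in enumerate(nodes):
    w = 1.0
    for j, xj in enumerate(nodes):
        if j != i:
            w /= xi - xj
    weights.append(w)

def log_approx(x):
    """Evaluate the interpolating polynomial at x."""
    num = den = 0.0
    for xi, wi, fi in zip(nodes, weights, values):
        if x == xi:          # hit a node exactly
            return fi
        t = wi / (x - xi)
        num += t * fi
        den += t
    return num / den

print(abs(log_approx(1.7) - math.log(1.7)))  # tiny: the polynomial is very accurate here
```

With 16 nodes the error across the whole interval is far below single-precision; real libraries use fixed, pre-computed coefficients instead of building the fit at run time.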

 

If you want to obtain the log of any number z, then that number must be written as z0*2ⁿ, with z0 in the interval [1, 2). E.g. for 7, we write 1.75*2².

 

Now for general z, we have log(z) = log(z0) + n*log(2).

 

In binary computers, this type of algorithm can be very fast. Determining n is easy, due to the way numbers are represented in the computer hardware. The value of log(2) can be stored as a constant to the desired precision.

 

With this algorithm, computation of the log() functions is reduced to some multiplications and additions, plus a little (possibly machine-dependent) bit manipulation to determine n.
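A minimal Python sketch of the whole scheme, where math.frexp does the bit manipulation that extracts n (the polynomial step is faked with math.log on the reduced argument, just to show the split):

```python
import math

# Range reduction as described above.  math.frexp splits z into
# m * 2**e with m in [0.5, 1).  A real library would now apply its
# fixed polynomial to the reduced argument; math.log stands in for it.
LOG2 = math.log(2.0)          # the stored constant log(2)

def my_log(z):
    m, e = math.frexp(z)      # z = m * 2**e,  0.5 <= m < 1
    m, e = m * 2.0, e - 1     # shift m into [1, 2) to match the interval above
    return math.log(m) + e * LOG2

print(my_log(7.0), math.log(7.0))  # the two agree
```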

 

There is one flaw in this method: the relative precision of logarithms for x near 1 is quite bad. The reason is that log(1+k) = k + O(k²). A number 1+k near 1, with k close to zero, is itself stored at full precision, but when k is recovered from it, only a few significant bits remain. For this reason, many math libraries offer a more complete set of functions, including a function log1p(k), which is log(1+k): one can supply the value k at full precision and also get an answer at full precision. For the same reason, many libraries also supply a function expm1(x), which is exp(x) - 1, for x close to 0; otherwise one would lose a lot of bits of precision, due to the term 1, which swamps all bits of precision for small x.
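In Python these live in the math module as math.log1p and math.expm1, and the precision difference is easy to see:

```python
import math

k = 1e-12
naive = math.log(1 + k)   # 1 + k is rounded first; most digits of k are lost
better = math.log1p(k)    # computed from k directly, at full precision

print(naive)   # off after only a handful of significant digits
print(better)  # ~ 1e-12, accurate to machine precision
```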


The tree, how would you want to use such a method for approximating a logarithm? You still would need a method to approximate a transcendental function (most likely the exp() function), so what is the gain? It is better to directly approximate a log() by means of some polynomial approximation.


The tree, how would you want to use such a method for approximating a logarithm?
Such a method: you mean the one that I mentioned? (I never recommended it.) Well, I guess if I were looking for [math]\log_{a}c[/math] then I'd make up a function f(x) = aˣ - c, find two values of x where f(x) is positive and negative, take that as my interval, and then start with the whole bisection thing (I'm sure you know that really).
so what is the gain?
There isn't one; I was just making the point that this kind of iterative procedure doesn't reek of practicality.
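For what it's worth, the bisection idea sketched above looks roughly like this in Python (my own names; it assumes a > 1 and c ≥ 1 so a bracket is easy to find):

```python
import math

def log_by_bisection(a, c, steps=100):
    """Find x with a**x = c by bisecting on f(x) = a**x - c."""
    lo, hi = 0.0, 1.0
    while a ** hi < c:        # grow the bracket until f(hi) >= 0
        hi *= 2.0
    for _ in range(steps):    # each step halves the interval
        mid = (lo + hi) / 2.0
        if a ** mid < c:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

print(log_by_bisection(2.0, 7.0))   # ≈ 2.807, i.e. log2(7)
```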

This kind of iterative procedure is not of much value for computing log(x) for an arbitrary base a. But such procedures have immense importance in general situations.

 

Many problems in physics, computer science and engineering result in equations of the form f(x) = 0. The function f(x) can be evaluated easily (at least by computers), but there is no analytic expression for x. In engineering we usually are not interested in an analytic expression for x anyway; we just want an approximation. And this is where iterative methods are of the greatest importance.

 

Just an example: look at the equation x + exp(x) = 0. Whatever you try, you will not find a way of expressing x in standard functions like log, sin, cos, exp, etc. But with your procedure you will quickly find an approximation. Take the left value equal to -1, the right value equal to 0, and iterate. In n steps, you have reduced the error to the order of 2⁻ⁿ. After e.g. 30 steps, you have the value of x to roughly 9 decimal digits.
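That example in Python, with the exact bracket from the post:

```python
import math

# Solve x + exp(x) = 0 by bisection, starting from the bracket [-1, 0].
def f(x):
    return x + math.exp(x)

lo, hi = -1.0, 0.0            # f(-1) < 0 and f(0) > 0, so a root lies between
for _ in range(60):           # each step halves the interval: error ~ 2**-n
    mid = (lo + hi) / 2.0
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

root = (lo + hi) / 2.0
print(root)  # ≈ -0.567143
```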

 

There are even better iterative methods. There are methods which, once you are close enough to the desired value, give you a doubling of precision at each step performed. What counts as "close enough" depends on the actual situation. With the bisection method you mentioned, you gain a little less than one digit per 3 steps.


Of course I mentioned interval bisection because it is so incredibly easy, not because it is any good.

 

If one has a fairly decent approximation in the first place then I think the Newton-Raphson method would work for this, and that is quick, even on paper. Although I still maintain that doing these things on paper is silly.
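A sketch of that: Newton-Raphson applied to f(x) = exp(x) - z, whose root is log(z). The update x - f(x)/f'(x) simplifies nicely. (The starting guess and iteration count here are my own picks; a starting guess far below the root can make z*exp(-x) blow up, so this is only the happy path.)

```python
import math

# Newton-Raphson for log(z): solve exp(x) - z = 0.
# x_next = x - (exp(x) - z)/exp(x) = x - 1 + z*exp(-x)
def newton_log(z, x0=1.0, iters=8):
    x = x0
    for _ in range(iters):
        x = x - 1.0 + z * math.exp(-x)
    return x

print(newton_log(7.0), math.log(7.0))  # the two agree
```

Once the iterate is close, each step roughly doubles the number of correct digits, which is exactly the behaviour described a few posts up.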


Yes, you're right. Bisection is kind of last resort, when more advanced methods do not converge, or when you are dealing with a very irregular function.

 

Doing these things on paper is not anything you want to do unless you are a mental masochist :D.



Umm, basically, if you want to work out a log, you use a Taylor power series for the natural log, and then you can use the identity [math]\log_{a}b=\frac{\log_{n}b}{\log_{n}a}[/math].

It actually doesn't take that long if you only do the first few terms and happen to have a small x. If you don't have a small x, factorise it into its primes and bring the powers down; much easier. Anyway, using this you get about 3 decimals of accuracy in about 2 or 3 minutes, which I guess is alright...

 

Oh, I almost forgot the actual power series, lol:

 

[math]\ln(1+x)=x-\frac{x^{2}}{2}+\frac{x^{3}}{3}-\frac{x^{4}}{4}+\cdots[/math]
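Spelled out in Python (a sketch of the recipe above: partial sums of the series, then the change-of-base identity; only sensible when both arguments are reasonably close to 1):

```python
import math

# Partial sums of ln(1+x) = x - x^2/2 + x^3/3 - ..., valid for |x| < 1.
def ln_series(y, terms=40):
    x = y - 1.0
    return sum((-1) ** (n + 1) * x ** n / n for n in range(1, terms + 1))

# change of base: log_a b = ln b / ln a
def log_base(a, b):
    return ln_series(b) / ln_series(a)

print(log_base(1.5, 1.2))  # log of 1.2 to base 1.5
```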

 

Oh, and did I mention that it's even worse with non-integers... happy calculating.


Ragib, for log(x) there is no power series around x = 0. You can use a Taylor power series for log(1+x), but this is VERY cumbersome. The radius of convergence of the series for log(1+x) is only 1, and near the edge of that interval it converges very slowly, so it is not of any practical use. The series for the functions sin(x), cos(x) and exp(x) converge rapidly, even for moderately large x, but the series for the inverse functions converge really slowly.
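The difference in convergence speed is easy to demonstrate: count how many terms each series needs for a fixed tolerance (a quick throwaway script; the tolerance is chosen arbitrarily):

```python
import math

# Terms of log(1+x) = x - x^2/2 + ... summed until they drop below tol.
def log1p_terms(x, tol=1e-10):
    total, n, term = 0.0, 1, x          # term holds x**n
    while abs(term / n) > tol:
        total += (-1) ** (n + 1) * term / n
        n += 1
        term *= x
    return total, n

# Terms of exp(x) = 1 + x + x^2/2! + ... summed until they drop below tol.
def exp_terms(x, tol=1e-10):
    total, n, term = 0.0, 0, 1.0        # term holds x**n / n!
    while abs(term) > tol:
        total += term
        n += 1
        term *= x / n
    return total, n

# the log series needs far more terms than the exp series at x = 0.9
print(log1p_terms(0.9)[1], exp_terms(0.9)[1])
```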

 

You can make good approximate power series for log(x), but these are not derived by means of Taylor expansions; they come from e.g. Chebyshev polynomial approximation. Also, polynomial interpolation using samples z with |z| = 1 on the unit circle in the complex plane gives good approximating polynomials. In fact, you are doing a DFT in that case.
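To make the Chebyshev/DFT remark concrete, here is a small pure-Python sketch: sample log at Chebyshev points on [1, 2] and recover the Chebyshev coefficients with the cosine-transform formula, which is exactly the DFT connection (N and the interval mapping are my own choices):

```python
import math

# Sample log at N Chebyshev points of [1, 2] ...
N = 16
theta = [(j + 0.5) * math.pi / N for j in range(N)]
samples = [math.log(1.5 + 0.5 * math.cos(t)) for t in theta]

# ... and recover Chebyshev coefficients: c_k = (2/N) sum_j f_j cos(k*theta_j)
coeffs = []
for k in range(N):
    c = 2.0 / N * sum(f * math.cos(k * t) for f, t in zip(samples, theta))
    coeffs.append(c)
coeffs[0] /= 2.0   # the k = 0 coefficient is halved by convention

def cheb_log(x):
    """Evaluate the Chebyshev approximation of log on [1, 2]."""
    s = 2.0 * x - 3.0          # map [1, 2] onto [-1, 1]
    return sum(c * math.cos(k * math.acos(s)) for k, c in enumerate(coeffs))

print(abs(cheb_log(1.7) - math.log(1.7)))  # tiny error across the interval
```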


Aww damn it, I was hoping no one would notice, lol... Yeah, it says it only converges for x between -1 and 1 on the link I gave anyway. To tell you the truth, I'm not that knowledgeable when it comes to this kind of math. I've heard of Chebyshev's method of polynomial approximation, but I've never really tried to apply it before. I guess if you do, you remember it when you need it, like now...

