
Kepler Orbit


h4tt3n


Hello,

 

For the last few years I've been programming a number of small apps for visualizing gravitational interaction, elliptic orbits, and that sort of thing. So far I've done this the Newtonian way, by using an integration algorithm to find the new velocities and positions of each body based on gravitational interaction. Now I'd like to express movement as Kepler orbits instead, and in order to be able to do that I started reading through the Wikipedia articles on Kepler orbit, Kepler's laws of planetary motion, and orbital elements.

 

Strangely, I can't seem to find a way to calculate an orbiting body's position as a function of time. It seems like there's always an element missing, even though all the data defining the orbit is there - such as the mass of both bodies, semimajor axis, eccentricity, and so on. Could anybody here please shed some light on the missing part? Clearly, I've stared myself blind on the problem.

 

PS. I have taken the liberty of posting my question both here and in "Classical Physics"; hope that's OK.

 

Cheers,

Mike


A couple of caveats first. One, there are many different sets of orbital elements; I will discuss a few of the variants. Two, Keplerian orbits are an approximation. They are valid only in the Newtonian two-body problem. The existence of multiple bodies (e.g., the solar system), a central body with a non-spherical shape (e.g., the Earth), and general relativity (e.g., relativistic precession) mean that Keplerian orbits are not quite exact.

 

The canonical set of Keplerian orbital elements is

  • [math]a[/math], the semi-major axis of the orbit,
  • [math]e[/math], the eccentricity of the orbit,
  • [math]i[/math], the inclination of the orbit relative to some reference plane,
  • [math]\Omega[/math], the right ascension of the ascending node referenced to that reference plane,
  • [math]\omega[/math], the argument of periapsis, and
  • [math]T_{\mbox{peri}}[/math], the time of periapsis passage.

 

You also need to know the gravitational coupling parameter of the central body. Naively this is the product of the universal gravitational constant G and the central body's mass M. For the Sun and most of the planets, the product [math]\mu=GM[/math] is known to a much higher precision than are G and M. You can find a list of standard gravitational parameters at http://en.wikipedia.org/wiki/Standard_gravitational_parameter.

 

The mean motion [math]n[/math] (the rate at which the mean anomaly advances) is a function of the coupling constant [math]\mu[/math] and the semi-major axis length [math]a[/math]. For an orbiting body of inconsequential mass, it is given by

 

[math]n = \sqrt{\frac{\mu}{a^3}}[/math]
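
As a quick sanity check, plugging in the Sun's gravitational parameter [math]\mu_\odot \approx 1.327\times10^{20}\,\mathrm{m^3/s^2}[/math] and the Earth's semi-major axis [math]a \approx 1.496\times10^{11}\,\mathrm{m}[/math] gives [math]n \approx 1.99\times10^{-7}\,\mathrm{rad/s}[/math], i.e. an orbital period of [math]2\pi/n \approx 3.16\times10^{7}\,\mathrm{s} \approx 365[/math] days.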

 

If the mass of the orbiting body is a significant fraction of the mass of the central body (e.g., Jupiter), it is better to use

 

 

[math]n = \sqrt{\frac{\mu_c+\mu_p}{a^3}}[/math]

 

Some variants of the above list of orbital elements include

  • Instead of [math]T_{\mbox{peri}}[/math], some specify an epoch time [math]T_0[/math] and the mean anomaly [math]m_0[/math] (or the true anomaly [math]\nu_0[/math]) at that epoch time. You can calculate [math]T_{\mbox{peri}}[/math] via [math]T_{\mbox{peri}} = T_0 - m_0/n[/math]. If you are given [math]\nu_0[/math] instead you will have to compute the corresponding eccentric anomaly and then compute [math]m_0[/math] using Kepler's equation.
  • Instead of [math]a[/math] and [math]e[/math], some specify the perifocal and apofocal distance or the perifocal and apofocal altitude. Going from these to semi-major axis and eccentricity is again simple math via [math]r_p = a(1-e)[/math] and [math]r_a = a(1+e)[/math] (the solved-for form is given just below this list). You will need to incorporate the planet's radius if you are given altitudes instead of distances.
  • Instead of [math]\omega[/math], some specify the argument of latitude [math]u_0=\nu_0+\omega[/math].
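
In case it saves someone a little algebra, the second bullet above solved for [math]a[/math] and [math]e[/math] is simply

[math]a = \frac{r_a + r_p}{2}, \qquad e = \frac{r_a - r_p}{r_a + r_p}[/math]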

 

 

Assuming you have that canonical set, to calculate the Cartesian position at some point in time [math]t[/math] you will need to

  • Calculate the mean anomaly at that time:
     
    [math]m(t) = n(t-T_{\mbox{peri}})[/math]
     
     
  • Calculate the corresponding eccentric anomaly. The relation between mean and eccentric anomaly is called Kepler's equation,
     
    [math]m = E - e\sin E[/math]
     
    Note that this specifies [math]m[/math] as a function of [math]E[/math] and [math]e[/math]. You want [math]E[/math] as a function of [math]m[/math] and [math]e[/math], the inverse of Kepler's equation. Since Kepler's equation is transcendental, the inverse cannot be expressed algebraically. Solving for [math]E[/math] given [math]m[/math] and [math]e[/math] is called "Kepler's problem". Using a Newton-Raphson iterator works very well for small eccentricities. For largish eccentricities you will need to do something more advanced such as using Bessel's solution.
     
     
  • Calculate the corresponding true anomaly via
     
    [math]\sqrt{1-e}\,\tan\frac{\nu}{2} = \sqrt{1+e}\,\tan\frac{E}{2}[/math]
     
     
  • Calculate the position in the canonical plane via
     
    [math]r = \frac{a(1-e^2)}{1+e\cos\nu}[/math]
    [math]x_c = r\cos\nu[/math]
    [math]y_c = r\sin\nu[/math]
     
     
  • The canonical plane is related to your reference axis as a rotation about the z axis by the right ascension angle [math]\Omega[/math] followed by a rotation about the rotated x axis by the inclination [math]i[/math]. Reverse this to transform the canonical position [x_c,y_c,0] to the desired [x,y,z] position.

 

I hate, loathe, ..., expletive deleted this new system.

 

The last two elements of the last list in the above post should be

 

 

  • Calculate the position in the canonical plane via
     
    [math]r = \frac{a(1-e^2)}{1+e\cos\nu}[/math]
    [math]x_c = r\cos\nu[/math]
    [math]y_c = r\sin\nu[/math]
  • The canonical plane is related to your reference axis as a rotation about the z axis by the right ascension angle [math]\Omega[/math], followed by a rotation about the rotated x axis by the inclination [math]i[/math], followed by a rotation about the rotated z axis by the argument of periapsis angle [math]\omega[/math]. Reverse this to transform the canonical position [x_c,y_c,0] to the desired [x,y,z] position.
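
To make that recipe concrete, here is a minimal Python sketch of the whole chain (mean anomaly to eccentric anomaly to true anomaly to rotated Cartesian position). The function and variable names are purely illustrative, and the Kepler solver used here is the naive Newton-Raphson one; see the rest of the thread for when and how it needs safeguarding.

import math

def kepler_E(M, e, tol=1e-12, max_iter=50):
    # Solve Kepler's equation M = E - e*sin(E) for E by Newton-Raphson,
    # using M itself as the initial guess (fine for small to moderate e).
    E = M
    for _ in range(max_iter):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

def elements_to_position(a, e, i, Omega, omega, T_peri, mu, t):
    n = math.sqrt(mu / a**3)               # mean motion
    M = n * (t - T_peri)                   # mean anomaly at time t
    E = kepler_E(M, e)                     # eccentric anomaly
    # true anomaly from the half-angle relation
    nu = 2.0 * math.atan2(math.sqrt(1.0 + e) * math.sin(E / 2.0),
                          math.sqrt(1.0 - e) * math.cos(E / 2.0))
    # position in the canonical (orbital) plane
    r = a * (1.0 - e**2) / (1.0 + e * math.cos(nu))
    x_c, y_c = r * math.cos(nu), r * math.sin(nu)
    # rotate by argument of periapsis, inclination, and node (3-1-3 sequence)
    cO, sO = math.cos(Omega), math.sin(Omega)
    ci, si = math.cos(i), math.sin(i)
    cw, sw = math.cos(omega), math.sin(omega)
    x = (cO * cw - sO * sw * ci) * x_c + (-cO * sw - sO * cw * ci) * y_c
    y = (sO * cw + cO * sw * ci) * x_c + (-sO * sw + cO * cw * ci) * y_c
    z = (sw * si) * x_c + (cw * si) * y_c
    return x, y, z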


Thanks for the elaborate answer D H.

 

When asking, I did so in the hope that there was some way of determining E (eccentric anomaly) other than solving Kepler's equation using an iterative method, or even a solution not involving E at all. I find it strange that it is possible to isolate all orbital elements from a body's state vectors (I did so in an app that draws the elliptic orbit of a body based on velocity and position only), but you can't seem to go the other way around: finding a body's state vectors from orbital elements and time.

 

Cheers,

Mike


Sorry, I meant algebraically. What bothers me about this method is that you need an iterative solver, which will only return eccentric anomaly at a limited precision, and that the accuracy of the result furthermore depends on eccentricity, giving lower precision at higher eccentricities. I was of the impression that you could find an orbiting body's position at time t exactly (within the limits of classical mechanics), not just approximately. My problem is, I'm no longer sure what I would gain by leaving the Newtonian method for the Keplerian one. A sixth order symplectic solver like the one I'm using now is very accurate.

 

Cheers,

Mike


What bothers me about this method is that you need an iterative solver, which will only return eccentric anomaly at a limited precision, and that the accuracy of the result furthermore depends on eccentricity, giving lower precision at higher eccentricities. I was of the impression that you could find an orbiting body's position at time t exactly (within the limits of classical mechanics), not just approximately.

You can solve Kepler's problem to any desired degree of accuracy. A very high degree of accuracy might require some sophisticated techniques, but that is just an implementation detail. Your "sixth order symplectic solver" is only an approximation. If you are dealing with a two-body problem you should be viewing the orbital elements solution as the analytic solution against which you can judge the accuracy of your numerical integration.

 

If you are dealing with something beyond the two-body problem, such as our solar system with multiple planets that interact gravitationally with one another as well as with the Sun, a satellite in orbit about the non-spherical Earth, or a satellite making a hyperbolic gravity-slingshot fly-by of a planet, then you are stuck in a situation in which no analytic solution exists. You need to use some numerical integration technique. Even then, using an orbital element approach can be "better" than your presumably Cartesian approach. One approach is to use an Encke-style integrator. Another is to integrate Lagrange's planetary equations or some variant.

 

There are a small number of metrics in the "my integrator is better than yours" game:

  • What is the best that a given integrator can possibly do?
    Every numerical integrator has to trade off between two extremes. Errors inherent in the integration scheme dominate when the time step is too large while numerical errors dominate at small time steps. The optimal step is a balance between these extremes. The error at this optimal step size varies from integrator to integrator.
  • This optimal time step is one of two main factors in determining the amount of CPU time needed to achieve some desired degree of accuracy. The other is the number of derivative calculations per step. Putting these together yields the number of derivative calls needed to achieve some desired degree of accuracy over a given period of time. This varies dramatically from integrator to integrator.
  • Every numerical integrator starts producing garbage eventually. The time span over which a numerical integrator produces meaningful results varies from integrator to integrator.


Ok, I guess I have to admit that you're right and I'm wrong. There are a large number of methods with which E can be determined to any desired accuracy, although - as you mentioned - some are quite sophisticated. I'm reading up on some of these now. It appears that many people in the almost 400 years since the discovery of Kepler's equation have experienced similar frustrations trying to solve it; glad to know I'm not alone :) I also realise that integration algorithms are only approximations, and that even very good, higher-order integrators can return invalid results if you do not use them wisely (i.e. by picking an unreasonably small or large time step).

 

I do have to mention that there are other considerations than the ones you mention: Implementing a high-accuracy Kepler orbit only makes sense if you desire a quantitatively correct simulation of an orbit, i.e. you want to know the near-exact position of some planet at time t. If, on the other hand, you want a qualitatively plausible simulation, I would still think that even lower-order integrators like the 2nd order velocity Verlet are preferable to the Kepler method. They are much simpler to implement and allow for many-body simulations with a large number of interacting bodies due to low computational cost. Despite being inaccurate, such an implementation can still produce a plausible simulation which conserves energy and momentum, and at the same time illustrates how many-body systems evolve and change.

 

Cheers,

Mike


Ok, I tested a number of different algorithms designed to find E and - strangely enough - the more than 300-year-old Newton-Raphson iterative method appears to be the most robust one.

 

In pseudocode, this is what it looks like:

 

' initial guess (Danby-style starter)
E = M + (Sign(M) - (M / Pi)) * (ecc / 2)
i = 0
Do
    i += 1
    Eold = E
    ' Newton-Raphson step for f(E) = E - ecc*Sin(E) - M
    E = Eold + (M + ecc * Sin(Eold) - Eold) / (1 - ecc * Cos(Eold))
Loop While Abs(E - Eold) > 1e-16 And i < 10

 

where E is eccentric anomaly, ecc is eccentricity, M is mean anomaly, and i is number of iterations. Even for eccentricities near 1 and mean anomalies near 0, it converges in less than 10 iterations.

 

cheers,

Mike


Try a naive implementation of a Newton Raphson iterator with an eccentricity of 0.999, M=1.999*pi, E0=0. This diverges.

 

That said, there are some easy ways to make a Newton-Raphson iterator extremely robust. Restrict the mean anomaly to [-pi,pi] and use the mean anomaly as the initial guess. Finally, if some step either changes the sign of E or makes abs(E)>pi, use the fixed point iterator E=M+e*sin(E) in lieu of the value found by Newton-Raphson iteration. (The fixed point iterator is guaranteed to converge, but slowly.)
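
A rough Python sketch of that safeguard, assuming the mean anomaly has already been wrapped into [-pi, pi] (the names are just illustrative):

import math

def kepler_E_robust(M, e, tol=1e-14, max_iter=100):
    E = M                                   # mean anomaly as the initial guess
    for _ in range(max_iter):
        # tentative Newton-Raphson step for f(E) = E - e*sin(E) - M
        E_new = E + (M + e * math.sin(E) - E) / (1.0 - e * math.cos(E))
        # if the step flips the sign of E or leaves [-pi, pi],
        # fall back to the slow but convergent fixed-point step
        if abs(E_new) > math.pi or E_new * E < 0.0:
            E_new = M + e * math.sin(E)
        if abs(E_new - E) < tol:
            return E_new
        E = E_new
    return E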

 

The number of papers written on different techniques for solving Kepler's equation is immense. It is quite amazing that good old Newton's method remains right at the top of the heap.


Try a naive implementation of a Newton Raphson iterator with an eccentricity of 0.999, M=1.999*pi, E0=0. This diverges.

Ok, I gave it a try. For the following values: E0 = 0, eccentricity <= 0.99999, M <= 1.99999*pi, the iterator converges to a value of E < 2*pi, but it goes haywire beyond that. For E0 = M, on the other hand, it behaves perfectly for the highest values possible with double-precision variables and converges in <= 34 iterations. I'd say this is as much a lesson in the importance of picking the right starting value as in the limits of the iterator.

 

That said, there are some easy ways to make a Newton Raphson iterator extremely robust. Restrict the mean anomaly to [-pi,pi], use the mean anomaly as the initial guess.

Yes, I noticed strange behaviour with values of M > 2*pi, got that fixed.

 

Finally, if some step either changes the sign of E or makes abs(E)>pi, use the fixed point iterator E=M+e*sin(E) in lieu of that found by Newton-Raphson iteration. (The fixed point iterator is guaranteed to converge, but slowly.)

Please help me understand this right. If the sign of En+1 is different from En, or abs(En+1) > pi, then I should throw away the value of En+1 and continue iterating from En with the fixed point iterative method En+1 = M + e * sin(En)? Am I supposed to continue using the slower iterator until it converges or switch back to Newton-Raphson when the above conditions are met?

 

Cheers,

Mike


Please help me understand this right. If the sign of En+1 is different from En, or abs(En+1) > pi, then I should throw away the value of En+1 and continue iterating from En with the fixed point iterative method En+1 = M + e * sin(En)? Am I supposed to continue using the slower iterator until it converges or switch back to Newton-Raphson when the above conditions are met?

 

Cheers,

Mike

First, forget what I said about using the fixed point iterator. It is agonizingly slow for large eccentricities.

 

 

Let's step back just a bit. We are trying to find a zero of [math]f(E;e,M)=E-e\sin E - M[/math]. Without any loss of generality, we can restrict the mean anomaly to a range of 2*pi, [0,2*pi] for example, or [-pi,pi]. We can do even better by noting that [math]f(-E;e,-M) = -f(E;e,M)[/math]. This means we can reduce M to [-pi,pi] and then further reduce it to [0,pi] by negating negative values of M; simply negate the resulting E when the mean anomaly reduced to [-pi,pi] was negative. Next, note that for M in (0,pi), [math]f(0)=-M<0[/math] and [math]f(\pi)=\pi-M>0[/math]. From this it can be seen that 0 and pi "bracket" the solution.

 

The bisection method is a very safe but rather slow approach to finding the root of some equation. The basic idea of the bisection method is to always maintain a pair of values that bracket the solution. The method is called bisection because each step shrinks the size of the search window by half. Here's how the method works: To find a root of some function f(x), you first need to find a pair of values x0 and x1 such that f(x0) is negative and f(x1) is positive. You have just bracketed the solution. In general, if the function f(x) is continuous, at least one root is guaranteed to exist somewhere between x0 and x1. Next evaluate the function at the midpoint between x0 and x1, x2=(x0+x1)/2. One of three cases will apply:

  • f(x2) is equal to zero, within some tolerance. You have just solved the problem.
  • f(x2) is negative. In this case you now know that a root exists somewhere between x2 and x1. You have just halved the size of the search window.
  • f(x2) is positive. In this case you now know that a root exists somewhere between x0 and x2. You have once again halved the size of the search window.

 

The bisection method can be used in conjunction with a potentially faster but also potentially non-convergent technique such as Newton-Raphson. The basic idea: Always maintain a bracketed solution. Use the faster iterator to generate a tentative next guess. If that tentative next guess lies within the bracketed region, use it. Otherwise reject it and use the safe but slow midpoint of the bracketed region as the next guess. You can use this approach to take advantage of more than two techniques. For example, Brent's method tries inverse quadratic interpolation first, then the secant method, and finally bisection as the safe backup plan.
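
In Python, that "keep a bracket, try the fast step, bisect if it misbehaves" idea might look roughly like this for Kepler's equation, assuming M has already been reduced to [0, pi] as described above (names are illustrative only):

import math

def kepler_E_bracketed(M, e, tol=1e-14, max_iter=100):
    # f(E) = E - e*sin(E) - M is monotonically increasing for e < 1,
    # and f(0) <= 0 <= f(pi) for M in [0, pi], so [0, pi] brackets the root.
    f = lambda E: E - e * math.sin(E) - M
    lo, hi = 0.0, math.pi
    E = M                                   # initial guess
    for _ in range(max_iter):
        fE = f(E)
        if abs(fE) < tol:
            return E
        if fE < 0.0:                        # root lies above E
            lo = E
        else:                               # root lies below E
            hi = E
        E_new = E - fE / (1.0 - e * math.cos(E))   # tentative Newton step
        if not (lo < E_new < hi):           # outside the bracket: bisect instead
            E_new = 0.5 * (lo + hi)
        E = E_new
    return E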

 

With regard to Kepler's equation, there is a method that is faster than Newton-Raphson, but is also even more fragile than Newton-Raphson. This is Halley's method, named after Edmond Halley of Halley's Comet fame:

 

[math]x_{n+1} = x_n - \frac{f(x_n)f'(x_n)}{f'(x_n)^2 - f(x_n)f''(x_n)/2}[/math]

 

While Newton-Raphson yields quadratic convergence for points sufficiently close to the root, Halley's method yields cubic convergence. It can be very, very fast.
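
Applied to Kepler's equation, where f(E) = E - e sin E - M, f'(E) = 1 - e cos E and f''(E) = e sin E, a single Halley step could be sketched in Python as follows (illustrative only):

import math

def halley_step(E, M, e):
    # one Halley iteration toward the root of f(E) = E - e*sin(E) - M
    f = E - e * math.sin(E) - M
    fp = 1.0 - e * math.cos(E)              # f'(E)
    fpp = e * math.sin(E)                   # f''(E)
    return E - (f * fp) / (fp * fp - 0.5 * f * fpp)

Iterating this from a reasonable starting value (such as E = M) until the change drops below some tolerance plays the same role as the Newton-Raphson loop earlier in the thread.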


After implementing a number of more or less sophisticated iterators, Newton-Raphson and Halley's method have clearly come out on top, especially when using Danby's initial guess E = M + ( sign(M) - (M / Pi) ) * (e/2). I find it interesting that both methods are of considerable age. So far I think I'll stick to these, but sooner or later I'll have to look at the bisection method as well; thanks for describing it in such a straightforward way.

 

Also, I couldn't help making a little experiment. I made an orbit simulation which is a sort of intermediate between Kepler's and Newton's ways. I describe the movement of the orbiting body with angular velocity and calculate movement based on conservation of angular momentum. Angular velocity is a function of distance, since w = L/(m*r^2), where w is angular velocity, L is the constant angular momentum, m is the orbiting body's mass, and r is distance. True anomaly at time t is simply found by integrating over time: v(t+dt) = v(t) + w(t)*dt, where v is true anomaly and dt is delta time. The new distance can be found from v, since r = a*(1-e^2)/(1+e*cos v), where a is the semimajor axis and e is the eccentricity. Finally, the cartesian position can be found from distance and angle. Although very simple and computationally cheap, this method conserves energy and momentum perfectly and prevents the orbit from "drifting" off like other simulations based on integration algorithms. I think it has some potential, but it can probably be improved and there may be some undiscovered issues.
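
In Python-like code, one time step of this scheme looks roughly like the following (a minimal sketch; the names are just for illustration):

import math

def step(nu, L, m, a, e, dt):
    # angular velocity from conservation of angular momentum, w = L/(m*r^2)
    r = a * (1.0 - e * e) / (1.0 + e * math.cos(nu))
    w = L / (m * r * r)
    nu = nu + w * dt                        # advance the true anomaly
    # new distance from the conic equation, then cartesian position
    r = a * (1.0 - e * e) / (1.0 + e * math.cos(nu))
    x, y = r * math.cos(nu), r * math.sin(nu)
    return nu, x, y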

 

cheers,

Mike

