Numerical differentiation with several variables


I am trying to teach myself numerical analysis, and I have come to numerical differentiation.

 

I understand how this works when there is only one independent variable.

 

Now suppose I have functions of two or more variables.

 

How can I calculate the partial derivatives and the gradient numerically?


[math] \frac{\partial}{\partial x} f(x,y) \approx \frac{f(x+\Delta x, y) - f(x,y)}{\Delta x} [/math], as the simplest example. More sophisticated methods can be applied just as in the 1D case; the idea of keeping constant all variables except the one you differentiate with respect to remains. The gradient then follows from the partial derivatives just as in the analytical case.
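
For instance, a minimal sketch of that idea in plain Python/numpy (the helper name, the example function and the step size are just placeholders for illustration):

import numpy as np

def numerical_gradient(f, x, y, h=1e-6):
    """Forward-difference approximation of the gradient of f(x, y)."""
    dfdx = (f(x + h, y) - f(x, y)) / h   # partial derivative with respect to x
    dfdy = (f(x, y + h) - f(x, y)) / h   # partial derivative with respect to y
    return np.array([dfdx, dfdy])

# example: gradient of cos(x)*cos(y) at the point (1, 2)
print(numerical_gradient(lambda x, y: np.cos(x)*np.cos(y), 1.0, 2.0))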


Nice. Think I'll stick to dfdx2 = (f(x1,x2+h)-f(x1,x2-h))/(2*h).

Yep, it's just the next-better approximation (the central difference).

 

Do you know of a code example of this?

No, but writing one is not too tedious, so:

from scipy.xplt import *  # plotting only; comment out if not available
from numpy import cos, sin, arange  # needed for the functions and grids below

class fR2toR:
   """An f: R²->R, with partial derivatives. """
   def value(self,x,y):
       """f(x,y)"""
       return cos(x)*cos(y)
   def dx(self, x,y):
       """df(x,y)/dx"""
       return -sin(x)*cos(y)
   def dy(self, x,y):
       """df(x,y)/dy"""
       return -cos(x)*sin(y)


class ForwardStepDifferentiation:
   """Simple forward step."""
   def __init__(self,delta):
       """ init with stepsize delta."""
       self.delta = delta
   def dx(self,f,x,y):
       """partial diff of f wrt. to x."""
       return (f.value(x+self.delta,y)-f.value(x,y))/self.delta
   def dy(self,f,x,y):
       """partial diff of f wrt. to y."""
       return (f.value(x,y+self.delta)-f.value(x,y))/self.delta

class CentralDifferentiation:
   """1st order taking half a step in both directions."""
   def __init__(self,delta):
       """ init with stepsize delta."""
       self.delta = delta
   def dx(self,f,x,y):
       """partial diff of f wrt. to x."""
       return (f.value(x+self.delta/2.,y)-f.value(x-self.delta/2.,y))/self.delta
   def dy(self,f,x,y):
       """partial diff of f wrt. to y."""
       return (f.value(x,y+self.delta/2.)-f.value(x,y-self.delta/2.))/self.delta

class ExtrapolatedDifferentiation:
   """ 'Extrapolated' (Richardson) differentiation: combines two central differences."""
   def __init__(self,delta):
       """ init with stepsize delta."""
       self.delta = delta
   def dx(self,f,x,y):
       """partial diff of f with respect to x."""
       # (4*D(delta/2) - D(delta))/3 cancels the leading error term of the central difference
       c1 = CentralDifferentiation(self.delta/2.)
       c2 = CentralDifferentiation(self.delta)
       return (4*c1.dx(f,x,y)-c2.dx(f,x,y))/3.
   def dy(self,f,x,y):
       """partial diff of f with respect to y."""
       c1 = CentralDifferentiation(self.delta/2.)
       c2 = CentralDifferentiation(self.delta)
       return (4*c1.dy(f,x,y)-c2.dy(f,x,y))/3.


def getSqError(f,method):
   """ Summed absolute error of the squared gradient magnitude over the grid [0,10)x[0,10). """
   error = 0.
   for x in arange(100)*0.1:
       y = arange(100)*0.1                                  # a whole row of y values at once
       gradAna = f.dx(x,y)**2+f.dy(x,y)**2                  # squared magnitude of the analytical gradient
       gradNum = method.dx(f,x,y)**2 + method.dy(f,x,y)**2  # squared magnitude of the numerical gradient
       for i in arange(100):
           error += abs(gradAna[i]-gradNum[i])
   return error

def plotError(f,method,windowNum):
   """ plot error over stepsize (needs scipy.xplt for window() and plg()). """
   window(windowNum)
   x = 1.*arange(20)                  # step sizes
   y = 1.*arange(20)                  # corresponding errors
   for i in arange(20)+2:
       x[i-2] = 1./i                  # step sizes 1/2, 1/3, ..., 1/21
       method.delta = x[i-2]
       y[i-2] = getSqError(f,method)
   plg(y,x)


f = fR2toR()     # test function
m1 = ForwardStepDifferentiation(.1)
m2 = CentralDifferentiation(.1)
m3 = ExtrapolatedDifferentiation(.1)

print "Some simple test, no plots"
print "Error 1= ",getSqError(f,m1)
print "Error 2= ",getSqError(f,m2)
print "Error 3= ",getSqError(f,m3)

print "Plots over stepsize; comment out if scipy.xplt not available"
plotError(f,m1,1)
plotError(f,m2,2)
plotError(f,m3,3)

raw_input("Press <enter> to ... errr ... leave.")
winkill()

The super-low error of the 3rd method is a bit strange, though.
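
Though it may simply be the extrapolation doing its job: combining the two central differences as (4*D(delta/2) - D(delta))/3 cancels the leading error term, so the remaining error falls off much faster with the step size. A quick self-contained 1-D check (a sketch; the test point and step sizes are arbitrary):

from numpy import cos, sin

x, exact = 1.0, -sin(1.0)            # derivative of cos at x = 1.0

def central(h):
    """Central difference with half-steps, like CentralDifferentiation above."""
    return (cos(x + h/2.) - cos(x - h/2.))/h

for h in (0.1, 0.05, 0.025):
    extrap = (4.*central(h/2.) - central(h))/3.   # same combination as ExtrapolatedDifferentiation
    # the central error shrinks roughly 4x per halving of h, the extrapolated one roughly 16x
    print("h=%.3f  central err %.1e  extrapolated err %.1e"
          % (h, abs(central(h) - exact), abs(extrap - exact)))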


Thanks for the code! I will try it when I get to a computer that has Python.

 

I am looking into DFP for optimization, where the search direction is based purely on information from the (negative) gradient.

With [math]s_k = H_k \cdot (-\nabla f(x_k))[/math]

where [math]s_k[/math] is the search direction and [math]H_k[/math] the (approximate inverse) Hessian, how can I obtain the gradient at a single point?

Just by using your functions and evaluating at a few more points (around x_k +/- delta)?


The gradient effectively is just the collection of partial derivatives with respect to the different parameters x_k: [math]\nabla = \left(\frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \dots \right)[/math]. Or, akin to the notation in the code above: grad = (dx, dy, ...). Note that in the code above, the variables gradNum and gradAna are the squared magnitudes of the gradient (taking the square of the difference vector rather than the difference of the squares would probably give a better error estimate, btw). So initially there is no reason to evaluate additional points. However, you might need to do that for the Hessian matrix, depending on your implementation.
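
Concretely, a minimal sketch using the classes from the code above (the step size, the evaluation point and the identity placeholder for H_k are just assumptions for illustration, not the actual DFP update):

from numpy import array, dot, eye

f = fR2toR()                        # test function defined above
d = CentralDifferentiation(1e-4)    # small step for the partial derivatives

def grad(f, x, y):
    """Numerical gradient of f at a single point (x, y)."""
    return array([d.dx(f, x, y), d.dy(f, x, y)])

x_k = (1.0, 2.0)                    # arbitrary evaluation point
H_k = eye(2)                        # placeholder matrix; DFP would update this iteratively
s_k = dot(H_k, -grad(f, *x_k))      # search direction s_k = H_k * (-grad f(x_k))
print(s_k)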
