On Generalisations of well-known Differential Equations on the line to superlines


ajb


I thought it might be fun to show you some elementary aspects of superanalysis, and in particular how we might generalise two well-known differential equations on the line to superlines.

 

First, what is a superline, [math]\mathbb{R}^{1|m}[/math]? We will define the superline as the space with local coordinates [math]\{t, \theta^{\alpha}\}[/math], where [math]\alpha[/math] runs from [math]1[/math] to [math]m[/math] and [math]\theta^{\alpha}\theta^{\beta} = - \theta^{\beta} \theta^{\alpha}[/math]. In particular, [math]\theta^{\alpha}\theta^{\alpha}=0[/math] (no sum on [math]\alpha[/math]). As we will be doing some elementary analysis, we won't worry about coordinate transformations or anything like that.

 

Functions on the superline are

 

[math]f(t,\theta) = f_{0}(t) + f_{\alpha}(t) \theta^{\alpha} + f_{\alpha \beta}(t)\theta^{\beta}\theta^{\alpha} + \cdots + f_{\alpha_{1}\alpha_{2} \cdots \alpha_{m}}(t)\theta^{\alpha_{m}} \cdots \theta^{\alpha_{1}}[/math].

 

Here we have used the Einstein summation convention, and the [math]f[/math]'s are smooth functions of [math]t[/math]. In what follows I will only add one anticommuting coordinate. Functions split into two parts: the even part, defined by an even number of [math]\theta[/math]'s, and the odd part, defined by an odd number of [math]\theta[/math]'s.
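With a single odd coordinate, a superfunction is just the pair [math](f_{0}, f_{1})[/math], and the bookkeeping can be mimicked with ordinary computer algebra. Here is a minimal sketch using sympy (the SuperFn class is a made-up helper for this post, not a library class):

```python
import sympy as sp

t = sp.symbols('t')

class SuperFn:
    """f(t, theta) = f0(t) + theta*f1(t), stored as the pair (f0, f1)."""
    def __init__(self, f0, f1):
        self.f0, self.f1 = sp.sympify(f0), sp.sympify(f1)

    def __mul__(self, other):
        # (f0 + theta f1)(g0 + theta g1) = f0 g0 + theta (f0 g1 + f1 g0);
        # the f1 g1 term dies because theta^2 = 0.
        return SuperFn(self.f0 * other.f0,
                       self.f0 * other.f1 + self.f1 * other.f0)

    def d_t(self):
        # d/dt acts componentwise
        return SuperFn(sp.diff(self.f0, t), sp.diff(self.f1, t))

    def d_theta(self):
        # d/dtheta kills f0 and strips theta off the odd part
        return SuperFn(self.f1, 0)

# example: (1 + theta*t)(t + theta) = t + theta*(1 + t^2)
p = SuperFn(1, t) * SuperFn(t, 1)
print(p.f0, p.f1)
```

The even and odd parts are just the two slots of the pair, so the grading takes care of itself.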

 

As the extra coordinates [math]\theta[/math] anticommute, so do the corresponding derivatives. That is,

 

[math]\frac{\partial}{\partial \theta^{\alpha}}\frac{\partial}{\partial \theta^{\beta}} = - \frac{\partial}{\partial \theta^{\beta}}\frac{\partial}{\partial \theta^{\alpha}}[/math].

 

We will for now not worry about integration.

 

First, let's consider the line [math]\mathbb{R} = \mathbb{R}^{1|0}[/math]. The only coordinate here is [math]t[/math]. Now let's consider the elementary differential equation

 

1) [math]\frac{\partial f(t)}{\partial t } = f(t)[/math].

 

In words: "what function do you differentiate to get the same function?" The answer, as you know, is the exponential function:

 

[math]f(t) = A e^{t}[/math] with [math]A \in \mathbb{R}[/math].

 

What can we say about [math]\mathbb{R}^{1|1}[/math]? We give this the obvious coordinates [math]\{t, \theta \}[/math].

 

Claim: the most natural generalisation of differential equation 1) to [math]\mathbb{R}^{1|1}[/math] is

 

2) [math]\left(\frac{\partial }{\partial t} + \theta \frac{\partial }{\partial \theta}\right) f(t,\theta) = f(t,\theta)[/math].

 

The reason is that this is a first-order differential equation that is even. You could easily add other factors and signs if you wished.

 

What is the solution?

 

Claim: the solution is [math]f(t,\theta) = Ae^{t} + \theta B[/math] with [math]A, B \in \mathbb{R}[/math].

 

Proof: left as an exercise for the reader. Hint: it is very easy to verify if you remember that [math]f(t,\theta) = f_{0}(t) + \theta f_{1}(t)[/math].
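If you want a machine to do the verification, split equation 2) into its even and odd components and check each with sympy (a sketch; the split is the expansion [math]f = f_{0} + \theta f_{1}[/math] from above):

```python
import sympy as sp

t, A, B = sp.symbols('t A B')

# claimed solution: f0 = A*exp(t), f1 = B
f0, f1 = A * sp.exp(t), B

# (d/dt + theta*d/dtheta) f = f splits as:
#   even part: f0'      = f0
#   odd  part: f1' + f1 = f1   (theta*d/dtheta just returns theta*f1)
even_lhs = sp.diff(f0, t)
odd_lhs = sp.diff(f1, t) + f1

print(sp.simplify(even_lhs - f0), sp.simplify(odd_lhs - f1))  # both 0
```

The odd component equation reduces to [math]f_{1}' = 0[/math], which is exactly why the [math]\theta[/math] part of the solution is a constant.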

 

So we see that adding an extra anticommuting coordinate does not change the solution very much apart from the (expected) linear part in [math]\theta[/math].

 

Now let's consider the Poisson equation on [math]\mathbb{R}[/math].

 

3) [math]\Delta f(t) = g(t)[/math]

 

where [math]\Delta = \frac{\partial^{2}}{\partial t^{2}}[/math]. We are given [math]g(t)[/math].

 

Assuming that we can integrate [math]g(t)[/math] at least twice we get

 

[math]f(t) = C_{1} + t C_{2} + \int dt \left(\int dt g(t) \right)[/math]

 

as the general solution. Just take the derivative twice and you will see this is correct.
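As a quick sanity check in sympy (the source [math]g(t) = \cos t[/math] is an arbitrary choice; anything you can integrate twice works):

```python
import sympy as sp

t, C1, C2 = sp.symbols('t C1 C2')

g = sp.cos(t)  # arbitrary example source

# general solution: f = C1 + t*C2 + double indefinite integral of g
f = C1 + t * C2 + sp.integrate(sp.integrate(g, t), t)

print(sp.simplify(sp.diff(f, t, 2) - g))  # 0
```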

 

Now on to [math]\mathbb{R}^{1|1}[/math]. What is the analogue of the Laplacian here? We are looking for a second-order operator built from just the derivatives. Note that the second derivative with respect to [math]\theta[/math] vanishes identically. Our only non-trivial choice is

 

[math]\Delta = \frac{\partial^{2}}{\partial t \partial \theta}[/math].

 

However, we must note that this is an odd operator, i.e. it takes even functions to odd ones and vice versa, and that it squares to zero, [math]\Delta^{2}=0[/math]. (This operator is very important in quantum field theory, but we won't go into that here.)

 

The corresponding Poisson equation is

 

4) [math]\frac{\partial^{2}f(t,\theta)}{\partial t \partial \theta} = g(t)[/math].

 

Note that we have [math]g(t)[/math] and not [math]g(t,\theta)[/math]. This is because the even part of [math]f(t,\theta)[/math] is killed by [math]\frac{\partial}{\partial \theta}[/math], so [math]\Delta f = \frac{\partial f_{1}}{\partial t}[/math] is a function of [math]t[/math] alone.

 

Claim: the solution to 4) is

 

[math] f(t, \theta) = f_{0}(t) + \theta \left( \int dt g(t) + C\right)[/math].

 

Proof: left to the reader.
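Again the check is one line once the components are written out; a sketch with sympy, taking [math]g(t) = e^{-t^{2}}[/math] purely as an example:

```python
import sympy as sp

t, C = sp.symbols('t C')

g = sp.exp(-t**2)  # arbitrary example source

# odd part of the claimed solution: f1 = Integral(g) + C; the even part
# f0(t) stays completely arbitrary, since d/dtheta kills it.
f1 = sp.integrate(g, t) + C

# Delta f = d/dt d/dtheta (f0 + theta*f1) = f1', which must equal g
print(sp.simplify(sp.diff(f1, t) - g))  # 0
```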

 

Note that in this case [math]f_{0}(t)[/math] is an arbitrary function of [math]t[/math] and not simply linear as on [math]\mathbb{R}[/math].

 

Anyway, that will do for now. It is just a flavour of what can be done with anticommuting coordinates. I may post other interesting cases as and when I find them.


I'm not sure I understand all the symbols you're using, but I'll try and keep what you've shown here in mind to maybe look at later. As a tool, do these superlines actually make the task of solving differential equations any easier, or are they simply a way of giving another perspective on these types of problems? Then again, this stuff may be well beyond me as I'm having enough trouble trying to understand canonical transformations and the Hamilton-Jacobi equation with my revision...


It is true that sometimes the "superversion" can be a lot easier to solve, and sometimes this allows you to get at the purely even solutions.

 

The motivation for me comes from physics. If we wish to have a theory of semi-classical fermions then we need to understand spaces (manifolds) with anticommuting coordinates. Also, anticommuting coordinates arise naturally in gauge theories as the FP ghosts and in the BV formalism.

 

My post really is just the outcome of me thinking about the simplest generalisation of the Poisson equation to a supermanifold. The simplest supermanifold being [math]\mathbb{R}^{1|1}[/math] (well, [math]\mathbb{R}^{0|1}[/math] is really the simplest, but we could never get beyond one derivative).

 

As to whether it is of any use, maybe. As I said in the original post, the odd Laplacian is fundamental in perturbative quantum field theory. I would have to think about generalising the Poisson equation to higher-dimensional supermanifolds first. There is, however, one slight issue here: the odd Laplacian is not invariantly formulated unless it acts on semidensities. But I think that is a story for another time.

 

Here is another one to think about.

 

Consider on [math]\mathbb{R}[/math] the differential equation

 

[math]\frac{d y(t)}{dt}+ f(t)y(t)=0[/math] given [math]f(t)[/math].

 

Most of you will know that the solution is

 

[math]y(t) = A e^{- \int dt f(t)}[/math].

 

Now what about [math]\mathbb{R}^{0|1}[/math]? Clearly, the corresponding differential equation is

 

[math]\frac{d y(\theta)}{d \theta}+ f(\theta)y(\theta) = 0[/math], where again we are given [math]f(\theta)[/math].

 

Claim: for generic [math]f(\theta)[/math], the only solution is [math]y(\theta) = 0 (= 0 + 0 \theta)[/math].

 

Proof: expand out [math]y(\theta) = y_{0} + y_{1}\theta[/math] and similarly for [math]f(\theta)[/math]. We then get the algebraic equations

 

[math] y_{1}f_{0} + y_{0}f_{1}=0[/math]

[math]y_{1} + y_{0} f_{0}= 0 [/math]

 

The solution of which is [math] y_{0} = y_{1} = 0[/math], provided [math]f_{0}^{2} \neq f_{1}[/math] (the determinant of this linear system is [math]f_{0}^{2} - f_{1}[/math]); in the degenerate case [math]f_{1} = f_{0}^{2}[/math] there is a one-parameter family [math]y_{1} = -f_{0}y_{0}[/math].
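The linear algebra can be made explicit in sympy (a sketch; the matrix below acts on the coefficient vector [math](y_{0}, y_{1})[/math]):

```python
import sympy as sp

f0, f1 = sp.symbols('f0 f1')

# the even and odd component equations, as a matrix acting on (y0, y1):
#   even part: f0*y0 + 1*y1  = 0
#   odd  part: f1*y0 + f0*y1 = 0
M = sp.Matrix([[f0, 1],
               [f1, f0]])

print(M.det())  # f0**2 - f1: generically nonzero, forcing y0 = y1 = 0
```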

 

;)

