
Grassmann variables for the Hamilton cycle problem?


Duda Jarek


While determining the existence of an Euler cycle (going once through each edge) of a given graph is trivial, deciding the existence of a Hamilton cycle (going once through each vertex) is an NP-complete problem (e.g. worth a million dollars as part of the P vs NP question).
Denoting the adjacency matrix of this graph by [math]A[/math] and its number of vertices by [math]n[/math], the diagonal elements of [math]A^n[/math] count among others the Hamilton cycles – the problem is that they also count all the other closed walks of length [math]n[/math], which go through some vertex more than once.
The question is whether we could somehow "subtract" the walks going multiple times through some vertex ...
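
For example, a quick numerical check of this counting (a minimal Python sketch; the undirected 4-cycle used here is just an illustrative choice):
[code]
import numpy as np

# adjacency matrix of the undirected 4-cycle 0-1-2-3-0
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])

n = A.shape[0]
An = np.linalg.matrix_power(A, n)

# each diagonal entry counts ALL closed walks of length n from that vertex:
# the 2 Hamiltonian ones (one per direction), but also walks like 0-1-0-1-0
print(np.diag(An))   # [8 8 8 8]
print(np.trace(An))  # 32, although only 8 of these walks are Hamilton cycles
[/code]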

Grassmann variables (anticommuting), used in physics to describe fermions, seem perfect for this purpose. Assume we have Grassmann variables [math]g_1, \ldots, g_n[/math]: [math]g_i g_j = - g_j g_i[/math], which also implies [math]g_i g_i = 0[/math]. So a product of [math]n[/math] such variables is nonzero iff it contains all the indexes (vertices).
Let [math]G = diag(g_1, \ldots, g_n)[/math] be the diagonal [math]n \times n[/math] matrix made of these variables. It is now easy to see that:

The graph [math]A[/math] contains a Hamilton cycle iff [math]Tr((AG)^n) \neq 0[/math].
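
To make the formula concrete, here is a brute-force Python sketch (all helper names – gmul, poly_mul, mat_mul, hamilton_trace – are mine, not from any library). A Grassmann monomial is stored as a sorted tuple of generator indexes, a polynomial as a dict from monomials to integer coefficients. Note that the trace is a signed sum over Hamilton cycles (so a nonzero value certifies existence), and that time and memory grow exponentially in n, in line with the discussion below:
[code]
def gmul(m1, m2):
    """Product of two Grassmann monomials (sorted index tuples).
    Returns (sign, merged tuple), or (0, None) if a generator
    repeats, since g_i g_i = 0."""
    if set(m1) & set(m2):
        return 0, None
    sign, merged = 1, list(m1)
    for x in m2:  # m2 is sorted, so earlier elements of m2 are smaller
        # x anticommutes past every larger generator already present
        if sum(1 for y in merged if y > x) % 2:
            sign = -sign
        merged.append(x)
    return sign, tuple(sorted(merged))

def poly_mul(p, q):
    """Product of Grassmann polynomials: dicts monomial -> coefficient."""
    out = {}
    for ma, ca in p.items():
        for mb, cb in q.items():
            s, m = gmul(ma, mb)
            if s:
                out[m] = out.get(m, 0) + s * ca * cb
    return {m: c for m, c in out.items() if c}

def mat_mul(X, Y):
    """Product of matrices whose entries are Grassmann polynomials."""
    n = len(X)
    Z = [[{} for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = {}
            for k in range(n):
                for m, c in poly_mul(X[i][k], Y[k][j]).items():
                    acc[m] = acc.get(m, 0) + c
            Z[i][j] = {m: c for m, c in acc.items() if c}
    return Z

def hamilton_trace(A):
    """Coefficient of g_0...g_{n-1} in Tr((AG)^n): a signed sum over
    the Hamilton cycles of A, so nonzero => a Hamilton cycle exists
    (contributions of opposite sign can in principle cancel)."""
    n = len(A)
    AG = [[{(j,): A[i][j]} if A[i][j] else {} for j in range(n)]
          for i in range(n)]
    M = AG
    for _ in range(n - 1):
        M = mat_mul(M, AG)  # M becomes (AG)^n
    full = tuple(range(n))
    return sum(M[i][i].get(full, 0) for i in range(n))

# directed triangle 0 -> 1 -> 2 -> 0: exactly one directed Hamilton cycle
A = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]
print(hamilton_trace(A))  # 3: one contribution of +1 per starting vertex
[/code]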

Grassmann variables can be realized by matrices – so we could view this formula in terms of block matrices ... unfortunately the known realizations require e.g. [math]2^n[/math]-size matrices, which is not helpful – we only get an implication:

If P != NP, then there is no polynomial-size matrix realization of Grassmann variables.

So probably these realizations just require exponentially large matrices, which seems plausible.
We could easily write down [math](AG)^{-1}[/math], so maybe there is a way to quickly find [math](1-AG)^{-1}=\sum_{i=0}^n (AG)^i[/math], which should be sufficient?
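
To see why this geometric series terminates: every entry of [math](AG)^{n+1}[/math] is a combination of products of [math]n+1[/math] generators, so in each term some generator repeats and the term vanishes. Hence [math](AG)^{n+1} = 0[/math] and
[math](1-AG)\sum_{i=0}^n (AG)^i = 1 - (AG)^{n+1} = 1.[/math]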

Any thoughts?


My initial worry is that you have to take care when defining matrices over superalgebras. Are you sure that AG is invertible? Do you have an even invertible supermatrix or not? (I have not looked at this closely; this is just what springs to mind.)


Indeed, you would need [math]g_i^{-1}[/math] for such a direct inverse, e.g. [math](AG)^{-1} = diag(g_i^{-1}) A^{-1}[/math].

However, I think having a quick (polynomial) way to find the determinant of such matrices should be sufficient (no inverse needed).

I was fighting with Gaussian elimination, but terms with all combinations of generators can appear – their number grows exponentially ...


Indeed, you would need [math]g_i^{-1}[/math] for such a direct inverse, e.g. [math](AG)^{-1} = diag(g_i^{-1}) A^{-1}[/math].

If g is an odd element of a Grassmann algebra, say one of the chosen generators, then there is no inverse. The Grassmann algebra is a ring, not a field – we do not have a multiplicative inverse for every non-zero element.
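
A one-line check of this: any odd [math]g[/math] satisfies [math]g^2 = 0[/math], so if [math]gx = 1[/math] for some [math]x[/math], then [math]g = g(gx) = g^2 x = 0[/math], a contradiction.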

 

However, I think having a quick (polynomial) way to find the determinant of such matrices should be sufficient (no inverse needed).

Again you will need to be careful with the notion of a determinant here.

 

There is the notion of the Berezinian for even invertible matrices. This cannot be extended to all matrices over a superalgebra.
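
For reference (if I recall the conventions correctly; the block letters here are unrelated to the adjacency matrix above): for an even invertible supermatrix [math]M = \begin{pmatrix} P & Q \\ R & S \end{pmatrix}[/math] with even blocks [math]P, S[/math] and odd blocks [math]Q, R[/math], the Berezinian is [math]Ber(M) = \det(P - QS^{-1}R)\,\det(S)^{-1}[/math]. It requires [math]S[/math] to be invertible, but in return it is multiplicative, [math]Ber(MN) = Ber(M)Ber(N)[/math] – the property a naive determinant of a supermatrix would lack.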

 

For the odd case there is the queer determinant, which is far less well known.


The determinant is just a sum over all permutations of products – I don't see a problem here?

Cramer's formula allows writing the inverse matrix as a rational expression in determinants – which seems sufficient ...
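
For concreteness, the formula I mean (for commuting entries) is [math]\det A = \sum_{\sigma \in S_n} sgn(\sigma)\, a_{1\sigma(1)} \cdots a_{n\sigma(n)}[/math] – though with anticommuting entries the value depends on the chosen order of the factors, which is exactly the sign issue raised in the reply below.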

 

Anyway, finding the determinant still seems to require an exponential number of terms ...

But maybe there is a better way to just find the n-th power of a (Grassmann) matrix ... ?


The determinant is just a sum over all permutations of products – I don't see a problem here?

Cramer's formula allows writing the inverse matrix as a rational expression in determinants – which seems sufficient ...

For one, you may need to include some signs with these permutations – the product is supercommutative, not strictly commutative. Anyway, taking the standard determinant won't have the correct properties of a super-version of a determinant. But then, that might not matter for what you are doing; I still doubt you will get an inverse matrix that way, though.

 

 

Anyway, finding the determinant still seems to require an exponential number of terms ...

But maybe there is a better way to just find the n-th power of a (Grassmann) matrix ... ?

For supermatrices the product is defined just as for standard matrices. You will, though, have to take care with minus signs and nilpotent terms.

