Four-dimensional understanding of quantum computers


Duda Jarek


In the modern view on quantum mechanics, wavefunction collapse is no longer a 'mystical, outside of physics' phenomenon, but is seen as a result of interaction with the environment ('einselection') – there is still some concrete unitary evolution behind it. So there should exist a 'Hamiltonian of the Universe' describing the evolution of everything.

We have a similar situation in (classical!) field theories: for Euler-Lagrange equations (like Klein-Gordon: [math] d_{tt} \psi = \Delta\psi - m^2 \psi [/math]) the evolution operator on the right-hand side is self-adjoint, so it can be diagonalized (spectral theorem). On the coordinate corresponding to eigenvalue [math] \lambda [/math] the evolution is: [math] d_{tt} x = \lambda x [/math].

So this operator should be non-positive, because otherwise some coordinates would explode.

For negative eigenvalues we get unitary evolution – as in quantum mechanics, we can imagine it as a superposition of different eigenfunctions 'rotating' with different speeds. This is why such hyperbolic PDEs are called wave-like.
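A minimal numerical sketch of this diagonalization (my own illustration, with an arbitrary 1D lattice discretization): the eigenvalues of the discretized Klein-Gordon operator are indeed all negative, so each eigencoordinate just oscillates.

[code]
import numpy as np

# Discretize the 1D Klein-Gordon operator  L = Laplacian - m^2
# on a periodic lattice (parameters are arbitrary illustration).
n, m = 64, 1.0
L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
L[0, -1] = L[-1, 0] = 1.0           # periodic boundary conditions
L -= m**2 * np.eye(n)

lam, V = np.linalg.eigh(L)          # self-adjoint -> real spectrum, orthogonal modes
print(lam.max())                    # -1.0: all eigenvalues are <= -m^2 < 0

# On each eigencoordinate d_tt x = lambda * x with lambda < 0 gives
# oscillation with frequency omega = sqrt(-lambda): a superposition of
# eigenfunctions 'rotating' with different speeds.
omega = np.sqrt(-lam)
[/code]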

We have limited knowledge: we cannot fully trace these unitary evolutions – from our perspective they 'lose their coherence':

- we don't/can't know the precise parameters, like initial conditions,

- we cannot fully trace complicated motion (chaos),

- thermodynamically stable states usually have their own internal dynamics, like atomic orbitals or quantum phases.

If we model this lack of knowledge with a proper statistical ensemble among possible scenarios – maximizing uncertainty not locally as in Brownian motion, but globally – we get thermodynamical relaxation to the quantum-mechanical ground-state probability density. These new models also show why, to translate from the amplitude we are working with to a probability, we should take 'the square' ( http://arxiv.org/abs/0910.2724 ).
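A minimal sketch of this 'square' relation, assuming the MERW construction from the linked paper: on a graph with symmetric adjacency matrix M and dominant eigenpair (λ, ψ), the maximal-entropy transition probabilities are S_ij = (M_ij/λ) ψ_j/ψ_i, and the stationary distribution comes out as ψ_i². The graph below is an arbitrary toy example of mine.

[code]
import numpy as np

# Maximal Entropy Random Walk on a small toy graph (graph is my example).
M = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], float)          # symmetric adjacency matrix

lam_all, V = np.linalg.eigh(M)
lam, psi = lam_all[-1], np.abs(V[:, -1])     # dominant eigenvalue / eigenvector

S = (M / lam) * psi[None, :] / psi[:, None]  # S_ij = (M_ij / lam) * psi_j / psi_i
print(S.sum(axis=1))                         # each row sums to 1: a valid stochastic matrix

rho = psi**2 / np.sum(psi**2)                # claimed stationary distribution
print(np.allclose(rho @ S, rho))             # True: probability is the *square* of psi
[/code]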

 

To understand the strength of quantum computers, it should be enough to focus on models with a constant (fixed) number of particles, for which classical field theory suffices.

What is nonintuitive about them is that the natural picture for such Lagrangian mechanics is a 'static 4D' one – particles are not just 'moving points', but rather whole trajectories in spacetime ... let's look at what this gives us for computational capabilities.

 

A quantum algorithm usually looks like this:

- initialize qubits,

- use Hadamard gates to get a superposition of all possible inputs,

- calculate a classical function of the input,

- extract some information from the superposition of results.

Look at the classical function calculation – it has to use reversible gates, like

(x, y, z) -> (x, y, z XOR f(x,y))

These gates are also reversible classically, so we could seemingly just run the whole function calculation backwards on a standard computer.
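As a minimal illustration (the concrete f below is my arbitrary choice), such a gate is literally its own inverse: applying it twice restores the input.

[code]
# Reversible gate (x, y, z) -> (x, y, z XOR f(x, y)).
def f(x, y):                 # any boolean function; AND chosen as an example
    return x & y

def gate(x, y, z):
    return x, y, z ^ f(x, y)

state = (1, 1, 0)
once  = gate(*state)
twice = gate(*once)
print(once, twice == state)  # (1, 1, 1) True: the gate reverses itself
[/code]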

Unfortunately it's not so simple – there is a problem: such reversible calculations usually require quite a large number of auxiliary (q)bits, which have been initialized to zero.

When taking the classical reverse of such a function, we rather cannot enforce that these auxiliary (q)bits come out as zeros – they would usually be just random – so we wouldn't really have calculated what we wished.

If we could, for example, calculate the square of a number modulo N, or the product of two numbers, using a 'small' number of auxiliary bits, we could guess their final values (e.g. randomly) and in a small number of trials reverse such a function (getting all zeros), which would allow us to factorize N – so probably even simple multiplication requires a linear number of auxiliary bits.
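A sketch of that factoring argument – the standard observation that extracting square roots modulo N is as hard as factoring. The 'reversal' below is brute force on a tiny semiprime, purely for illustration:

[code]
from math import gcd

# If we could reverse squaring mod N, we could factor N:
# two essentially different square roots of the same value yield a factor.
N = 7 * 11
x = 29                      # arbitrary number coprime to N
s = x * x % N

# find a different square root of s, i.e. another preimage of the squaring map
for xp in range(2, N):
    if xp * xp % N == s and xp not in (x, N - x):
        print(gcd(x - xp, N))   # 7: a nontrivial factor of N
        break
[/code]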

The strength of quantum computers is that they can 'mount' qubit trajectories in both the past and the future – simultaneously initializing the auxiliary qubits and, using measurement, focusing only on scenarios having the same final value (the measured one).

In the case of Shor's algorithm, we wouldn't even need to know all the scenarios to make the Fourier transform – knowing two would already be enough: if two such powers give the same value modulo N, then y raised to the difference of their exponents gives 1 modulo N.

On the 18th page of my presentation there is a diagram for Shor's algorithm: https://docs.google.com/fileview?id=0B7ppK4IyMhisODI5ZTU4YjYtNmU0MC00ZTM3LTg5MWQtMTJiYTY4MWVkOTJk&hl=en
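For completeness, a classical sketch of the period-finding step, with the quantum subroutine replaced by a brute-force search for two equal powers (N = 15, y = 7 are standard textbook example values):

[code]
from math import gcd

# Classical sketch of Shor's period finding: the quantum subroutine only
# speeds up finding the period r of f(a) = y^a mod N.
N, y = 15, 7
seen = {}
r = None
for a in range(N):
    v = pow(y, a, N)
    if v in seen:                  # two powers with the same value mod N ...
        r = a - seen[v]            # ... differ by the period: y^r = 1 (mod N)
        break
    seen[v] = a

# For even r, y^(r/2) is a nontrivial square root of 1, giving a factor of N:
if r is not None and r % 2 == 0:
    print(gcd(pow(y, r // 2, N) - 1, N))   # 3  (and gcd(... + 1, N) gives 5)
[/code]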

 

For physics it's natural to find the global minimum of action, but simulating such a computer in classical field theory, even after simplifications, would probably still be difficult. Anyway, it suggests that to attack algorithmically difficult problems, we should translate them into continuous ones.

For example, in the 3SAT problem we have to assign values to binary variables so as to fulfill all the clauses – alternatives of triples of these variables or their negations. Observe that x OR y can be changed into minimizing

[math] ((x-1)^2+y^2)((x-1)^2+(y-1)^2)(x^2+(y-1)^2)[/math]

and analogously a product of seven such terms for an alternative of three variables. Finding the global minimum (zero) of the sum of such polynomials over all clauses would solve our problem.
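A minimal sketch of this translation for the two-variable clause (starting point and step size below are my arbitrary choices): the polynomial vanishes exactly on the satisfying assignments, and naive gradient descent finds one of them – though for sums of many clause polynomials, local minima are exactly the difficulty.

[code]
# Polynomial for clause (x OR y): zero exactly on satisfying assignments.
def p_or(x, y):
    return (((x - 1)**2 + y**2)
          * ((x - 1)**2 + (y - 1)**2)
          * (x**2 + (y - 1)**2))

for x in (0, 1):
    for y in (0, 1):
        print((x, y), p_or(x, y))   # nonzero only for (0, 0), the falsifying case

# Naive numeric gradient descent toward a zero (illustration only).
x = y = 0.5
eps, lr = 1e-5, 0.01
for _ in range(2000):
    gx = (p_or(x + eps, y) - p_or(x - eps, y)) / (2 * eps)
    gy = (p_or(x, y + eps) - p_or(x, y - eps)) / (2 * eps)
    x, y = x - lr * gx, y - lr * gy
print(round(x, 2), round(y, 2))     # ~ (1.0, 1.0): a satisfying assignment
[/code]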

I've just found that something like this has apparently been done successfully for a few years – enforcing that there is only one minimum, so that the local gradient shows the way to the solution: http://en.wikipedia.org/wiki/Cooperative_optimization

 

What do you think about it?


After 7 years I have finally written it down: https://arxiv.org/pdf/0910.2724v2.pdf

It also covers other consequences of living in spacetime, like the violation of Bell inequalities.

[Figure: fqc.png]

Schematic diagram of the quantum subroutine of Shor's algorithm for finding prime factors of a natural number N. For a random natural number y<N, it searches for the period r of f(a) = y^a mod N; this period can be concluded from the measurement of the value c after the Quantum Fourier Transform (QFT) and with some large probability (O(1)) allows finding a nontrivial factor of N. The Hadamard gates produce a state being a superposition of all possible values of a. Then the classical function f(a) is applied, getting a superposition of |a> |f(a)>. Due to the necessary reversibility of the applied operations, this calculation of f(a) requires the use of auxiliary qubits, initially prepared as |0>. Now measuring the value of f(a) returns some random value m, and restricts the original superposition to only those a fulfilling f(a) = m. Mathematics ensures that the set {a : f(a) = m} has to be periodic here (y^r \equiv 1 mod N); this period r is concluded from the value of the Fourier transform (QFT). Seeing the above process as a situation in 4D spacetime, qubits become trajectories: state preparation mounts their values (chosen) in the past direction, measurement mounts their values (random) in the future direction. The superiority of this quantum subroutine comes from the future-past propagation of information (tension) caused by restricting the original ensemble in the first measurement.


Thanks, I would gladly discuss.

The main part of the paper is MERW ( https://en.wikipedia.org/wiki/Maximal_Entropy_Random_Walk ), showing why standard diffusion has failed (e.g. predicting that a semiconductor is a conductor): because it used only an approximation of the (Jaynes) principle of maximum entropy (required by statistical physics). If we use the real entropy maximum (MERW), the discrepancy disappears – e.g. its stationary probability distribution is exactly as in the quantum ground state.
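A sketch of that discrepancy, assuming the standard defect-lattice example from the MERW literature (my parameter choices): on a 1D lattice where some sites lack a self-loop (defects), the generic random walk spreads almost uniformly (conductor-like), while the MERW stationary density ψ² localizes in the largest defect-free region, as in the quantum ground state.

[code]
import numpy as np

rng = np.random.default_rng(0)
n = 200
M = np.zeros((n, n))
for i in range(n):                       # periodic 1D lattice
    M[i, (i + 1) % n] = M[(i + 1) % n, i] = 1.0
defect = rng.random(n) < 0.3             # ~30% defected sites
M[np.arange(n), np.arange(n)] = (~defect).astype(float)  # self-loop only on clean sites

# Generic random walk: stationary density proportional to node degree.
deg = M.sum(axis=1)
rho_grw = deg / deg.sum()

# MERW: stationary density is the squared dominant eigenvector.
w, V = np.linalg.eigh(M)
psi = np.abs(V[:, -1])
rho_merw = psi**2

print(rho_grw.max() / rho_grw.mean())    # close to 1: spread out, conductor-like
print(rho_merw.max() / rho_merw.mean())  # much larger: localized in a defect-free region
[/code]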

In fact MERW turns out to just assume a uniform or Boltzmann distribution among possible paths – exactly like in Feynman's Euclidean path integrals (there are some differences), hence the agreement with quantum predictions is not a surprise (while MERW still remains just a (repaired) diffusion).

Including the Born rule: probabilities being (normalized) squares of amplitudes. An amplitude describes the probability distribution at the end of half-paths toward the past or the future in the Boltzmann ensemble among paths; to randomly get some value in a given moment we need to "draw it" from both time directions – hence the probability is the square of the amplitude:

[Figure: born.png]
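A numeric sketch of this two-directional picture (same toy graph as above, my own illustration): in a uniform ensemble of paths of length 2T, the probability of passing through node i at the central moment is the product of the numbers of half-paths arriving from the past and from the future, which for large T tends to the squared dominant eigenvector.

[code]
import numpy as np

M = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], float)          # toy graph (my example)

T = 30
ones = np.ones(len(M))
half = np.linalg.matrix_power(M, T) @ ones   # number of length-T half-paths ending at i

p_mid = half * half                          # past half-paths * future half-paths
p_mid /= p_mid.sum()

w, V = np.linalg.eigh(M)
psi = np.abs(V[:, -1])
print(p_mid)
print(psi**2 / (psi**2).sum())               # agrees: probability = amplitude squared
[/code]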

This leads to violation of Bell inequalities: at the top below there is a derivation of a simple Bell inequality (true for any probability distribution among the 8 possibilities for 3 binary variables A, B, C), and at the bottom an example of its violation assuming the Born rule:

[Figure: belll.png]
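Assuming the inequality derived in the figure is the standard one for three binary variables, Pr(A=B) + Pr(A=C) + Pr(B=C) >= 1 (that specific form is my reading of the setup), here is a brute-force check that it holds for any classical probability distribution: every one of the 8 assignments makes at least one pair agree.

[code]
from itertools import product

# For each of the 8 assignments of binary A, B, C at least one pair is equal,
# so Pr(A=B) + Pr(A=C) + Pr(B=C) >= 1 for *any* distribution over them.
for A, B, C in product((0, 1), repeat=3):
    agree = (A == B) + (A == C) + (B == C)
    print((A, B, C), agree)   # 'agree' is always >= 1 (in fact 1 or 3)
[/code]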


It's interesting work; I will have to take time to examine your arXiv paper in greater detail before adding any comments. However, thank you for sharing it on our forum, and welcome aboard – or rather welcome back after 7 years. It's nice to see a continuation of a thread years later showing a produced result. I happened upon your thesis paper on extended MERW.

Going to take a bit of time studying your paper above.

I will probably have questions involving SU(n) and SO(n) symmetry aspects. However, I first need to familiarize myself with the algorithms involved.

