
Controlled delayed quantum erasure?


Duda Jarek

Recommended Posts

In quantum eraser experiments, obtaining information about one of two entangled photons decides whether the second photon behaves classically or quantum-mechanically (interferes). The optical path lengths of the two photons set the time order of these events, so we can delay the "decision" until after what it decides about. But in the "standard version" of such delayed-choice quantum erasure, this decision is made randomly by physics.

I've just found a much stronger version, in which we can control this decision affecting earlier events.

Here is a decade-old Phys. Rev. A paper about its successful realization, and here is a simple explanation:

[Image: PHY5656.gif]

We produce two entangled photons: the first spin up and the second spin down, or the opposite.

Photon s passes through a double slit on which two different quarter-wave plates are installed, changing its polarization to circular in two different ways.

Finally there are two possibilities:

p   s (initial)   s after slit 1   s after slit 2
u   d             R                L
d   u             L                R

where the columns are: linear polarization of p, initial linear polarization of s, circular polarization of s after going through slit 1, and circular polarization of s after going through slit 2.

So if we know only the final circular polarization of s, we still don't know which slit was chosen, so we should get interference. But if we additionally know whether p is up or down, we would know which slit was chosen, and the interference pattern would disappear.
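This which-path logic can be checked mechanically. A minimal sketch in Python, using the table's labels (u/d for linear, R/L for circular); the dictionary below is just a transcription of the two table rows, not anyone's API:

```python
# Transcription of the two entangled cases from the table above:
# (p, initial s) -> circular polarization of s behind each slit.
cases = {
    ('u', 'd'): {'slit1': 'R', 'slit2': 'L'},
    ('d', 'u'): {'slit1': 'L', 'slit2': 'R'},
}

def compatible_slits(observed_circ, known_p=None):
    """Which slits are consistent with the observed circular polarization,
    optionally given knowledge of p's linear polarization?"""
    slits = set()
    for (p, _s0), outcome in cases.items():
        if known_p is not None and p != known_p:
            continue
        slits.update(slit for slit, c in outcome.items() if c == observed_circ)
    return slits

print(compatible_slits('R'))               # both slits possible -> interference
print(compatible_slits('R', known_p='u'))  # a single slit -> no interference
```

With only the circular polarization, both slits remain compatible; adding p's polarization singles one out, which is exactly the distinguishability argument above.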

So let us add a polarizer on the path of p: depending on its rotation, we can or cannot get the required information. By rotating it we choose between classical and interfering behavior of s ... but depending on the optical path lengths, this choice can be made later ...

 

Why can't we send information back in time this way?

For example, place the s detector in the first interference minimum: while the brightness of the laser is constant, rotating the p polarizer should affect the average number of counts at the s detector.
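As a toy illustration of this expectation (an idealized model with made-up normalization, not the actual experiment's geometry): with interference, the coherent sum vanishes at the first minimum, while with which-path information the two slits add incoherently and counts reappear there:

```python
import math

def intensity(phase, which_path_known):
    """Toy double-slit intensity at a point with the given phase difference."""
    if which_path_known:
        # incoherent sum of two single-slit contributions (flat pattern)
        return 0.5 + 0.5
    # coherent sum of two equal amplitudes
    return math.cos(phase / 2) ** 2

first_minimum = math.pi  # phase difference at the first interference minimum
print(intensity(first_minimum, which_path_known=False))  # essentially 0
print(intensity(first_minimum, which_path_known=True))   # 1.0
```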

What for? For example, to construct a computer with a time loop, using many such single-bit channels, immediately solving NP-hard problems like finding a satisfying cryptographic key (a key which, used to decrypt, doesn't produce noise):

[Image: tlc_resize.jpg]

Physics from QFT to GRT is Lagrangian mechanics: it finds the action-optimizing history of the field configuration, e.g. closing hypothetical causal time loops, like solving the problem we gave it.

OK, the problem is when there is no satisfying input: time paradoxes. Physics would then have to lie to break the weakest link of such a reason-result loop.

Could it lie? I think it could: there are plenty of thermodynamical degrees of freedom which seem random to us, but if we could create additional constraints like causal time loops, physics could use these degrees of freedom to break the weakest link of such a loop.

 

What is wrong with this picture?

Link to comment
Share on other sites

What is wrong with this picture?

Does your first link not satisfactorily answer it for the second version of the experiment?

 

From http://en.wikipedia.org/wiki/Delayed_choice_quantum_eraser:

 

"The total pattern of signal photons at the primary detector never shows interference, so it is not possible to deduce what will happen to the idler photons by observing the signal photons alone, which would open up the possibility of gaining information faster-than-light [...] The apparatus under discussion here could not communicate information in a retro-causal manner because it takes another signal, one which must arrive via a process that can go no faster than the speed of light, to sort the superimposed data in the signal photons into four streams that reflect the states of the idler photons at their four distinct detection screens.

In fact, a theorem proved by Philippe Eberhard shows that if the accepted equations of relativistic quantum field theory are correct, it should never be possible to experimentally violate causality using quantum effects"

 


The version from the Wikipedia article is much weaker: there the erasure is made randomly, while here we can control it. So let us understand: what's wrong here?

 

There is also a possible Mach-Zehnder version of this experiment, which seems clearer:

[Image: mzer_resize.jpg]

Here is a reported realization: http://singlephoton.wikidot.com/quantum-eraser

 

There is a question about the coincidence counter: is it required to get a dependence of the Ds1, Ds2 counts on the rotation of the polarizer?

We know about such dependence for coinciding photons, i.e. those whose entangled partner hit Dp.

For photon pairs which hit Ds1 or Ds2 but not Dp, one would say that we should get interference: the readings shouldn't depend on the angle of the polarizer(?)

So without the coincidence counter, while we cannot distinguish exactly which photons were erased, the total number should still be affected by the polarizer rotation. The dependence would probably be weaker, but shouldn't it still be visible on the statistical level?
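For what it's worth, the standard answer (the Wikipedia passage quoted earlier) can be illustrated numerically: the coincidence-gated subsets show complementary fringes and anti-fringes, and in the ungated total they cancel, leaving no dependence. A toy sketch under idealized unit visibility:

```python
import math

def fringe(phase):       # subset whose partner was erased one way
    return 0.5 * (1 + math.cos(phase))

def anti_fringe(phase):  # complementary subset, shifted by pi
    return 0.5 * (1 - math.cos(phase))

# The ungated Ds1/Ds2 counts sum both subsets: flat for every phase.
for phase in (0.0, 1.0, 2.0, math.pi):
    print(round(fringe(phase) + anti_fringe(phase), 12))  # always 1.0
```

Under these idealized assumptions the total count rate is independent of the polarizer angle; any observable dependence would require the gated subsets not to cancel exactly.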

 

Physics from QFT to GRT is Lagrangian mechanics: it finds the action-optimizing history of the field configuration.

I don't see why that contradicts using constraints with causal loops, as long as they are imperfect and leave physics a place to lie, breaking such a loop.

Hypothetical wormholes would allow constructing a perfect loop, like a cannon which shoots itself if and only if it doesn't; it could be so large that thermodynamics couldn't prevent it ... that's why I don't believe in wormholes.

But here everything, from the electronics to such a hypothetical back-in-time channel, is based on thermodynamics and statistics: degrees of freedom which seem random to us now, but physics could bend this randomness to optimize the action, e.g. if we tried to impose constraints with a causal loop.


So without the coincidence counter, while we cannot distinguish exactly which photons were erased, the total number should still be affected by the polarizer rotation. The dependence would probably be weaker, but shouldn't it still be visible on the statistical level?

 

Seen on a statistical level --- after the fact?

If you look at the statistics perhaps they suggest that information travelled faster than light, but if you have no way of extracting that information until afterwards, then it is impossible to use that information to affect events retrocausally.

 

To be honest I don't understand the variation. My weak understanding of the first variation comes from going through dumbed down details. "Statistical" usually means based on some connection between a lot of individual data, and a single datum may tell you nothing. If you want to control the outcome of a single photon event, information regarding that event is only available later, I think. Even if later it seems like somehow it was "known" by something ahead of time, there is no theoretical way to exploit that information at the photon's source.

Edited by md65536

From the perspective of Lagrangian mechanics like QFT or GRT, there is practically no difference between past and future: they are time/CPT symmetric; physics just finds the action-optimizing history of field configurations.

For example, if a photon will be measured by a given linear polarizer, its trajectory should be imagined like a string fixed at one of two possible angles in this polarizer. If you wanted to rotate this string, angular tension would make it more difficult, and it works in both directions.

The past-future asymmetry appears only on the thermodynamical/statistical level: it is not written in the fundamental equations, but is only a property of the solution we live in.

Here is simple thought experiment showing that thermodynamics is in fact also deeply symmetric: http://www.scienceforums.net/topic/62327-thermodynamical-thought-experiment-with-nonorientable-spacetime/

 

So asking physics to find a fixed point of something like this:

[Image: tlc_resize.jpg]

is an analogue of asking it to minimize the tension of strings (the action) ... but sometimes that means lying along the way (as with paradoxes), for example when we rely on statistics, as for such a hypothetical back-in-time channel.

Imposing an imperfect causal-loop constraint on Lagrangian mechanics seems to be against intuition, while "standard" quantum computers also do similar stuff (wider explanation).

It is said that their strength is reversible calculation, while we can also do it classically, like

(x, y, z) <-> (x, y, z xor f(x, y))

The problem with trying to reverse such calculations is the auxiliary bits: we don't know how to initialize them ...
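The classical reversible construction above is easy to demonstrate: whatever f we pick, even an irreversible one, applying the map twice returns the original triple, since xor-ing f(x, y) in twice cancels. A minimal sketch (the particular f here is an arbitrary example of mine):

```python
def f(x, y):
    # any function of (x, y) works, even an irreversible one; NAND as an example
    return 1 - (x & y)

def step(x, y, z):
    """The reversible map (x, y, z) -> (x, y, z xor f(x, y))."""
    return x, y, z ^ f(x, y)

# The map is its own inverse for every input triple.
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            assert step(*step(x, y, z)) == (x, y, z)
print("self-inverse on all 8 triples")
```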

Here is the real strength of quantum computers: the measurement is a kind of attaching of trajectories in the future, for example "selecting" arguments leading to the same value of the function in Shor's algorithm:

[Image: fqcomp.jpg]

 

Another hypothetical possibility of a back-in-time channel I cannot find a problem with is the CPT analogue of a laser: a "lasar" (stimulated absorption).

To see that it seems doable, imagine a free-electron laser: we force an electron to move along a sinusoid-like curve, emitting photons ... which are finally absorbed by some target.

Physics is CPT invariant, so let us imagine such a transformation of this picture: the excited target emits photons, which fly to the lasar and are finally absorbed by a positron going in the reverse direction.

So such a free-electron laser should also work as a lasar, but to make it work, it has to (anti-)hit a target which is already excited to the given specific wavelength, which doesn't occur often.

Imagine we constantly excite the target to the required energy (e.g. it is a sodium lamp) and it is surrounded by detectors in all directions but the lasar's. They usually receive the produced light, but if we turn the lasar on, more energy should go in that direction, and so we should see a disturbance in the energy balance of the lamp-detectors system ... earlier than the turning-on of the lasar, by the optical path length.

What's wrong with this picture, too?

Edited by Duda Jarek

  • 1 month later...

In last week's Nature Physics there is a nice experiment about controlling in the future whether photons in the past are entangled or not:

[Image: nphys2294-f2.jpg]

Victor later chooses (via a QRNG) whether the Alice-Bob photons are correlated (left) or not (right):

[Image: nphys2294-f3.jpg]

So obtaining |R>|R> means that Victor more probably chose entanglement, and |R>|L> that he chose separation. There is nonzero mutual information, so once again I don't see a reason it couldn't be used to send information?
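To make the "nonzero mutual information" claim concrete, here is how one would compute it from a joint distribution of Victor's choice and a coarse Alice-Bob outcome. The probabilities below are invented for illustration only and are NOT taken from the paper:

```python
import math

# Hypothetical joint distribution (illustrative numbers, not from the paper):
joint = {
    ('entangle', 'RR'): 0.35, ('entangle', 'RL'): 0.15,
    ('separate', 'RR'): 0.15, ('separate', 'RL'): 0.35,
}

def mutual_information(joint):
    """I(X;Y) in bits for a joint distribution given as {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

print(mutual_information(joint))  # positive for these illustrative numbers
```

A positive value only says the two variables are correlated in the joint record; it does not by itself establish a usable channel, which is the point of the Eberhard-type argument quoted earlier.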

Here is good informative article with link to the paper: http://arstechnica.c...-beforehand.ars

 

ps. If someone is anxious about the "conflict" between the fundamental time/CPT symmetry and our 2nd-law-based intuition, it should be educative to look at a very simple model, the Kac ring: on a ring there are black and white balls, which all shift one position each step. There are also some marked positions, and when a ball passes through one, it switches color.

Using the natural statistical assumption ("Stoßzahlansatz"), namely that if a proportion p of positions is marked, then a proportion p of both the black and the white balls changes color each step, we can easily prove that this should lead to equal numbers of black and white balls, maximizing the entropy ...

... on the other hand, after two complete rotations all balls have to return to their initial colors, so from the fully ordered "all balls white" state the system would return back to it: the entropy would first increase to its maximum and then symmetrically decrease back to its minimum.

Here is a good paper with simulations about it: http://www.maths.usy...ts/kac-ring.pdf
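A minimal simulation of the Kac ring confirms the recurrence (the convention of placing markers on edges is mine; the return after two full rotations holds regardless):

```python
import random

def kac_step(colors, marked):
    """All balls move one edge clockwise; crossing a marked edge flips color."""
    n = len(colors)
    new = [0] * n
    for i in range(n):
        c = colors[i]
        if i in marked:          # edge i lies between sites i and i+1
            c = -c
        new[(i + 1) % n] = c
    return new

random.seed(0)
n = 20
marked = set(random.sample(range(n), 7))  # an odd number of marked edges
initial = [1] * n                          # fully ordered: all balls white
state = list(initial)

for _ in range(n):
    state = kac_step(state, marked)
# After one full rotation every ball crossed every edge once: with an odd
# number of markings, every ball has flipped to the opposite color.
assert all(c == -1 for c in state)

for _ in range(n):
    state = kac_step(state, marked)
assert state == initial  # full recurrence after two complete rotations
print("recurrence confirmed")
```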

The lesson is that when, on top of time/CPT-symmetric fundamental physics, we "prove" e.g. the Boltzmann H-theorem that entropy always grows ... we could take the time-symmetry transformation of this system and use the same "proof" to get that entropy always grows in the backward direction: a contradiction.

The problem with such "proofs" is that they always contain some very subtle uniformity assumption, generally called a Stoßzahlansatz. If the underlying physics is time/CPT symmetric, we just cannot be sure that entropy will always grow, as for the Kac ring, and maybe also for our universe ...

Edited by hypervalent_iodine
external thread advertisement removed

I don't get why people have an issue with quantum erasers and entanglement. The probability distribution of a particle collapses to a single point; that's it. No paths were destroyed, because they weren't causally connected in the first place.

Edited by questionposter
