Everything posted by Duda Jarek

Thanks for the paper. It's surprising to me that decay is faster at lower temperature ... Generally 2.7K looks much too small to have any essential influence on such processes ... I don't like this explanation, but the remaining way to cross such an energy barrier is some kind of tunneling ... About 'the temperature of the vacuum' (2.7K): the existence of different interactions (weak, strong, gravitational), which should be carried by some fundamental excitations (modes), and the requirement of a quite large cosmological constant (energy of the vacuum), strongly suggest that there is more than the observed EM modes, the microwave radiation. The speed at which a black body loses temperature through radiation suggests that this happens practically only via photons (EM modes), so interactions with these 'other modes' should be extremely weak. The idea of this topic was the only way I could think of to observe these 'other modes' directly ... but they are probably too weak ... The standard 293K temperature known from chemistry is stored in kinetic and EM energy, and these interact extremely weakly with the 'other modes', so the rate of thermalisation between them should be extremely slow. They could probably thermalise over billions of years, but on the time scales we use, these temperatures (the chemical ~293K one and that of the 'other modes', ~2.7K) can be different.

I'm not talking about the fine-structure constant - it's a combination of some fundamental physical constants. I'm also not talking about absorption like neutron capture - in those cases the energy barrier is crossed thanks to the energy of the captured particle. I'm talking about decay: there is a stable state, and after some statistical time it spontaneously crosses the energy barrier which made it stable and drops to a lower stable state ... where does the energy required to cross the barrier come from? For me it's clearly a thermodynamical process ... this energy has to come from some thermal noise ...

Particle decay is clearly some statistical process. Generally speaking, particles are stable solutions of some physics (like a field theory): local/global energy minima for given constraints like spin or charge. So from the energetic point of view, particle decay should be getting out of some local energy minimum by crossing an energy barrier and finally reaching some lower energy minimum, just like in thermodynamics (?). The energy required to cross such an energy barrier usually comes from thermal noise; in the case of particle decay, some temperature of the vacuum would be required ... Generally the universe is built not only of particles, but can also carry different interactions: EM, weak, strong, gravitational. This possibility itself gives the vacuum a huge number of degrees of freedom - some fundamental excitations, which do not necessarily have nonzero mass (like photons) ... and if there is some interaction between them, thermodynamics says that they should thermalize - their energies should equilibrate. We can measure the thermal noise of the EM part of these degrees of freedom, the 2.725K microwave background, but the degrees of freedom corresponding to the rest of the interactions (weak, strong, gravitational) have had billions of years to thermalize, so they should have a similar temperature. The EM part gives only about 6*10^-5 of the vacuum energy density required to obtain the expected cosmological constant; maybe the rest of the interactions carry the rest of it ... Anyway, we believe that this microwave background is cooling, so 'the temperature of the universe' should be as well. Shouldn't it then become more difficult for particles to cross the energy barrier to get to a lower energy minimum? That would increase decay times ... We have experimental evidence that physical constants like e and G are unchanged with time, but is it so with decay times? Maybe radiometrically dated things are a bit younger than expected... A similar situation holds, for example, for excited electrons ...
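If decay really were a thermally activated barrier crossing, the natural first guess would be an Arrhenius-type law, rate ~ exp(-E/kT), as in chemical kinetics. A minimal numeric sketch of this analogy - the barrier height and attempt rate below are arbitrary illustrative numbers, not physical values:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_rate(barrier_ev, temperature_k, attempt_rate=1.0):
    """Toy Arrhenius rate: attempt_rate * exp(-E / kT).
    Purely an illustration of thermally activated barrier crossing."""
    return attempt_rate * math.exp(-barrier_ev / (K_B * temperature_k))

# A hypothetical 1 meV barrier: cooling the 'vacuum' from 2.725 K
# slows the crossing rate, i.e. lengthens the decay time.
rate_now = arrhenius_rate(1e-3, 2.725)
rate_cooler = arrhenius_rate(1e-3, 2.5)
print(rate_now > rate_cooler)  # cooler vacuum -> slower decay in this model
```

In this toy picture a cooling universe would monotonically stretch decay times, which is exactly the qualitative effect the paragraph speculates about.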

The important thing is that their spin and charge parts are connected but can behave separately - this strongly suggests that the fundamental blocks building our physics are the carriers of indivisible properties like charge or spin. Sometimes they create pairs to reduce energy, and finally particles are stable when they are in the state of the lowest possible energy, like the neutrino or the electron. A strong argument that this spin part is just a neutrino is muon decay (muon -> electron + electron antineutrino + muon neutrino): isn't that just an exchange of the spin part to get to the lowest energy, and so to the stable state?

From http://www.lbl.gov/ScienceArticles/Archive/ALSspinonsholons.html - and there is a nice graph with two distinct peaks ...

From the abstract of the paper: "The spinon and holon branches are found to have energy scales of approx 0.43 and 1.3 eV". Spinons and holons undergo "separation into collective modes" ... but to behave as separate modes, don't they have to separate themselves? Imagine a string in a harmonic mode ... now it separates into two modes/strings ... doesn't it mean that its atoms also separate? Ok, these amplitudes can be extremely small, so they stay 'in one particle' ... but behave separately. A neutrino is 'a pure (electron) spin' ... and so is the spin part of the electron ... Energetically they prefer to stay together (modifying their structure a bit), but 'pure spin' has an unchangeable quantum number (spin) and extremely small energy: it has nothing to decay into, so it should be stable (the neutrino). 'Pure charge' (the holon) interacts much more strongly and has larger energy, so it should quickly 'catch' a neutrino (spontaneously created in a pair) and should have a very short half-life. And we have the Majorana hypothesis: there are only two types of electron neutrinos ... adding the charge, we have four possibilities, as in Dirac's equations ...

Recent experiments (http://www.nature.com/nphys/journal/v2/n6/abs/nphys316.html) confirmed theoretical results that electrons are not indivisible, as was long believed: under some conditions it is more energetically preferable for them to separate into their charge part (called a holon or chargon) and their spin part (the spinon). I think it's a good time to discuss these results and their consequences. Thinking about 'pure spin' (the spinon) made me associate it with the low-energy electron neutrino, especially since I imagine particle properties which can only occur in integer multiplicities, like charge or spin, as topological singularities - such a separation greatly connects the whole picture. Another argument is, for example, muon (tau) decay: it looks as if a neutrino-antineutrino (electron) pair has been spontaneously created and the spin part of the muon (tau) was exchanged with the one of smaller energy, and so more stable. The other question is about the statistics of these (quasi?)particles. For me statistics is the result of spin, so spinons should clearly be fermions ... What do you think about it?

Data correction methods resistant to pessimistic cases
Duda Jarek replied to Duda Jarek's topic in Computer Science
The simulator of the correction process has just been published on Wolfram's page: http://demonstrations.wolfram.com/CorrectionTrees/ It shows that we finally have a near-Shannon-limit method working in nearly linear time for any noise level. For a given probability of bit damage (p_b), we choose the parameter p_d. The higher this parameter is, the more redundancy we add and the easier it is to correct errors. We want to find the proper correction (the red path in the simulator). The main correction mechanism is that while we are expanding the proper correction everything is fine, but in each step of expanding a wrong correction we have probability p_d of realizing it. With p_d large enough, the number of corrections we should check no longer grows exponentially. At each step the tree structure is known, and using it we choose the most probable leaf to expand. I've realized that for practical correction methods (not requiring exponential correction time), we need a bit more redundancy than the theoretical (Shannon) limit. Redundancy allows us to reduce the number of corrections to consider. In practical correction methods we have to elongate corrections, so we have to assume that the expected number of corrections up to a given moment is finite, which requires more redundancy than Shannon's limit (observe that block codes fulfill this assumption). This limit is calculated in the latest version of the paper (0902.0271). The basic correction algorithm (as in the simulator) works at a slightly worse limit (it needs the encoded file to be larger by at most 13%), but it can probably be improved. Finally, this new family of random trees has two phase transitions: for small p_d < p_d^0 the tree immediately expands exponentially; for p_d^0 < p_d < p_d^2 the tree generally has small width, but rare high error concentrations make its expected width infinite (like a long tail in a probability distribution); for p_d > p_d^2 it has finite expected width.

Error correction methods used today work practically only for very low noise (p_b < 0.01). The presented approach works well for any noise (p_b < 0.5). For small noise it needs an encoded file size practically at Shannon's limit. The difference starts for large noise: it needs a file size at most twice larger than the limit. A practical method for large noise gives a new way to increase the capacity of transmission lines and storage devices - for example, place two bits where we would normally place one; the cost is a large noise increase, but we can handle it now. For extremely large noise, we can no longer use ANS. Fig. 3 of the paper shows how to handle it. For example, if we have to increase the size of the file 100 times, we can encode each bit in 100 bits - encode 1 as 11...111 XOR 'hash value of the already encoded message', and the same with 0. Now, while creating the tree, each split will have a different number of corrected bits and so a different weight.
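The "choose the most probable leaf to expand" step can be sketched as a best-first search over candidate corrections. This is only a schematic toy, not the actual algorithm from the paper or the Wolfram demonstration; the `check` and `log_prob_step` callbacks are hypothetical stand-ins for the redundancy test and the bit-damage model:

```python
import heapq

def best_first_correction(received, check, log_prob_step):
    """Schematic best-first search over candidate bit corrections.

    received      -- list of (possibly damaged) bits, each 0 or 1.
    check(prefix) -- returns False when redundancy reveals the prefix
                     is a wrong correction (the p_d mechanism).
    log_prob_step(bit, pos) -- log-probability that position pos
                     really carried `bit`, given the received data.
    """
    # Max-heap on accumulated log-probability (negated for heapq's min-heap).
    heap = [(0.0, [])]
    while heap:
        neg_lp, prefix = heapq.heappop(heap)
        if len(prefix) == len(received):
            return prefix  # most probable full correction found
        pos = len(prefix)
        # Try keeping the received bit, then flipping it.
        for bit in (received[pos], 1 - received[pos]):
            cand = prefix + [bit]
            if check(cand):  # prune branches the redundancy rejects
                heapq.heappush(heap, (neg_lp - log_prob_step(bit, pos), cand))
    return None  # every branch was pruned

# Toy usage: no redundancy pruning, p_b = 0.1, so the most probable
# correction is simply the received sequence itself.
import math
received = [1, 0, 1, 1]
lp = lambda bit, pos: math.log(0.9) if bit == received[pos] else math.log(0.1)
print(best_first_correction(received, lambda c: True, lp))
```

With a real `check` that fails wrong prefixes with probability p_d per step, the heap stays narrow exactly as the post describes: wrong branches are caught and stop being expanded.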
Is LIGO just observing viscosity of the vacuum?
Duda Jarek posted a topic in Astronomy and Cosmology
In experiments like LIGO we want to observe extremely weak gravitational waves from sources millions of light years away; we assume that their strength decreases only polynomially with the distance R. But over such distances, even the slightest interactions with the vacuum and with other objects on the way could diffuse/absorb them, and then the amplitude would decrease exponentially, making such observations completely hopeless. In November 2005 LIGO reached its assumed sensitivity: "At its conclusion, S5 had achieved an effective range of more than 15 Mpc for the four-kilometer interferometers, and seven Mpc for the two-kilometer interferometer." http://www.ligo.caltech.edu/~ll_news/s5_news/s5article.htm But over these 3.5 years its only success has been a non-detection: "During the intense blast of gamma rays, known as GRB070201, the 4km and 2km gravitational-wave interferometers at the Hanford facility were in science mode and collecting data. They did not, however, measure any gravitational waves in the aftermath of the burst. That nondetection was itself significant." http://mr.caltech.edu/media/Press_Releases/PR13084.html What is the vacuum? It definitely isn't just 'empty space': it is, for example, a medium for many different waves and particles. Nowadays many people believe that it can, for example, spontaneously create particle-antiparticle pairs... Modern cosmological models say that a cosmological constant is required: an additional energy density of ... this vacuum ... Anyway, even being only a medium for many kinds of interactions (there is at least some field there), it has many internal degrees of freedom (like the microwave radiation). We usually believe that these can interact with each other, so there should be thermalization - all of them should contain a similar amount of energy. In physics there are usually no perfect mediums; there are always at least some very, very small interactions...
We observe more or less uniform 2.725K microwave radiation; it is believed to have been created at about 3000K and then to have had its wavelength stretched by redshift in the expanding universe. But assume that the field of which the vacuum is built is not perfectly transparent - for example, that such photons interact on average once per million years. That would already be enough for the thermalisation process. So if the field of the vacuum is not perfectly transparent (there is interaction between the different interactions), its internal degrees of freedom should have temperature 2.725K. We observe only the electromagnetic degrees of freedom (according to Wikipedia: about 6*10^-5 of the total density of the universe), but we know well that there are more types of interactions (weak, strong, gravitational ...). And their energies probably sum up to the cosmological constant... Returning to the question from the topic: general relativity says that the vacuum is a kind of fluid for gravitational waves. It is already a field - it has some internal structure ... and there is QED, QCD, etc. I just don't believe we can assume that it's a perfect medium. For fluids this kind of friction - converting macroscopic energy into internal degrees of freedom - is called viscosity (try to make waves on honey). If there is some extremely small viscosity of the vacuum (which has nonzero energy density/temperature), multiplying it by millions of light years could essentially reduce the strength of gravitational waves reaching Earth... They are already believed to be extremely weak... Do you think this is why LIGO's only success is a non-detection? If not, why is that?
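The quantitative point here - that any nonzero absorption compounds exponentially with distance and eventually dominates the geometric falloff - can be illustrated with a toy model. The 1/R geometric law for gravitational-wave amplitude is standard; the 'mean free path' value below is a purely made-up illustration of the post's hypothesis:

```python
import math

def gw_amplitude(distance_mpc, mean_free_path_mpc=None):
    """Toy amplitude model: geometric 1/R falloff, optionally damped
    exponentially by a hypothetical 'vacuum viscosity' with the given
    (completely made-up) mean free path."""
    geometric = 1.0 / distance_mpc
    if mean_free_path_mpc is None:
        return geometric
    return geometric * math.exp(-distance_mpc / mean_free_path_mpc)

# With a hypothetical 100 Mpc damping length, a source at 15 Mpc
# (LIGO's S5 range) loses only a modest extra fraction, but a source
# at 1000 Mpc is suppressed by an extra factor of exp(-10).
near = gw_amplitude(15.0, 100.0) / gw_amplitude(15.0)
far = gw_amplitude(1000.0, 100.0) / gw_amplitude(1000.0)
print(near, far)
```

The asymmetry is the whole argument: exponential damping is negligible for nearby sources yet catastrophic for distant ones, which is why even a tiny hypothetical viscosity would matter over cosmological distances.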
Expansions of Schrodinger's cat thought experiment
Duda Jarek replied to Duda Jarek's topic in Quantum Theory
Yes, I claim so. But instant communication seems to be much less against our intuition than retrocausality ... which is clearly seen in Wheeler's experiment ... CPT conservation suggests that causality should be able to go in both causality cones ... If we accept it, instant communication is a piece of cake: send information back and then forward, or the other way round... This doesn't mean that physics is 'nonlocal', as Bell's inequalities' enthusiasts claim - if we think about physics as in any field theory (QED, the standard model, general relativity), that is four-dimensionally, it's absolutely local. If someone is interested, there are two more interpretations of QM in which we try to understand QM four-dimensionally, as in CPT-conserving field theories: the 'transactional interpretation' and the 'theory of elementary waves'. I believe a new large discussion about it starts here: http://groups.google.com/group/sci.physics.electromag/browse_thread/thread/749d5a06be67485f/eac28a1f73a81aab?lnk=raot#eac28a1f73a81aab
Expansions of Schrodinger's cat thought experiment
Duda Jarek replied to Duda Jarek's topic in Quantum Theory
Why are you so sure that it's 'violating reality' and its 'self-consistency'? I will cite myself, from the link I've already attached, in which I try to show that it doesn't have to lead to inconsistencies like time paradoxes: "But let's think about the possibility of a channel through which we could send information back in time, but on which we cannot fully rely - sometimes it's easier for physics to make it lie. Observe that quantum mechanics gives physics large freedom in choosing effects below our recognition to stabilize such causal time-loops and avoid time paradoxes: like choosing which spin will be measured, making some small influences on statistically looking behaviors ... or eventually making this back-in-time channel lie. Assume that causal time-loops are allowed, but they have to be stable. So if there were such a 'device' to transmit information back in time, and I, for example, sent myself a picture of the girl who will be my wife ... and just before sending it back in time I changed my mind - this timeline would never have appeared - there would be a completely different situation (without this knowledge). So if I really got this picture from myself, I would have no choice but to really send it. Like in a (good...) SF movie: they go back in time to repair something ... and finally it turns out that the situation is exactly as it was ... How to use such a 'device'? Imagine we take some real random number generator - for example, measuring the spin of a photon 45 degrees from its polarization.

Now the procedure is: (1) make a choice according to this generator, (2) if there is a message from the future that it was a wrong choice, take a different one, (3) wait for the results of this choice, (4) finally, if it was a wrong choice, send this information back in time to (2). So if there was a satisfying choice, it has created a stable time loop - in fact the very possibility of using this device made our random number generator (quantum mechanics) choose properly before the loop. It can be imagined that each choice starts a different timeline - in our language they are all entangled, and physics will choose one of them in what we call the future, such that, for example, there are no time paradoxes (like the discrete set of wavelengths allowed on a string). What if there wasn't any satisfying choice? Would it create a time paradox? Not necessarily - most probably physics would destroy the weakest link of such a loop: make this 'device' have lied. Observe that even without this real random number generator, such a 'device' could work without actually being used: if, for example, there were going to be a successful terrorist attack, information would be sent to prevent it ... and finally, in a stable timeline, this attack would never happen (because of, e.g., proper quantum choices below our recognition)." What is wrong with such a point of view, especially since the channel we are talking about (with the cat and the telescope) works probabilistically?
Expansions of Schrodinger's cat thought experiment
Duda Jarek replied to Duda Jarek's topic in Quantum Theory
But in this article there is only the same argument: that it would violate causality... Ok, there is a reference to a paper in which it is 'proven' that this is impossible. If it can be proven in general, shouldn't showing the problem with the given experiment be a piece of cake? So please tell me: why exactly won't this specific experiment work? Unfortunately, 'proofs' in physics are extremely subtle things. To prove something we have to have complete information about the situation - assumptions. In physics we still don't have it, so we use some strange assumptions and approximations and proudly say that something is proven... No it isn't! For example, in phenomenological thermodynamics we 'prove' the 2nd law of thermodynamics. But this 'proof' uses continuous functions of density, pressure, etc. ... silently ignoring the microscopic structure... All serious theories we use conserve CPT symmetry, so let us assume that there is a mathematical proof that in a given model there is a statistical property which always increases with time (entropy). Now use the same proof after CPT symmetry: the same entropy has to work, and we get a contradiction. Returning to the experiment - in this discussion on Wikipedia it is written: But what does it mean when photons go in opposite directions? For example, they get to us and to the colony 'in the same moment'... What does it even mean for two points of spacetime to be 'not connected causally'? Special relativity says that we could change the frame of reference - make a boost: in one frame we get the photons earlier; in a second, the colony does. So from one frame of reference the experiment would work, but from the second it wouldn't? Ok, a simpler question, still: what do you think about Wheeler's delayed choice experiment?
Superluminal communication through entanglement
Duda Jarek replied to bascule's topic in Quantum Theory
Hi, do you mean an experiment like the one at: http://www.newscientist.com/article/mg19125710.900whatsdoneisdoneorisit.html ? If yes, the problem is that it will always give wave-like behavior. If the photon is measured on the longer path, this measurement means that it has chosen this path and won't be seen on the shorter path. Some additional split of the photon is needed: look at the 6th post in http://www.scienceforums.net/forum/showthread.php?t=39787
Expansions of Schrodinger's cat thought experiment
Duda Jarek replied to Duda Jarek's topic in Quantum Theory
So please show me what is wrong with it? So what is causality? Do You believe in CPT conservation? How You understand Wheeler's experiment: http://en.wikipedia.org/wiki/Wheeler%27s_delayed_choice_experiment that by choosing telescope, we choose behavior of photon a million years ago...? 
Expansions of Schrodinger's cat thought experiment
Duda Jarek replied to Duda Jarek's topic in Quantum Theory
Exactly! By choosing the observation tool we can choose whether the probability of that is large or small - don't we have instant communication (over a light year) here? And we get even more interesting effects if the optical length to the cat is much smaller than to us... ... especially if we are near the cat ... (photons could, e.g., be reflected back). If you are afraid that it's not physical, please look here: http://www.scienceforums.net/forum/showthread.php?t=39296
Expansions of Schrodinger's cat thought experiment
Duda Jarek replied to Duda Jarek's topic in Quantum Theory
Look at: http://en.wikipedia.org/wiki/File:Kim_EtAl_Quantum_Eraser.svg Now place the upper detector at a minimum of the interference and connect it to a poison, and so to the cat. On the lower part, instead of these mirrors, just choose whether or not to be able to distinguish the paths - thereby choosing the probability of the cat's death...
Expansions of Schrodinger's cat thought experiment
Duda Jarek replied to Duda Jarek's topic in Quantum Theory
You've started with the quantum eraser, but I see that this analogy went too far. But ... let us take an experimental setup like the one in the middle of: http://en.wikipedia.org/wiki/Delayed_choice_quantum_eraser A photon goes through both slits. Now, using 'spontaneous parametric down conversion' (a photon is converted into a pair of entangled photons, each with half the energy), we get 'two entangled pairs of photons'. Now, as in the picture, one pair is sent to a detector in the colony. But this detector is placed at some (the first) minimum of the eventual interference ... and causes the death of the poor cat if it catches a photon. The second pair is sent to us, but we don't allow beam splitters to choose whether we can determine the slit or not; instead, as in Wheeler's experiment, we just measure these photons with a selected tool: if we use a telescope which can distinguish both slits, the photons observed in the colony should behave corpuscularly - there is no interference, and the probability of killing the cat is large; if we use a telescope with resolution smaller than required to distinguish the slits, we should get wave-like behavior, and the probability of killing the cat would be smaller. What is wrong with this picture?
Expansions of Schrodinger's cat thought experiment
Duda Jarek replied to Duda Jarek's topic in Quantum Theory
Perfect isolation means that the inside and outside of this box are causally separated - there are two separate environments (e.g., by a light year). The cat/the interior of the box is already an environment; thanks to the detector + poison we amplify the quantum measurement. But this measurement doesn't affect the outside of the box (or affects it a year later) - from this point of view we have an entanglement of the live and dead cat. Ok, imagine that in the middle we created two entangled photons, sent them in opposite directions, and in the colony measured the spin and killed the cat if and only if the spin was up. Wouldn't the photon we received be entangled with the life/death of the cat? I wanted this picture to show that entangled means connected causally in the past. And that when speaking about entanglement we have to specify the point of view. Yes, but in quantum erasure we usually have a coincidence counter. To make this 'coincidence measurement', we have to gather information from at least two sources; in the presented experiment this takes at least half a year...
This experiment requires a 'perfectly separated box': no interactions with its outside. It can be realized by spatial separation, using the fact that interactions can travel at most at the speed of light. So imagine we've found a planet, let's say a light year from us, that we've analyzed and can simulate as perfectly as possible. We can send some 'reports' between us and this planet at the speed of light; they are received a year after sending. Let there be a cat whose life depends on a spontaneous emission - we will get the report a year later, so before that we will be outside the causal cone of this episode; from our perspective this cat is in an entangled state of life and death... A more sophisticated example: imagine we've settled a colony there, such that some of their essential decisions are made using 'a real random generator', like measuring the spin of a photon 45 degrees from its polarization ('blind justice'). Each such choice (like choosing a leader) starts a different timeline; before we get the report, from our perspective this colony is in an entanglement of many timelines. So from our perspective, receiving the report is the measurement act - it 'chooses' one of the timelines for the colony. If this report was 'good enough', we could precisely track their real timeline; it's like getting corpuscular behavior in the double-slit experiment in which we determine the slit - we get classical behavior. But if there was some information which couldn't be determined - as in the double-slit experiment without determining the slit - shouldn't we observe a kind of interference between the timelines? Understanding this thought experiment should give a good understanding of quantum entanglement. There is a large discussion about QM interpretation here: http://www.advancedphysics.org/forum/showthread.php?p=52905 What do you think about it?

What if we make CPT transformation of free electron laser?
Duda Jarek replied to Duda Jarek's topic in Physics
I've just accidentally found out that a very similar picture is given by the transactional interpretation of QM by John G. Cramer. You can find very good links here: http://en.wikipedia.org/wiki/Transactional_interpretation It's taught at the University of Washington. If someone is interested in this discussion, it has expanded here: http://www.advancedphysics.org/forum/showthread.php?p=52593
You are overestimating our possibilities. The universe just optimizes some four-dimensional action of some field, and we are only a product of its internal reason-result relations ... somewhere in this spacetime ... it already knows all the stupid things we will try to do, and has prepared for them so that we cannot do anything against its internal integrity ... So maybe we could use its protectiveness - make it choose for us so that we will be satisfied with the results, because if not, we will try to make a time paradox! This post is in fact about accepting CPT conservation ... against our 'evolving 3D' intuition... There is a nice picture to upgrade this intuition here: http://arxiv.org/html/physics/9812021v2 The author suggests an interesting view on the hypothetical picture that our universe will finally collapse and then there will be a new big bang - evolution - collapse ... and so on ... Namely, look at such a collapse from the perspective of CPT conservation: from a singularity of relatively small entropy it slowly evolves ... After this symmetry it's no longer a collapse, but an 'anti big bang' which creates a universe built of antiparticles, evolving in the direction opposite to our time arrow... which will finally (in many, many billions of years) crash into our reality...

Quantum electrodynamics (relativistic quantum field theory) is believed to be a better approximation of physics than quantum mechanics - for example, because it allows extremely accurate predictions of the Lamb shift. http://en.wikipedia.org/wiki/Quantum_electrodynamics One of the formulations of field theories is via the Lagrangian density: physics finds the field which minimizes the integral of this density over four-dimensional space, the so-called action. Now, using the Euler-Lagrange equations, we can find a necessary condition for such minimization, which takes the form of a time evolution - we get the 'evolving 3D' picture. These equations are completely deterministic - we don't have a problem with, for example, probabilities or wavefunction collapses ... So why is its predecessor, which we commonly use - quantum mechanics - completely nondeterministic, so that we can usually talk only about probabilities? For me it clearly shows that QM is forgetting about something - a kind of 'sub-quantum noise' ... which determines quantum choices. We cannot fully measure QM ... so it's probably even worse with this 'sub-quantum information'. We can only talk about probabilities of quantum choices, but in fact they should be deterministic - they are stored somewhere there! What we can talk about is that events have been causally connected in the past - we call it entanglement: if two photons have been created together, we cannot know what spin they have, but we know that it's the same one. So from our point of view, the future will decide which of the entangled events will be chosen ... but in fact it's written in some sub-quantum information, though we cannot even think about measuring it. Bohm's interpretation uses a 'pilot wave' which goes into the future to choose how to elongate the trajectory. But maybe it would be better to use CPT conservation as in QED and try to interpret QM fully four-dimensionally.
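For concreteness, the variational statement referred to above can be written out: the action is the four-dimensional integral of the Lagrangian density, and requiring it to be stationary yields the Euler-Lagrange field equation, which is indeed a deterministic evolution equation for the field:

```latex
S[\varphi] = \int \mathcal{L}\big(\varphi, \partial_\mu \varphi\big)\, d^4x,
\qquad
\delta S = 0
\;\Longrightarrow\;
\frac{\partial \mathcal{L}}{\partial \varphi}
- \partial_\mu \frac{\partial \mathcal{L}}{\partial(\partial_\mu \varphi)} = 0 .
```

Nothing probabilistic appears anywhere in this condition, which is exactly the tension the paragraph points at.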
Now everything is clear:
- probability is proportional to the square of the amplitude, because it has to agree in both the past and future half-planes,
- knowing only the past, we can predict only probabilities,
- entanglement means that events are causally connected in the past; one of them will be chosen in the future (as in Wheeler's experiment).
This topic is expanded here: http://www.advancedphysics.org/forum/showthread.php?t=11844 What do you think about it?

If we want to understand physics, first of all we have to understand - create a coherent picture of - the nature of time. So in this thread I would like to discuss the consequences of the hypothetical assumption that there is a possibility to influence the past: does it have to lead to contradictions (time paradoxes)? Observe that CPT conservation suggests that causality can go in both time directions... In general relativity we want a more or less smooth 4D manifold, minimizing tensions from both time directions... http://www.advancedphysics.org/forum/showthread.php?p=52348 First of all, let's assume some interpretation of quantum mechanics ... physics. Some people believe that physics isn't deterministic (because of wavefunction collapses), some in many-worlds interpretations (that such collapses split worlds - create alternative realities). In the many-worlds interpretation, time travel doesn't seem to be forbidden - such a travel would just create a new alternative timeline, so there is no problem with time paradoxes. For now I'll assume the point of view that looks most probable (and physical) to me: full determinism - there is some 'wavefunction of the universe' which evolves in a unique way. It's like in general relativity: there is some already created four-dimensional spacetime and we 'travel' through its time direction, so the future is already there (eternalism). I'm afraid that full determinism/eternalism doesn't allow for perfect time traveling - such as would happen if our spacetime were bent so much that it created stable 'wormholes' with endings in each other's light cones. Such a stable time-loop would allow paradoxes physics couldn't handle, like a machine which sends a beam to itself iff it doesn't do it.
It's one of the reasons I don't like Einstein's view: http://www.thescienceforum.com/viewtopic.php?t=15841 But let's think about the possibility of a channel through which we could send information back in time, but on which we cannot fully rely - sometimes it's easier for physics to make it lie. Observe that quantum mechanics gives physics large freedom in choosing effects below our recognition to stabilize such causal time-loops and avoid time paradoxes: like choosing which spin will be measured, making some small influences on statistically looking behaviors ... or eventually making this back-in-time channel lie. Assume that causal time-loops are allowed, but they have to be stable. So if there were such a 'device' to transmit information back in time, and I, for example, sent myself a picture of the girl who will be my wife ... and just before sending it back in time I changed my mind - this timeline would never have appeared - there would be a completely different situation (without this knowledge). So if I really got this picture from myself, I would have no choice but to really send it. Like in a (good...) SF movie: they go back in time to repair something ... and finally it turns out that the situation is exactly as it was ... How to use such a 'device'? Imagine we take some real random number generator - for example, measuring the spin of a photon 45 degrees from its polarization. Now the procedure is: (1) make a choice according to this generator, (2) if there is a message from the future that it was a wrong choice, take a different one, (3) wait for the results of this choice, (4) finally, if it was a wrong choice, send this information back in time to (2). So if there was a satisfying choice, it has created a stable time loop - in fact the very possibility of using this device made our random number generator (quantum mechanics) choose properly before the loop.
It can be imagined that each choice starts a different timeline; in our language they are all entangled, and physics will choose one of them in what we call the future, such that there are, for example, no time paradoxes (like the discrete set of wavelengths allowed on a string). What if there wasn't any satisfying choice? Would it create a time paradox? Not necessarily: most probably physics would destroy the weakest link of such a loop, making this 'device' lie. Observe that even without the real random number generator, such a 'device' could work without actually being used: if, for example, there was going to be a successful terrorist attack, information would be sent back to prevent it ... and finally, in the stable timeline, this attack would never happen (because of e.g. proper quantum choices below our recognition). It's a great exercise to imagine the world with such devices. If it allowed sending a message only a very small time back, we could use two such devices cyclically to increase this time as much as we want. It would have great use in science: choose a random amino acid sequence/reaction parameters/... cryptokey ... and use the above procedure to ask if it is somehow optimal. One could say we could get technologies from the future ... but I don't think it would be a good choice for us, because we are not prepared for most of them ... like artificial intelligence: for it to arrive, 'we' in the future would have to still want to send it, knowing the consequences. I don't like a picture in which only the government/rich would have access to it ... but if it could be cheap, it would quickly spread. Imagine our world with such 'devices' commonly available. If someone made a wrong choice, he could send this information back to prevent it, so finally we wouldn't consciously make choices whose results wouldn't satisfy us! ... like choosing a school/job/... politicians ... there would be no random accidents ... There would also be no gambling ...
So what about the economy? If some stocks had to drop ... they would drop to zero ... So the economy would have to change completely, so that finally everything is worth as much as it is really worth ... there would be no risk management ... finally it should be extremely stable. The same with persons: no one would any longer rely on illusory values; we would have to work on our real values instead ... not depending on luck or fraud ... We could concentrate on studying/working/having fun/... building our world without any worry ... Is such a world without bad choices/risks perfect? Could we in this way really send some information back in time? That would mean it makes a stable timeline: that knowing this information, after a few years we would really want to send it ... revealing our own future/destiny, taking away our own free will ... But people are not perfect ... one would from the beginning be told that he has to do exactly this and this in life ... but I believe that if it's properly used, such timelines shouldn't be stable; they would never happen ... for example, even telling someone his real future would be considered a crime against his free will ... in a stable timeline it would never happen (it would be prevented). ... I want to believe that in a mature society these would be marginal cases ... and the main use would be just the existence of such a possibility: creating a timeline without 'bad' choices ... So in this picture we cannot send information back to change the past. The mere possibility of sending information is already enough: to prevent time paradoxes, physics would have to prevent us from making 'bad' choices, because if we made such a choice, we would try to send the information back, creating a paradox. So using a proper choice of what we call uncertainty, physics should make us make a 'good' choice. What do you think about all of it?

In a free electron laser (FEL), a magnetic field is used to make electrons move on a sine-like curve; thanks to synchrotron radiation and the bosonic nature of photons, a coherent 'light' beam is created. http://en.wikipedia.org/wiki/Free_electron_laser All fundamental equations of physics we use conserve CPT symmetry, so imagine the above picture from the point of view of this symmetry: we get a positron moving on this sine-like curve in the reverse direction, producing a light beam ... photons traveling back in time! To understand it, imagine a 'standard' free positron laser: it produces photons which go forward in time and space and finally hit something, for example exciting its electrons. If we look at it after a CPT transformation: excited (anti)matter produces a photon (returning to the ground state), and this photon goes to the free electron laser and is absorbed. To summarize, we see that the usual free electron laser should make not only stimulated emission, but also stimulated absorption (be a 'lasar'). In a FEL such an effect looks clear, but it should also be observed in cheaper lasers. Usual photons will most probably be absorbed at some point of time, but photons 'traveling back in time' cannot hit ground-state atoms (conservation of energy): they have to hit already excited matter, so we would have to prepare to allow for such emission (an energy drain from the future). So to observe such an effect, we would have to prepare the target by exciting it correspondingly (and continuously): normally it should spontaneously emit photons in all directions, but if it were hit by such photons 'traveling back in time', they should stimulate production of these photons in this direction. We could cover the whole spherical angle except this direction with detectors, and if this laser/'lasar' is turned on, these detectors should observe decreased emission ... earlier than the laser is turned on (by the length of the optical path).
I've previously thought about using larger particles according to the Feynman-Stueckelberg interpretation, but using photons is much simpler and cheaper: http://groups.google.com/group/sci.physics.research/browse_thread/thread/9d10b4e5cbda1108 This picture is against our intuition that we live in a 'dynamic 3D' world: an evolving 3D world with only past->future causality relations ... Imagine throwing a rock into water: it creates waves running outward from the middle ... but in fact these waves are resultants of statistics of microscopic movements which go in both directions! The Big Bang can be imagined as such a rock, which created 'waves of reality' which generally go in one time arrow, observed in statistics as the 2nd law of thermodynamics ... but it doesn't forbid future->past causality relations ... The 2nd law seems to contradict CPT conservation: assume we could prove that for an assumed model of physics there is a mathematical property (entropy) which statistically always increases; assumed models of physics conserve CPT, so taking this transformation we get a contradiction. So this law isn't a fundamental equation of our physics, but a property of the solution we are living in: an artifact of the Big Bang, which created a spacetime with relatively small entropy. Our intuition is based on classical physics, but we know well that the world is more complicated. I'll briefly show some (in fact well known) arguments that physics is four-dimensional, with causality relations in both time directions:
- general relativity naturally operates on 4D spacetime and says that the time flow is locally chosen by solutions. Einstein's point of view even allows bending spacetime so that we create a macroscopic time loop; that means the future is already there ...
- all fundamental equations of physics conserve CPT; that means that past and future are somehow similar ...
- we use equations of evolution of physics ... but didn't they come from the Euler-Lagrange equations, optimizing some four-dimensional action?
- other approaches to introducing physics usually use path integral formulations: they assume that particles 'see' all possible trajectories ...
- there is the so-called Wheeler's delayed choice experiment: http://en.wikipedia.org/wiki/Wheeler%27s_delayed_choice_experiment in which, after a photon passes two slits, we remove the barrier and place a telescope far away to observe through which hole the particle passed ... changing (later) its nature from wave-like into corpuscular ...
- Bell's inequalities say that we cannot construct a complete past->future causality theory with some hidden variables.
There is also a new argument from statistical physics: the Boltzmann distribution is a fundamental mathematical law of statistics, and QM should somehow correspond to it. If we used it to choose statistics in 3D, we would get something like p(x) ~ e^(-V(x)). If we used it to choose among paths going from the past to this moment, we would get p(x) ~ psi(x), where psi is the ground state of the Hamiltonian. If we took the Boltzmann statistics over full paths, we would get p(x) ~ psi^2(x), as expected. The square means that both the past and future 'half-paths' have to agree at this moment. http://www.thescienceforum.com/viewtopic.php?p=151272 For me these arguments clearly say that QM is just a result of the four-dimensional nature of our world. So as in Wheeler's experiment, particles in fact travel through one trajectory; entanglement means that they haven't yet chosen; this choice will be specified in what we call the future, and if we know only the past we can talk only about its probabilities. When we make a measurement, we ask about the current value; it has to agree with both the past and future half-paths, which is where the square comes from. What do you think about it?
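The p(x) ~ psi^2(x) claim above can be checked numerically in a minimal discrete setting. The sketch below (my own toy construction: a 1D chain, a quadratic potential, and a symmetric transfer matrix are all illustrative assumptions) weights nearest-neighbour paths by their Boltzmann factor and shows that the marginal at the middle of a long path is the product of the 'past' and 'future' half-path contributions, each converging to the dominant eigenvector psi:

```python
import numpy as np

# Toy check: Boltzmann-weighted paths on a 1D chain of N sites.
N = 40
V = 0.5 * (np.arange(N) - N / 2) ** 2 / 100.0   # assumed quadratic potential
A = np.eye(N)                                    # allowed moves: stay in place ...
for i in range(N - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0              # ... or step to a neighbour
# Symmetric transfer matrix: Boltzmann weight exp(-V) split between endpoints
T = A * np.exp(-0.5 * (V[:, None] + V[None, :]))

w, U = np.linalg.eigh(T)
psi = np.abs(U[:, -1])                           # dominant eigenvector ('ground state')

# Propagate the path weights from the distant past up to the middle moment
steps = 4000
left = np.ones(N)
for _ in range(steps):
    left = T @ left
    left /= left.sum()                           # normalize to avoid overflow
right = left.copy()                              # identical by symmetry of T

p_mid = left * right                             # past and future half-paths must agree
p_mid /= p_mid.sum()                             # p_mid is now ~ psi**2
```

After convergence `p_mid` matches `psi**2` (suitably normalized): the square appears exactly because two independent half-path sums meet at the measured moment.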

Quantum computing and the future of cryptography
Duda Jarek replied to Duda Jarek's topic in Computer Science
As I've argued: if there is a possibility of quickly solving NP problems without restrictions, such a cryptosystem could still be easily broken. If physics allows for that, the only safe ciphers will be based on a one-time pad. If not, there will be, for example, restrictions on time and on the number of entangled qubits; in that case classical ciphers (pre-initialized) can already be made safe.
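For reference, the one-time pad mentioned above is just XOR with a truly random key as long as the message; its security is information-theoretic, so it holds even against an adversary who can solve NP problems, as long as the key is random, secret, and never reused. A minimal sketch (the function name and sample message are my own):

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each byte with a key byte; same function encrypts and decrypts."""
    assert len(key) >= len(data), "key must be at least as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))   # fresh truly-random key, used exactly once
ct = otp_xor(msg, key)
assert otp_xor(ct, key) == msg        # decryption is the same XOR
```

Its weakness is purely practical: the key must be pre-shared over a secure channel and is consumed at the rate of the traffic, which is why the post contrasts it with 'pre-initialized' classical ciphers.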