Daria K — Posted August 20, 2017

Good evening! I recently found this article: https://arxiv.org/abs/1610.00022

It presents a no-go theorem that is claimed to rule out several forms of macrorealism; the theorem is described on page 9. According to the authors, it implies a contradiction between macrorealism and the predictions of quantum mechanics.

I had some trouble understanding the mathematical meaning of this theorem. I am not asking for an explanation of the steps of its formulation, but it would be great if someone could tell me whether this theorem is sufficient to falsify macrorealism, or whether it merely provides conditions under which macroscopic objects cannot be treated as classical objects.

My question arose because I thought we could rule out macrorealism only empirically, for example by violating Leggett-Garg inequalities, in parallel with the way violations of Bell's inequalities rule out local hidden variables. But the authors of this paper seem to say that macrorealism is falsified by their theoretical argument alone, which is why I was confused. I am sure I missed something in the theorem and am therefore drawing the wrong conclusions.

So, rephrasing my initial question metaphorically: "Does this theorem really prove that, for example, the moon or a table has no macroscopically definite state while no one is looking?"

I should mention that my question is not about decoherence or the measurement problem; it is about the implications of this concrete theorem. Thank you in advance!
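For reference, the empirical route mentioned above can be sketched numerically. The simplest Leggett-Garg inequality for dichotomic measurements at three times is K = C21 + C32 - C31 ≤ 1, while for a precessing two-level system quantum mechanics predicts two-time correlators C(tj, ti) = cos(ω(tj - ti)), giving K up to 1.5. A minimal sketch (the function name and sampling grid are my own choices, not from the paper):

```python
import numpy as np

# Simplest Leggett-Garg inequality for measurements at times t1 < t2 < t3:
#   K = C21 + C32 - C31 <= 1   (macrorealism + noninvasive measurability)
# For a spin-1/2 precessing at angular frequency omega, measured at equal
# intervals tau, quantum mechanics gives C(t_j, t_i) = cos(omega*(t_j - t_i)).

def lg_parameter(omega_tau):
    """Quantum prediction for K with equal time spacing omega*tau."""
    return 2 * np.cos(omega_tau) - np.cos(2 * omega_tau)

angles = np.linspace(0, np.pi, 1000)
k = lg_parameter(angles)
print("max K =", k.max())            # peaks at omega*tau = pi/3
print("violates K <= 1:", k.max() > 1)
```

The maximum of 1.5 occurs at ω·τ = π/3, where 2·cos(60°) − cos(120°) = 1 + 0.5 = 1.5 > 1, which is how the empirical falsification route works.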

mathematic — Posted August 21, 2017

"Does this theorem really prove that, for example, the moon or the table has no macroscopically definable state while one is not looking?" I won't try to read the paper, but if that assertion is correct, the theorem sounds like nonsense.

Rob McEachern — Posted December 26, 2017

The problem with this theorem, Bell's theorem, and all similar theorems stems directly from the assumptions stated at the top of page 4 of the paper you cited:

"the “macro-realism” intended by Leggett and Garg, as well as many subsequent authors, can be made precise in a reasonable way with the definition: “A macroscopically observable property with two or more distinguishable values available to it will at all times determinately possess one or other of those values.” [8] Throughout this paper, “distinguishable” will be taken to mean “in principle perfectly distinguishable by a single measurement in the noiseless case”."

The first problem is that some "macroscopically observable properties" may only ever have one value available to them, zero, at all times in "the noiseless case". That violates the "two or more" clause of the above definition.

The second problem is that all these theorems assume there is such a thing as a physical state that exists in "the noiseless case". That need not be true: noise may be intrinsic to the physical state of the object being measured. In other words, so-called identical particles may be identical only down to a certain number of identical bits of information, and no more. Strange interpretations, like Bell's, will result whenever that number of bits happens to equal one, because the theorems have, in effect, all assumed that it must always be greater than one. This is ultimately what Heisenberg's uncertainty principle is all about.

To visualize these problems, look at the first polarized coin image, labelled "a", in this figure: http://www.scienceforums.net/topic/105862-any-anomalies-in-bells-inequality-data/?tab=comments#comment-1002938

If the one-time-pad is rotated by either 90 or -90 degrees, in an attempt to measure a horizontal polarization component, then the measured polarization will be identically zero, not +1 or -1. In other words, this object has a vertical polarization state but not a horizontal one. If noise is present, then any nonzero horizontal polarization measurement will be caused entirely by the noise: measurements of the supposed horizontal components of supposedly perfectly identical particles will yield differing, random results, whose statistics will appear "weird" to anyone attempting to interpret them as coming from a system that actually obeys the assumptions in the above citation.

Note also that when other polarization angles between 0 and 90 degrees are measured, the results are simply caused by the "cross-talk" response of the detector to the vertical component (and the noise); there is never any response to a supposed horizontal component, because that component is identically zero in "the noiseless case" and consequently never produces any response. Theorems that assume otherwise fail to describe reality correctly, and that has led to a lot of "weird" interpretations of quantum experimental results.
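The cross-talk picture described in this post can be sketched numerically. This is only an illustration of the post's argument, not a model from the cited paper; the detector response cos θ, the Gaussian noise, and the sign-of-response outcome rule are all assumptions of the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(theta_deg, noise_sigma, n=100_000):
    """Detector at angle theta measuring a purely vertical state.

    Response = cos(theta) (cross-talk from the vertical component)
    plus zero-mean Gaussian noise intrinsic to each 'identical' object.
    The recorded outcome is the sign of the response (+1 or -1).
    """
    theta = np.radians(theta_deg)
    response = np.cos(theta) + noise_sigma * rng.normal(size=n)
    return np.sign(response)

# At 0 degrees the vertical component dominates the outcomes; at 90 degrees
# there is no horizontal component at all, so any recorded +1/-1 outcomes
# are pure noise and their mean hovers around zero.
for angle in (0, 45, 90):
    outcomes = measure(angle, noise_sigma=0.3)
    print(f"theta={angle:3d}  mean outcome = {outcomes.mean():+.3f}")
```

At θ = 90° the mean outcome is approximately zero, yet individual trials still return ±1, which is the "random results caused entirely by the noise" situation the post describes.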
