Is quantum behavior really non-deterministic?


bascule

I've heard both physicists and non-physicists claim that quantum behavior is non-deterministic.

 

As far as I can tell, there are two basic ways this can be argued:

 

We have no deterministic explanation for certain quantum properties, therefore they're non-deterministic.

 

This is an argument from incredulity, and therefore fallacious.

 

That leaves us with: Certain quantum properties appear statistically random. Therefore we conclude they're non-deterministic.

 

However, this rests on a fundamental assumption which is wrong: that only a non-deterministic process can produce statistically random data.

 

There is a mathematical counterexample to this: the Rule 30 cellular automaton. Rule 30 is a one-dimensional, discrete-time process (one dimension of space plus one of time, so its evolution is usually drawn as a 2D diagram) in which the values of a cell and its two neighbors determine that cell's new state on the next iteration. The transition table looks like this:

 

neighborhood:  111  110  101  100  011  010  001  000
new cell:        0    0    0    1    1    1    1    0
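
To make this concrete, here is a minimal C sketch of Rule 30 (assuming a periodic, wrap-around boundary, which is one common convention; the cell count and step count are arbitrary choices). It evolves the automaton from a single "on" cell and prints the center column, the sequence usually taken as the pseudo-random bit stream:

```c
/* Minimal Rule 30 sketch. Assumes a wrap-around (periodic) boundary;
   N and STEPS are arbitrary illustrative values. */
#include <stdio.h>

#define N     101   /* number of cells   */
#define STEPS  64   /* iterations to run */

int main(void) {
    unsigned char cells[N] = {0}, next[N];
    cells[N / 2] = 1;                      /* start from a single "on" cell */
    for (int t = 0; t < STEPS; t++) {
        printf("%d", cells[N / 2]);        /* center column: the "random" bits */
        for (int i = 0; i < N; i++) {
            unsigned char l = cells[(i + N - 1) % N];
            unsigned char c = cells[i];
            unsigned char r = cells[(i + 1) % N];
            next[i] = l ^ (c | r);         /* Rule 30: left XOR (center OR right) */
        }
        for (int i = 0; i < N; i++)
            cells[i] = next[i];
    }
    printf("\n");
    return 0;
}
```

The expression l ^ (c | r) reproduces the transition table above exactly.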

 

Rule 30 generates statistically random output. It's random to the point that it has been proposed as the basis of cryptographic stream ciphers.

 

If a deterministic process can generate statistically random output, then one cannot conclude, merely because a set of data is statistically random, that it was generated by a non-deterministic process.

 

So, as far as I'm concerned, non-determinism is nothing more than an unsupported assumption, and determinism is equally likely.


I think you'll find most physicists working in the area will claim that it's not yet known whether it is deterministic or not. It is still under discussion.

 

Statistically random is not truly random: the C rand() function is statistically random, yet if you run the same code over and over again on the same computer, it gives you the same statistically random numbers.
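
A quick sketch of that point in C (the seed value is arbitrary): run this program twice and both runs print identical numbers.

```c
/* Same seed, same output: rand() is a deterministic algorithm. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    for (int run = 1; run <= 2; run++) {
        srand(42);                  /* fixed seed each time */
        printf("run %d:", run);
        for (int i = 0; i < 5; i++)
            printf(" %d", rand());  /* identical sequence on every run */
        printf("\n");
    }
    return 0;
}
```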

 

Wouldn't this be best placed in Speculations?


I think you'll find most physicists working in the area will claim that it's not yet known whether it is deterministic or not. It is still under discussion.

 

Well, some physicists (here) are under the impression that present evidence supports non-determinism.

 

Statistically random is not truly random: the C rand() function is statistically random, yet if you run the same code over and over again on the same computer, it gives you the same statistically random numbers.

 

The C rand() function doesn't generate statistically random numbers. PRNGs generate as close to an even distribution as possible, but there are many tradeoffs involved, specifically in computation time and memory use. Knuth discusses this exhaustively in chapter 3 of volume 2 of The Art of Computer Programming. There are many cases where a faster algorithm with a less even distribution is fine, such as randomized data structures like skip lists. Knuth goes through dozens of algorithms and their tradeoffs. The ones behind the C rand() function generally optimize for speed rather than a more even distribution. (Much to the chagrin of people who complain about how the shuffle feature of their music player always plays the same songs.)
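
For a sense of what such a speed-optimized generator looks like, here is a sketch of the classic linear congruential design, using the example constants from the C standard's sample rand() (real library implementations vary):

```c
/* Linear congruential generator: one multiply and one add per call,
   which is why it is fast and why its distribution is comparatively
   weak. Constants are the C standard's example values, not any
   particular libc's actual source. */
static unsigned long lcg_state = 1;                  /* the seed */

int lcg_rand(void) {
    lcg_state = lcg_state * 1103515245UL + 12345UL;  /* one mul, one add */
    return (int)((lcg_state / 65536) % 32768);       /* drop the weakest low bits */
}
```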

 

The real evidence that rand() doesn't generate a statistically even distribution is that it isn't cryptographically strong. If the numerical distributions it produced were as statistically random as those of Rule 30, then you could use rand() as the basis of a stream cipher: seed it with a shared private key, then run the rand() output through some obfuscation algorithm on the plaintext (XOR is a simple but bad example).
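
A toy sketch of that construction in C, deliberately insecure for exactly the reason given above (the seed 12345 stands in for the shared private key):

```c
/* Toy stream cipher: XOR the data with a keystream from a seeded PRNG.
   Because rand() is statistically weak, this is breakable; it only
   illustrates the construction. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void xor_stream(unsigned char *buf, size_t len, unsigned int key) {
    srand(key);                                   /* shared secret seed */
    for (size_t i = 0; i < len; i++)
        buf[i] ^= (unsigned char)(rand() & 0xFF); /* next keystream byte */
}

int main(void) {
    unsigned char msg[] = "attack at dawn";
    size_t len = strlen((char *)msg);
    xor_stream(msg, len, 12345);   /* encrypt */
    xor_stream(msg, len, 12345);   /* XOR with the same stream again: decrypt */
    printf("%s\n", msg);           /* prints: attack at dawn */
    return 0;
}
```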

 

Rule 30, when employed as the basis of a stream cipher, leaves no statistical signature on the output data:

 

http://www.cs.indiana.edu/~dgerman/2005midwestNKSconference/dgelbm.pdf

 

I'd like you to find me any other PRNG which can be employed for strong cryptography.

 

Wouldn't this be best placed in Speculations?

 

The point of this thread isn't to speculate. It's to point out that there's no actual evidence that quantum behavior is non-deterministic. It has been demonstrated mathematically that a deterministic algorithm can generate statistically random numbers.


I think bascule sort of has a point; here are some comments...

 

The way I personally see it, information is the core concept in a modern QM interpretation. And physics can be thought of as "information mechanics".

 

The way I see the indeterministic nature of QM, the premise of the "what if" game ("if we knew everything today, then we would know everything tomorrow") can never be fulfilled in reality.

 

I think the essence of the indeterminacy of the world is that there is a fundamental uncertainty in the knowledge/information one subset of the universe possesses about another subset.

 

So whether there is a rule that for fundamental reasons we can never understand, or there isn't a rule at all, I think the two cases are indistinguishable from each other, and thus the response is the same.

 

Bascule, in a certain sense I think you have a point. However, I don't see it as a problem. The indeterminism is IMO about acknowledging our own incomplete information. It gives a more consistent theory, because incompleteness is a property of reality, and I think it affects our apparent physical reality more than may seem intuitively clear.

 

That leaves us with: Certain quantum properties appear statistically random. Therefore we conclude they're non-deterministic.

 

In a real situation:

 

If a variable appears random to someone/something, it means this someone/something has no information whatsoever about the variable. What is the best guess this someone/something can make about this variable as a basis for its responses? Clearly it has absolutely no clue. So treating the variable as a "random variable" until there is sufficient new data to suggest otherwise seems like the best choice.

 

IMO, the real task here is to base a response on the information we have, given our incompleteness.

 

So in a certain sense I think you might have a point with your statement:

 

So, as far as I'm concerned, non-determinism is nothing more than an unsupported assumption, and determinism is equally likely.

 

However, I raise this point which I think is the KEY here:

 

Equally likely relative to what and in what setting? And what is the difference between an undetectable/unobservable determinism and real indeterminism? I think there is no difference. It's the same thing.

 

However, the possibility remains that what we treated as noise before has now acquired structure. But that concerns evolving and learning, which IMO is a central property of a good theory. In this respect, I think the standard formalism of QM is insufficient and inconsistent and needs improvement.

 

To summarize my personal view:

 

In my view of reality, indeterminism is real; however, this indeterminism is not static. Note that I always reference a real observer here, i.e. indeterminism is evaluated relative to a particular observer. (Evaluations made by some imagined God, or in retrospect, are not valid references.)

 

/Fredrik


Additional comments to my previous ones.

 

So, as far as I'm concerned, non-determinism is nothing more than an unsupported assumption, and determinism is equally likely.

 

Inspired by the philosophy that science deals with actual observations rather than, in some sense, "in principle possible" observations, I think that the effect X implies on Y (the real experienced effect relative to Y) cannot depend on details or variables that are unobservable to Y, because that would be sort of non-scientific IMO. I.e., to assume that there is something "unobservable" that might in fact resolve our uncertainty does not seem like a scientific statement, except insofar as it can be interpreted as "there is a possibility that we can learn new things" and "we may find that what we considered random is in fact not", which is a statement I definitely agree with, but I consider that a different thing.

 

I guess random is a weird concept anyway. I use it to mean lack of prior information. And whether there is a rule, unknown to us, that renders it deterministic does not help or make a difference, does it?

 

Let me know if you feel I missed your point. Since I guess you are a computing guy, perhaps you had another intention?

 

/Fredrik


The difference between classical mechanics and QM is that in classical mechanics uncertainty wasn't taken seriously. The fact that observations are fundamentally correlated on an abstract information level was not acknowledged. Instead, uncertainty was just considered experimental error that could "in principle" be resolved, so that once we built the ultimate device we could predict everything from the initial conditions.

 

Even classically it is easy to see that such a scenario is simply unreal and impossible.

 

In QM philosophy, the status of information is acknowledged to be fundamental, in fact more fundamental than physical reality. When the mechanics of this logic is worked out, there is an intrinsic uncertainty that has to do with information, the relations between possible states of information, and the references. There are still a lot of conservation laws, because a lot of the information mechanics is just a zero-sum game, but in reality there is always an incomplete reference, which cannot possibly contain all the information. Incompleteness is a key in this philosophy. Wrongly assuming completeness often leads to inconsistencies.

 

This is why I think that at the fundamental level all we have is an information mechanics. In this view I think what bascule writes can fit too; it's just that I'm not sure what the philosophy is.

 

/Fredrik


But still, I think that in the standard formalism of QM and QFT the deepest nature of this "information mechanics" is not yet understood. There is a lot of abstract thinking going on, and I think the future of this field should be a cooperation between physicists and some of the AI research community.

 

I think bascule's associations with data encryption and encoding are dead on. In that thinking I think we are more or less on the same page.

 

/Fredrik


Some, yes, I can understand; but you implied ALL.

 

And it depends on how you define statistically random, but it was a comment made to demonstrate that statistical randomness is not true randomness.


Rule 30 generates statistically random output. It's random to the point that it has been proposed as the basis of cryptographic stream ciphers.

 

This is producing a pseudo-random number: it appears random because you are restricting information. If the observer could know everything about the system (including the state of the previous cells), then you could predict the "random" number and it would appear deterministic.

 

As such, this is a hidden variable theory and is ruled out by Bell's Inequality (unless you make it non-local).


The point of this thread isn't to speculate. It's to point out that there's no actual evidence that quantum behavior is non-deterministic.

 

IMO, if on the contrary there is no evidence for deterministic behaviour, the principle of insufficient reason suggests that we have no reason to *complicate things* any further than to treat it as random or pseudo-random, because they are indistinguishable anyway.

 

I.e., our best and simplest guess, in the spirit of the mentioned philosophical principle, is simply a uniform distribution, given no info that suggests otherwise. But at the same time, given that we have the memory capacity, we continuously keep evaluating the randomness of our ignorance. As soon as significant doubt is found, things take a new turn.

 

Another reason for this is that any real observer has limited memory capacity and must not fill memory with what "seems to be" random data; often there are better uses for limited memory. However, suppose the observer is put into plain chaos: there simply isn't much else to feed on but the apparently random numbers, and chances are that eventually he will decode all the data, given his memory and processing constraints.

 

That's how I see it. Sorry for the many posts!

 

/Fredrik


Doesn't a system become seemingly nondeterministic when the set of possible combinations becomes so large compared with the measurable observations we might make, that bookkeeping is not reasonable? My brother Steve works at NOAA numerically crunching models for mesoscale weather prediction of tornadoes in the central US. We debate the butterfly changing the weather issue.


This is producing a pseudo-random number: it appears random because you are restricting information. If the observer could know everything about the system (including the state of the previous cells), then you could predict the "random" number and it would appear deterministic.

 

And that really is the point: the system is deterministic but the state at each timestep is dependent upon the totality of the data at each previous timestep.

 

As such, this is a hidden variable theory and is ruled out by Bell's Inequality (unless you make it non-local).

 

Since the totality of data in the system is needed to compute each successive timestep, it's inherently non-local.

 

Rule 30 provides an example of a "non-local hidden variable theory" (if that term is really apt) for explaining the statistical randomness of a 2D discrete time/space universe.


However, I raise this point which I think is the KEY here:

Equally likely relative to what and in what setting?

 

Equally likely in that they form a dichotomy and there's no actual evidence of either.

 

And what is the difference between an undetectable/unobservable determinism and real indeterminism? I think there is no difference. It's the same thing.

 

That's one way of looking at it.


A bit of philosophy, but...

 

Equally likely in that they form a dichotomy and there's no actual evidence of either.

 

Here is where I'd prefer to apply a philosophical principle and choose the simplest (minimal-representation) construct consistent with the data, inspired by the idea that if you assume representations appear randomly, a more complex representation is less likely to start with.

 

If we have no prior reason to assume any particular deterministic rule:

 

a) From our effective point of view, it's "random". (That's not to imply it's going to stay random forever, as we will respond promptly to deviations.)

 

b) We consider the probability distribution over "all possible" deterministic rules. But since they are all equally likely, we really have not gained any information, while we have enormously increased the complexity of the representation by introducing a lot of unknown variables.

 

I think expanding the representation is only motivated by experimental evidence, i.e. as soon as we have significant deviations from a random distribution.

 

/Fredrik


If you're going to apply Occam's Razor, why assume an unparsimonious non-deterministic mechanism operates in addition to the deterministic ones which govern all other known physical behavior?

 

I personally consider indeterminism unparsimonious, but the prevailing attitude among those who have their head in the actual math all day seems to favor the existence of actual non-deterministic systems.


Inspired by the philosophy that science deals with actual observations rather than, in some sense, "in principle possible" observations, I think that the effect X implies on Y (the real experienced effect relative to Y) cannot depend on details or variables that are unobservable to Y, because that would be sort of non-scientific IMO. I.e., to assume that there is something "unobservable" that might in fact resolve our uncertainty does not seem like a scientific statement, except insofar as it can be interpreted as "there is a possibility that we can learn new things" and "we may find that what we considered random is in fact not", which is a statement I definitely agree with, but I consider that a different thing.

/Fredrik

For quite some time we just lived with acceptance of vacuum permittivity and the propagation of E&M fields. This did not prevent us from getting a lot of physics accomplished. Should we admit we do not know everything?

Rule 30 provides an example of a "non-local hidden variable theory" (if that term is really apt) for explaining the statistical randomness of a 2D discrete time/space universe.

 

I think you are misunderstanding what is meant by "non-local". It doesn't mean that there is information flow from other points. It means that there is information flowing faster than the speed of light, so there needs to be communication between events which are causally disconnected. I don't see that (a priori) in your setup (although I suppose you could make it so just by insisting that the distance between cells is very large).


I think you are misunderstanding what is meant by "non-local". It doesn't mean that there is information flow from other points. It means that there is information flowing faster than the speed of light, so there needs to be communication between events which are causally disconnected. I don't see that (a priori) in your setup (although I suppose you could make it so just by insisting that the distance between cells is very large).

 

Point taken. In a CA, "c" is one cell per iteration. In Rule 30, both 001 and 100 will move at c. In other CAs there are propagating patterns as well; the "glider" from Life is an example (though it moves at c/4).


Hello, I was away for a few days. Just got back.

 

My point is that I think ignorance comes first and I acknowledge my ignorance. Learning is step 2.

 

And the so-called deterministic rules we know are something we have learned by consuming and processing data, or, in terms of physics, by a lot of interactions and experiments. But I think in general the deterministic rules are not static and everlasting. There may be a time when they evolve as well. So I think the determinism is often effective, a "macroscopic determinism" indistinguishable from microscopic chaos. But my point is that whichever it is doesn't matter. The state we face is that we don't know certain things.

 

The "explanation" I am looking for is exactly how does knowledge and structure grow out of ignorance and chaos.

 

If you're going to apply Occam's Razor, why assume an unparsimonious non-deterministic mechanism operates in addition to the deterministic ones which govern all other known physical behavior?

 

I don't label it "a non-deterministic mechanism in operation". That makes no sense, as you imply, which is what I meant to say. Instead I just say "I don't know". And in reality it is more the rule than the exception that you need to make a decision based on incomplete information, yet you still need to argue for your chosen path. How to do that? You argue in terms of the information at hand and find your weighted "best bet". Given a symmetric situation, you can perhaps deduce that you end up with a class of options that, to your information, are equally sensible, and you thus just pick one at random; if you want to explore and break the ignorance, then after some random walking you will have acquired more data that allows your prior to be promptly updated. And along the way you have minimalistic and motivated update rules.

 

If I simply don't have a clue, I have no reason whatsoever to argue that one path is preferred over another. If experience later shows that certain paths are in fact preferred, then my ignorance is broken and I update my prior.

 

Future learning always considers my total ignorance constrained by my prior. So my prior information does imply a pattern, or geometry, on my ignorance.

 

So if you choose to say that I have a non-deterministic rule, that sounds odd, but I simply acknowledge that "I don't know" and leave it at that. I think that's about as simple and honest as it gets. The minimalist philosophy, in a sense.

 

I personally consider indeterminism unparsimonious, but the prevailing attitude among those who have their head in the actual math all day seems to favor the existence of actual non-deterministic systems.

 

My preferred partitioning is into the information I have and the information I don't have. I have no reason at this point to speculate on the nature of what I don't know.

 

A point is that the assumption that I "could have" complete information and still be consistent is not proven. I think the evidence is rather against it: it is more often false. But I think we are constantly learning. A big, complex subject like the human brain is clearly "learning". However, a much simpler subject, like say an electron, is simply updating or revising. There seems to be a limit to what an isolated electron as such can possibly ever "learn"; its learning curve quickly flattens and turns into a state of updates and revisions. I think there is a uniform system behind this that can be nicely worked out from first principles, and it could probably also explain a lot of things in depth from first information principles. One advantage is that this route will (yet to be formally proven, of course) have, from scratch, a natural integration of general relativity and quantum mechanics. I've got a clear gut feeling about this, and I think it's the way to go. It can hopefully properly resolve several of the fundamental but historically ignored issues in physics, and it can do so without introducing ad hoc stuff that needs further explanation. Better yet, I think fundamental physics can be interpreted as learning mechanics. Analogous to frames of reference in GR and SR, the beefed-up QM will IMO need to acknowledge the interacting parties' different prior information. This also has the potential to completely unite the system-vs-observer issue. The observer will be treated just like a particle, because there is no reason whatsoever that the observer is distinguished. Details remain unclear, but somehow the information capability of the observer must be defined here.

 

/Fredrik


For quite some time we just lived with acceptance of vacuum permittivity and the propagation of E&M fields. This did not prevent us from getting a lot of physics accomplished. Should we admit we do not know everything?

 

I'm not sure I understand your comment here?

 

What I am suggesting is that realising we simply cannot "know everything", so to speak, will make it easier to understand what is really going on here. This is by no means in contradiction with the fact that we are learning. But there is likely a limit to what any given subject (= particle, observer, subsystem, organism) can learn, at which point we more or less hit some "residual uncertainty" we cannot resolve. The residual uncertainty for the elementary particles is clearly massive compared to the human perspective. The view of reality as perceived by, say, an electron is clearly quite different from ours. The human perspective is clearly different too (all perspectives are different anyway), but at a more abstract level I think it is not distinguished in the fundamental sense. I think the fundamental treatment must transcend these things and work in a more abstract way, or at least try to.

 

The problem with thinking that all the answers are out there, and that they would be accessible to us if we only found them, is that it is likely to lead us into inconsistent models. We are looking for something that probably does not exist.

 

I think of the ordinary QM of particle physics as a kind of information mechanics of the residual uncertainty domain. It is basically random mechanics under some particular set of constraints and priors. Of course, the more complex the systems we study, the more significant the learning. I think this is also the thinking needed when gravity comes into play. The basic rules of learning and the updating that happen in the residual uncertainty domain are of course exactly the same; it's just that in the latter domain the changes are distributed as per a random distribution forever, and nothing new is learned. This is clearly the simplest of possible cases, and I think we need to think differently here.

 

/Fredrik


If you're going to apply Occam's Razor, why assume an unparsimonious non-deterministic mechanism operates in addition to the deterministic ones which govern all other known physical behavior?

 

An alternative or complementary, shorter comment on this is that the reason is so we can understand where the deterministic mechanisms come from in terms of a kind of learning logic. So in a sense all the deterministic mechanisms are removed altogether and replaced by a generic learning model. The deterministic rules are output of the learning models, but they are not fundamental per se. Instead I consider learning fundamental; the specific output depends on input, which in this case is data, experiment, or experience.

 

Of course this is a massive task, and I don't have all the answers yet. But even given that, I sense that the logic and philosophy of this are promising. Which is just my personal opinion, of course.

 

Instead of trying to learn/discover/build something specific, I changed the focus to the learning/exploration/creative process itself, and I think it will be found that a lot of fundamentals in physics can in fact be traced back to similar abstractions.

 

There are already today some fairly deep similarities between relativity, QFT, and stochastic processes, and that's hardly a coincidence. But the missing part is the adaptive step, the learning step, itself as a stochastic evolution.

 

/Fredrik


Quantum behavior could not be deterministic, by the following reasoning.

 

First: there are quantum correlations. I.e., QM requires that some sort of magic coins exist, whose main property is that they always fall out equal.

 

Second: there is a limit on the speed of information spreading. This implies that one coin cannot inform another how to fall.

 

Third: there are Bell's inequalities, which state that the coins also cannot pre-decide how to fall.

 

So the only option compatible with all three points is that the outcome is a non-local, truly random event. I.e., all possible channels of information flow are closed off by experimentally verified theories except one: the birth of information in both coins.
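
A small numerical sketch of the third point, assuming the standard quantum prediction E(a,b) = -cos(a - b) for the correlation between measurements at analyzer angles a and b on the pair of "coins":

```c
/* CHSH sketch: "pre-decided" (local hidden variable) coins require
   |S| <= 2, while the quantum prediction reaches 2*sqrt(2). */
#include <stdio.h>
#include <math.h>

static double E(double a, double b) { return -cos(a - b); } /* singlet correlation */

int main(void) {
    const double pi = acos(-1.0);
    double a = 0.0,    ap = pi / 2;      /* one side's two settings   */
    double b = pi / 4, bp = 3 * pi / 4;  /* other side's two settings */
    double S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp);
    printf("|S| = %.4f (local pre-decision demands |S| <= 2)\n", fabs(S));
    return 0;
}
```

Local "pre-decided" strategies cannot push |S| above 2; experiments confirm the quantum value of about 2.83, which is what closes off that option.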


I don't see why this thread has been demoted to Speculations. A determinate interpretation could be regarded as less speculative, and more sensible, than the predominant Copenhagen indeterminate interpretation of quantum physics.

 

So you could insist that anything that can't be visualised, like the superposition of quantum states in the Copenhagen interpretation, does not exist, whereas Bohmian mechanics is a detailed, mathematically systematic, determinate hidden-variables interpretation that has been visualised in computer-generated diagrams.

 

http://www.math.rutgers.edu/~oldstein/quote.html

http://www.math.rutgers.edu/~oldstein/papers/qts/node4.html

http://plato.stanford.edu/entries/qm-bohm/

 

One trouble is that David Bohm did not help the scientific status of his own account of quantum physics, being more interested in philosophising about it than in justifying it scientifically.

