Everything posted by Killtech
-
The meaning of constancy of the speed of light
Good, though this is just the formal aspect of separating the mathematical treatment from the interpretation. That throws us back to the original question raised in this thread: where do we get the notion of what the space and time of an observer is? We usually assume it to be trivially clear, but in fact it is a tricky, circular problem that I want to discuss. When we observe some physical process, we get a notion of the passing of time from the rate at which the process changes. In order to measure it, we therefore have to define some physical reference which we can then use to establish time. The SI definitions of time and length do exactly that: via an emission of the Cs atom for one, and the propagation of light in vacuum for the other.

But note that in the general case, there is no mathematical reason why all physical processes have to adhere to the same concept of time. To illustrate this, assume an alternative universe where there are two electromagnetic forces that differ only in their propagation speed. Each will be invariant under its own Lorentz transformations built around its own constant c. All physical processes, like atomic emissions, built from one or the other of the two forces will fit into the corresponding invariance, but will disagree when observing processes of the other force. This is because there won't exist a totally Lorentz invariant physics. Each observer can therefore denote two concepts of time: one related to physical processes of the first force, one to those of the second. Technically there will also exist a preferred frame where both forces take the same shape, and this frame provides a third time an observer can use.

Applying this observation back to acoustic physics, we can construct physical processes which rely solely on acoustic interactions and see how the coordinates produced by the acoustic Lorentz trafo fare at describing such a process in different frames. Consider an infinite grid of tiny flying sound emitters which constantly emit a sine-like sound (a bit like mosquitos). As sound waves carry physical energy and momentum, this interaction of sound with the emitters will cause a constant repulsion which grows stronger the closer two emitters get to each other. This is meant to mimic the repulsion between atoms in a solid. In case the entire grid has a velocity relative to the medium (i.e. its center of mass, CoM, moves), the repulsion orthogonal to the CoM momentum is weakened due to the effectively prolonged distance sound waves have to travel through the medium to get from one emitter to another. Effectively, the grid will see a contraction along this direction. But the formulas describing this contraction will look all too familiar. If we were to apply the weird coordinates produced by acoustic Lorentz transformations to this grid, we would find that these coordinates make the grid in the CoM frame look always the same, regardless of how fast it moves relative to the medium. So if your observer happens to be a bat which uses its ears rather than its eyes to visualize its surroundings, it will find that at least the spatial part of these coordinates agrees quite well with its perception of the world.
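As a quick plausibility check of the contraction factor, here is a minimal numerical sketch of the familiar round-trip argument for a sound medium (all numbers are made up for illustration). The round-trip times of sound between emitters along and across the direction of motion differ by exactly the Lorentz factor with c replaced by the sound speed, and rescaling one separation by \(\sqrt{1 - v^2/c_s^2}\) restores the symmetry:

```python
import numpy as np

def round_trip_times(L, v, c_s):
    """Round-trip sound travel times between two emitters a distance L apart,
    for a pair moving with speed v through a medium with sound speed c_s."""
    t_par = L / (c_s - v) + L / (c_s + v)      # separation along the motion
    t_perp = 2 * L / np.sqrt(c_s**2 - v**2)    # separation orthogonal to it
    return t_par, t_perp

c_s, L, v = 343.0, 1.0, 100.0                  # made-up illustrative numbers
gamma = 1.0 / np.sqrt(1.0 - (v / c_s)**2)

t_par, t_perp = round_trip_times(L, v, c_s)
print(t_par / t_perp)      # = gamma: the two directions disagree
t_par_c, _ = round_trip_times(L / gamma, v, c_s)
print(t_par_c / t_perp)    # = 1: contracting one leg by 1/gamma restores symmetry
```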
-
Quantum vs Classic Probability
Yes, mathematical measure theory. It indeed underlies both QM and probability theory, and the Lebesgue measure is a fundamental tool for both. Kolmogorov's probability theory is in fact mostly a rebranding of measure theory: apart from adding a bit of special terminology, it is really just a specialization of MT to positive finite measures (think: finite volume). Otherwise it is the same thing with a few renamings, given a somewhat different purpose and interpretation. Only going further to stochastic processes, like Markov theory, comes with significant expansions of the framework.

Of course the mathematical meaning of "measure" has nothing to do with the physical concept of measurement. The mathematical concept is built around avoiding Banach-Tarski, because that would break the definition of integrals, while measurement in physics and in QM is an entirely different topic altogether. We cannot avoid using both of these terminologies when discussing probability theory and physics, and this has indeed created some confusion and misunderstandings so far.

Indeed, Kolmogorov built on the works of earlier mathematicians dealing with this subject and gave it a clean, unified axiomatic framework based on measure theory, which had been developed some 30 years before. Building on the latter made it possible to handle problems with infinitely many events, and continuous problems, with a solid toolset. And you really feel it: any university course on probability theory does nothing but measure theory for its first semester.

It is very reasonable to ask for precision, especially in the context of a discussion where two terminologies collide and create ambiguities for some words. I must admit that when I want to respond quickly to a forum post, I often do so too hastily, and my answers may lack precision and thus become open to misunderstandings. I am sorry whenever that happens.
-
Quantum vs Classic Probability
The name "random variable" is really misleading. Measure theory calls the same definition a "measurable function" instead. From Wikipedia: "The term 'random variable' in its mathematical definition refers to neither randomness nor variability[2] but instead is a mathematical function [...]" (https://en.wikipedia.org/wiki/Random_variable) indeed. what makes them strange is their interpretation which attempts to force the concept of interreference into probabilities. classic probability does not have a problem to model a process with interferences but those have to go into the state space and be treated akin to some non-observable underlying physical process. This does not even require any change of the calculus of QM but translate just into a change of terminology and interpretation. The question about amplitudes is why people want to have them treated purely as a probabilistic object if in fact the behave like a function of underlying physical-like and probabilistic aspects. the latter separation allows to model them in classic probability. In fact specific non-linear waves exhibit a lot of the same behavior as the quantum states do. also in recent years many new experiments were able to conduct types of weak measurements which show more and more that there is indeed an underlying physical aspect to wave functions and the resulting amplitudes that cannot be ignored - and challenge what we though is observable.
-
Quantum vs Classic Probability
Random variables are a concept of probability theory and therefore not part of the QM formalism at all. They shouldn't be mixed into it without establishing a clean view of how to model QM via classical probability theory. You seem to have a misconception about what the term means, driven by a very intuitive interpretation which goes quite against the concepts needed in probability theory. Random variables - or measurable functions, as they are called in measure theory - are an abstract definition, not just applied to observable quantities. It is in fact a crucial technicality that a function is compatible with your sigma algebra, because that is what lets you integrate over it. If the wave function were not a random variable, then integrals over it would not be well defined... and in this case for no reason, because technically we know quite well how to define them.
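To make the "random variable = measurable function" point concrete, here is a minimal finite sketch (on a finite space every function is trivially measurable, so the sigma-algebra subtlety doesn't bite; the coin-flip space is of course just an illustration):

```python
from fractions import Fraction

# A finite probability space: outcomes of two coin flips, uniform measure
Omega = ["HH", "HT", "TH", "TT"]
P = {w: Fraction(1, 4) for w in Omega}

# A "random variable" is just a function on Omega; on a finite space
# every function is measurable, and nothing about it is random or variable.
def X(w):
    return w.count("H")   # number of heads

# Its distribution is the pushforward measure: P(X = k) = P({w : X(w) = k})
dist = {}
for w in Omega:
    dist[X(w)] = dist.get(X(w), Fraction(0)) + P[w]
print(dist)   # {2: 1/4, 1: 1/2, 0: 1/4}

# Expectation = integral of X over Omega with respect to P
print(sum(X(w) * P[w] for w in Omega))   # 1
```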
-
Quantum vs Classic Probability
Yeah, obviously. I am not even sure what you thought I was talking about? The original question was: given the wave function of a quantum state, how much arbitrary information does it contain? The idea is to use it as a scatter target and figure that out from the resulting scattering amplitudes of test particles. This should have been clear from my previous posts. Ultimately, it turns out that almost all of the information embedded in the wave function is physically relevant, that is, you cannot drop it.

It's been quite a few years since I completed my particle physics course, but I still remember it quite well. I did make the mistake of not getting help from AI to formulate my posts clearly, and I admit it is a repeated experience that this sometimes leads to misunderstandings when I talk to people. So where did I lose you? Where was my formulation of what I intend to do not clear enough? Maybe I should have stressed more that the electron bound in an atom is the scattering target, rather than the probing particle as it is used in many experiments. That is maybe quite unusual to begin with, so did that lose you? The reason is that probing with electrons wouldn't work here, as it would disturb the scatterer wave function too much for the purpose of repeatedly scattering off the very same target. Hence I specifically wrote that I use a theoretical test particle multitudes lighter and less charged than the electron, because then the theory almost allows one to effectively measure and track the target wave function.
-
Quantum vs Classic Probability
To keep this short, look up the atomic form factor: https://en.wikipedia.org/wiki/Atomic_form_factor - you'll see a rho popping up in that formula. It's similar to what you do in the Hartree-Fock method for many-electron atoms, where you also calculate effective potentials.
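For a concrete instance: the hydrogen 1s ground state has the closed-form form factor \(F(q) = 1/(1 + (q a_0/2)^2)^2\), and a quick numerical check of the defining integral reproduces it. A sketch in atomic units (the grid of q values is arbitrary):

```python
import numpy as np
from scipy.integrate import quad

a0 = 1.0  # Bohr radius in atomic units

def rho(r):
    """Charge density of the 1s state: |psi_1s|^2."""
    return np.exp(-2.0 * r / a0) / (np.pi * a0**3)

def form_factor(q):
    """F(q) = int rho(r) exp(i q.r) d^3r, reduced to a radial integral
    for a spherically symmetric density."""
    integrand = lambda r: 4.0 * np.pi * rho(r) * r * np.sin(q * r) / q
    return quad(integrand, 0.0, np.inf)[0]

for q in (0.5, 1.0, 2.0, 5.0):
    closed_form = 1.0 / (1.0 + (q * a0 / 2.0)**2)**2
    print(f"q={q}: numeric={form_factor(q):.6f}, closed form={closed_form:.6f}")
```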
-
Quantum vs Classic Probability
I haven't mixed them up, I just applied the first-order Born approximation to a target that is not a classic potential but a quantum system, a wave function itself. In that case, this approximation uses an integral over \(\rho(x)V(x)\), where V is the classic point charge potential, to calculate the effective scattering potential. And indeed, here rho is calculated according to the Born rule. But that's the game in scattering theory, and it is needed to get predictions matching experimental results. Normally, physics goes into the high energy regimes of deep inelastic scattering, but it gets quite interesting going the other way, towards the shallow elastic scattering regime. That regime does not disturb the wave function much, making it open to repeated measurements - and that's where the devil comes in.

Sure, this scenario represents a (1-electron, n-test-particles) system where we have n measurements. However, because all n test particles are by assumption perfectly prepared (i.e. we know their exact wave function), all n measurements effectively extract the only unknown information in the system, and that is from the single electron. If you account for the test particle having spin (or being a photon with prepared polarization), then in the second approximation we can extract everything about the target wave function excluding only the global phase factor - at least according to the theory.

Sorry, I tend to skip over many details because I assume we all know the details of quantum mechanics in and out, so that we do not need to go into how all that standard knowledge was derived exactly and can get quicker to the interesting stuff. But I tend to forget that what I say is not exactly trivial. I am getting used to talking to AI, which has all the knowledge at hand to catch my drift and intention directly, without the need for a lot of explaining.
-
Quantum vs Classic Probability
Feel you there. Quite a lot on my mind as well; it reduces the time I can spend on the interesting matters.

This is not relevant for the question of whether we can model given experiments via classic probability theory. Even if you include completely arbitrary information in your state space and model that has no impact on the results, it is just that: surplus info you could drop. It just makes it more tedious to deal with objects full of irrelevant information. However, there is a reason I chose to start off with a rather large state space: QM already implies that almost all of it is irreducible information needed to make correct predictions, so no model can afford to drop it and still get the correct predictions.

Consider some pure state of a bound electron in an H-atom and its wave function. Let's probe this system using scattering experiments with some (hypothetical) idealized test particles that are so light and so marginally charged (compared to the electron) that we can scatter many of them without collapsing or disturbing the target wave function. Scattering theory says that in the first Born approximation the electron will interact as if its charge were physically distributed according to \(\rho = |\psi|^2\). The scattering amplitude in this first-order approximation will be the Fourier transform of that, meaning most of the information contained in the wave function will make a difference for the outcome, especially if we can freely choose the incoming angle and energy of the test particles. Higher-order approximations of this experiment will also give us info about the magnetic moments and so on.

Interestingly, even if the wave function is a superposition of two energy eigenstates, then in the Born approximation its charge distribution is not stationary (unlike for energy eigenstates) but instead oscillates with a known frequency. So if our scattering experiment has some time resolution, we would be able to distinguish such pure states as well. However, thinking classically, such an oscillation would naturally cause an EM emission (with exactly the same wavelength as QM predicts) and a loss of energy, collapsing the state to the next lower stable solution (and only energy eigenstates are stationary and thus classically stable) - that is, even classically one would expect quite the same behavior as QM predicts. Just saying that such a state would be very short-lived and hence difficult to observe.

Through such gedanken-experiments one can boil it down to the conclusion that only the global phase factor of the wave function is truly irrelevant for any prediction; hence almost all of the information contained in the wave function does seem irreducible. This line of thought is a bit of a brute-force expansion of weak measurements, if we had some test particles that could do that. I mean, you could technically do it with very low energy photons, but uff, measuring those will be a challenge. Realistic experiments on this topic are of course https://www.nature.com/articles/nature10120 and https://www.nature.com/articles/nature12539
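The oscillating density of a two-eigenstate superposition is easy to exhibit in a toy model. A sketch with a 1D particle in a box standing in for the bound electron (hbar = m = 1; the box and the equal-weight superposition are arbitrary choices): the cross term makes \(|\psi|^2\) slosh at exactly the Bohr frequency \(\omega = E_2 - E_1\).

```python
import numpy as np

def integrate(f, x):
    """Simple trapezoid rule, to keep the sketch self-contained."""
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))

# 1D particle in a box of width L (hbar = m = 1) standing in for the electron
L = 1.0
x = np.linspace(0.0, L, 2001)
psi = lambda n: np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)
E = lambda n: 0.5 * (n * np.pi / L)**2

a1 = a2 = 1.0 / np.sqrt(2.0)   # equal-weight superposition of n=1 and n=2
omega = E(2) - E(1)            # Bohr frequency of the cross term

def density(t):
    wave = a1 * psi(1) * np.exp(-1j * E(1) * t) + a2 * psi(2) * np.exp(-1j * E(2) * t)
    return np.abs(wave)**2

# The charge centroid <x> sloshes back and forth with period 2*pi/omega:
for t in np.linspace(0.0, 2.0 * np.pi / omega, 5):
    print(f"t={t:.3f}  <x>={integrate(density(t) * x, x):.4f}")
```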
-
The meaning of constancy of the speed of light
Indeed, the spacetime as used in relativity does not admit a variable c, but I did account for that. You are right, though, that there are many traps in even thinking about the issue. Consider something historic like the Maxwell equations and their treatment in both LET and relativity spacetime, which are equivalent. LET uses a Galilean spacetime, hence there are no restrictions on c. Furthermore, there is always one frame where both descriptions yield the identical form for Maxwell - the preferred frame - and this is the frame where we start our considerations from. This is very important for the next part.

I very much have accounted for that. What you overlook is that your approach ends in an unresolvable circular reference that renders you unable to approach the question at all. Any assumption of a variable c requires letting go of the relativistic spacetime (see OP, "A Need for a Counterhypothesis"). But: as with LET (or the sound equation), we know there exists one preferred frame where the equations in LET and SR spacetimes are identical. In this frame we can make the assumption of a variable c and admit that the newly assumed equation is one relative to abstract theoretical clocks and rulers, defined by the assumption that they remain invariant under any changes of c. We do not need to know what these abstract clocks and rulers are, because the very next thing we do is take these equations with a variable c and use them to model what defines our actual clocks and rulers - e.g. the Cs atom. From there we can deduce how the theoretical abstract c-invariant clocks and rulers relate to our SI clocks and rulers. I have mentioned already in my opening post that the assumption of a variable c still leads to c being constant if we stay with the standard clocks and rulers; instead it will manifest as a curvature of spacetime identical to gravity. But now that you mention it, it may be complicated to understand this, as it effectively requires jumping between different spacetime models and the same equations written in those different spacetimes.

You are getting there :). Of course you are right to mention that my proposition does not work in any frame - but I did mention explicitly that this construction requires starting from the preferred frame, i.e. where the medium is at rest. Now, if we have an equation in one frame and need it in another, we can do the corresponding transformation. For the sound equation we would normally do that by Galilean trafos and hence get additional terms for the medium, right? But starting from the base frame we can instead apply a Lorentz trafo and get an equation without a medium - but in different coordinates. So for a frame where the medium is not at rest, we have two equations with two different coordinate sets. We can do a sanity check and calculate whatever physics example to notice that both give identical predictions (if we account for the fact that the Lorentz variant requires us to transform times and lengths calculated in a frame from coordinate units to SI units). With the Lorentz trafo we therefore get the same shape of the sound equation in any frame as in the preferred frame where the medium is at rest. So in the acoustic spacetime, the sound equation maintains its original form in every frame! And suddenly the medium is gone from the equation - instead it has moved into the geometry of spacetime. But we are not just transforming between coordinate systems; the coordinates serve as a first step of the construction.

The big step to special relativity was elevating these coordinates to a new and fundamentally different definition of spacetime. But this idea is not exclusive to light and can be applied mathematically to any other wave. Now, tensors are sensitive to the geometry, and hence a zero tensor may be non-zero in the same frame in a different geometry. Best example: the medium term in a frame is a tensor in LET which is 0 only in the preferred frame and nonzero everywhere else. In SR it is always zero.
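The "medium disappears" claim can be sanity-checked symbolically. A minimal sketch with sympy (1D wave equation; the sample solution is an arbitrary right-mover): a wave written in the "acoustic Lorentz" coordinates built around \(c_s\) satisfies the plain medium-frame wave equation, with no extra medium terms showing up.

```python
import sympy as sp

t, x, v, c = sp.symbols('t x v c_s', real=True, positive=True)
gamma = 1 / sp.sqrt(1 - v**2 / c**2)

# "Acoustic Lorentz" coordinates built around the sound speed c_s
tp = gamma * (t - v * x / c**2)
xp = gamma * (x - v * t)

# A right-moving wave written in the primed coordinates, i.e. a solution
# of the medium-free sound equation in (t', x')
u = sp.sin(xp - c * tp)

# Apply the medium-frame wave operator u_tt - c_s^2 u_xx in (t, x):
wave_op = sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)
print(sp.simplify(wave_op))   # 0: same equation holds, no medium terms appear
```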
-
Quantum vs Classic Probability
Can do. I would do it for a simple qubit so as not to mess with spin operators. It takes a bit of time to write it all down formally with some LaTeX, so not during the week. It does not involve any additional hidden variables that QM does not have itself; it's just a reframing into the classical probability framework. But I think it makes sense to first work out a common understanding of what a random variable is, as this is crucial for the construction.
-
Quantum vs Classic Probability
In order to use X in the context of any random event to calculate probabilities (including conditional probabilities), it must be a random variable. A random variable is merely a measurable function (in the sense of measure theory). You fundamentally need that property; otherwise you produce unmeasurable event sets outside your sigma algebra, which would prevent you from calculating anything. While that technically means that P(X) is defined and you could calculate such probabilities, their interpretation is left open if X is not itself observable. You can consider this to reflect our knowledge about which state X may be in, given the indirect observations we have - something like a Bayesian interpretation rather than an actual measurement. This interpretation of probabilities most often goes along with non-observable random variables.

But you are trying to apply additional restrictions from your interpretation which aren't required. You are trying to force some realist interpretation onto random variables which is not part of their math. Your approach may be understandable from a physical point of view, but within probability theory it doesn't make sense. As a mathematical theory, anything that fits into the axiomatic framework of the theory is valid and may be used; interpretation is an issue left for others to solve. In the model we are talking about, wave functions are merely the states of the hidden process, that is, elements of \(\Omega\). \(S: \psi \rightarrow \psi\) is a random variable on this space.
-
Quantum vs Classic Probability
Have you ever heard of a Hidden Markov Model (HMM)? In a HMM the underlying Markov process is not observable, but we have many observable random variables that depend on the hidden process. The goal of this concept is to learn about the underlying process from the available observations. It would seem that quantum mechanical behavior may be a prime example of it. So no, random variables cannot be considered observable in general. A model may freely specify which are and which aren't.
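A minimal HMM sketch (the transition and emission probabilities below are made-up numbers, purely for illustration): the chain itself never appears in the data, only random variables that depend on it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden Markov chain on states {0, 1} with transition matrix T ...
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
# ... and observable outcomes {0, 1} emitted with probabilities E[state]
E = np.array([[0.8, 0.2],
              [0.3, 0.7]])

state, hidden, observed = 0, [], []
for _ in range(20):
    state = rng.choice(2, p=T[state])            # hidden step, never seen directly
    hidden.append(state)
    observed.append(rng.choice(2, p=E[state]))   # observable random variable

print("hidden:  ", hidden)    # inaccessible to the experimenter
print("observed:", observed)  # all inference must go through these
```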
-
Quantum vs Classic Probability
Right, sorry, I meant that \(\psi \rightarrow \langle \psi | O | \psi \rangle\) is a random variable for an observable \(O\). For \(S\), however, we use \(\psi \rightarrow \psi\) as the random variable.
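In code the distinction is almost trivial - both are plain functions on the state space. A sketch with a qubit and O = Pauli Z (arbitrary choices for illustration):

```python
import numpy as np

# Observable O = Pauli Z (an arbitrary choice for the illustration)
O = np.array([[1, 0],
              [0, -1]], dtype=complex)

def X(psi):
    """The random variable psi -> <psi|O|psi>: a plain function on the state space."""
    return np.real(np.conj(psi) @ O @ psi)

def S(psi):
    """The identity random variable psi -> psi: tracks the state itself."""
    return psi

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
print(X(plus))   # 0.0, the expectation of Z in |+>
print(S(plus))   # the state itself: a valid random variable, yet not measurable
```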
-
Quantum vs Classic Probability
You are partially right about the history, but indeed the devil is in the detail of how that is exactly defined on a mathematical level. It makes a huge difference which random variable we talk about. For example, the momentum observable induces a function from the Hilbert space to \(\mathbb{R}^3\), and in our case it is a valid random variable in this model. We can apply it at every time step of the state's evolution and get the momentum process associated with it. Is it Markovian? No, because indeed its probabilities depend on the history of its previous states. But 3 values are barely enough to characterize a quantum state, hence no surprise there. In fact, no set of observables is able to produce a Markov process.

Now let's look at the identity operator of the Hilbert space. Let's call it \(S\), because it gives us the current quantum state of the system. This is the default random variable for any state space \(\Omega\). For this one we have \[P(S_t = \psi(x,t) | S_{t-1} = \psi(x,t-1), S_{t-2} = \psi(x,t-2)) = P(S_t = \psi(x,t) | S_{t-1} = \psi(x,t-1))\] This is the Markov property for this specific process (the time-discrete variant, for simplicity). It can be proven directly from the Schrödinger equation, which guarantees it by having no dependence on prior states of \(\psi\) other than its time derivative at time \(t\).

Even though we introduce a random variable \(S\), it does not necessarily mean we can measure it. It just means it is an object we are interested in and therefore need a random variable to track. \(P(S_t = \psi(x,t))\) only means we have a theory that can make theoretical predictions about what state a quantum system may be in, reflecting our knowledge of the system. You can make predictions about it, sure, but not measure it. You know that in order to measure \(\psi(x,0)\) you would need to experimentally obtain its value for every \(x\), and that for the single particle this wave function belongs to. If it were measurable - i.e. an observable - QM would require that a linear operator exists that corresponds to its measurement. In the case of a function, you need infinitely many of those to extract the value of the function at each \(x\).
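The Markov property of \(S\) is easy to see numerically: unitary evolution is a deterministic Markov kernel, so the state after n steps is a function of the previous state alone. A small sketch (random 4-level Hamiltonian and an arbitrary time step, both just for illustration):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

# Random 4-level Hamiltonian and one-step propagator U = exp(-i H dt)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2
U = expm(-1j * H * 0.1)

psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)

# Step-by-step evolution: psi_t is a function of psi_{t-1} alone ...
psi = psi0.copy()
for _ in range(10):
    psi = U @ psi

# ... and agrees with the one-jump result from psi0; no history enters anywhere
print(np.allclose(psi, np.linalg.matrix_power(U, 10) @ psi0))   # True
```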
-
Quantum vs Classic Probability
Perhaps something more general: if you have a state space and some deterministic equation fully describing the time evolution of a state, then this alone is sufficient to view it as a stochastic process, which in this special case is deterministic. This is simply the generalization from an equation describing the time evolution of a single state to one which describes the time evolution of a distribution of states (in QM called a mixed state). One example of this would be the Liouville equation in classical Hamiltonian mechanics. The generalization from a single state to a distribution of states takes a bit of additional formalism, but you can always apply it. If the time evolution equation depends only on the state and not its history, then this naturally holds for its stochastic process too, that is, it is Markovian. If we start by focusing on the part of QM which is the deterministic evolution of states only, e.g. the Schrödinger equation, then we can naively apply this approach here as well (let's disregard for now that the quantum state is not itself actually measurable). But that of course is what the von Neumann equation does already. Digging deeper, we can figure out that the former is a transformed way of writing the latter, using some additional simplifying assumptions about the state space and its time evolution. The reason why this yields a Markov process is, again, that neither Schrödinger nor von Neumann need to know anything about a state's history to predict how it will evolve. If you can follow this aspect, we can go into measurement. A sketch of the lifting step follows below.

It may be basic, but I am not a native English speaker, and googling 'point function' turned up that it is a term used for the quantile function. In the case of Bohm-de Broglie theory, there are hidden position and momentum variables which are referred to by that name. Their values are, however, partially revealed by a single measurement. In Kochen-Specker and Bell's theorem it is more generally a quantity referred to by lambda without further specification - anything your calculation of predictions may depend on that is not obviously available information. Their terminology is technically general enough to question whether that involves something like the wave function. The wave function is non-observable, as QM prohibits measuring it directly. You cannot determine it in a single measurement. But what you can do is repeat an experiment with a well-prepared ensemble many times and obtain a distribution of data from which you can reconstruct the wave function. For details about this you can refer to standard literature on QM.
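The "deterministic evolution as a special case of a stochastic process" step can be shown in a few lines. A sketch on a finite toy state space (the map is arbitrary): the deterministic rule becomes a 0/1 Markov transition matrix acting linearly on distributions, exactly the Liouville-style lift described above.

```python
import numpy as np

# A deterministic map on the finite toy state space {0,...,4}
def step(s):
    return (2 * s + 1) % 5

# Lift it to distributions: a 0/1 Markov transition matrix (the Liouville-style move)
n = 5
T = np.zeros((n, n))
for s in range(n):
    T[s, step(s)] = 1.0          # all mass of state s moves deterministically

mu = np.array([0.5, 0.2, 0.1, 0.1, 0.1])   # a distribution over states
mu_next = mu @ T                 # linear, Markovian evolution of the distribution
print(mu_next, mu_next.sum())    # still a probability distribution (sums to 1)
```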
-
Quantum vs Classic Probability
Let's assume a simple quantum space with only two states, \( |0\rangle \) and \( |1\rangle \). Density matrices are then 2x2. However, the Hilbert space itself still has infinitely many elements, even uncountably many, because there are so many possible superpositions. If I naively take the Hilbert space as the state space \(\Omega\) of some classical Markov process, without assuming anything about its structure, then how would my probability vectors look in that scenario? Basically, I have to model it as a Markov chain on a continuous state space - which adds quite a bit of complexity, because my 'probability vector' has uncountably infinite dimension, that is, it is a density function on a continuous space. As you can see, a probability vector in that case requires far more information to be uniquely specified than a density matrix.

But if I am allowed to use the additional assumption of state linearity from quantum mechanics, then I can do the same tricky reduction of complexity and reduce the space of my process to the size of the density matrix. That is, any probability vector is fully specified by only 4 values - which are enough to reproduce all probabilities of this quantum system. However, this reduction comes at a cost: if I want to represent it as a vector, then there exists no linear time evolution for it, unlike for my much higher-dimensional original probability distribution. But QM still found a nice way to calculate it, and that is the von Neumann equation, which allows states to interfere with each other, something probabilities cannot do. I mean, these are all great techniques to reduce these special continuous-space Markov processes to a seemingly discrete space: you swap the overhead of a continuous state space for a much lower-dimensional, non-linear matrix evolution. As cool as that may be, nothing in it conflicts with classic probability theory in any way.
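To see the information reduction concretely, here is a sketch where two very different distributions over the qubit's pure states compress to the same 2x2 density matrix. The distribution over \(\Omega\) holds strictly more information than the density matrix, which keeps only what is needed for observable probabilities:

```python
import numpy as np

def dm(ensemble):
    """Density matrix of an ensemble [(prob, state), ...] of qubit pure states."""
    return sum(p * np.outer(psi, np.conj(psi)) for p, psi in ensemble)

up    = np.array([1, 0], dtype=complex)
down  = np.array([0, 1], dtype=complex)
plus  = (up + down) / np.sqrt(2)
minus = (up - down) / np.sqrt(2)

# Two very different distributions over the (uncountable) space of pure states ...
rho_a = dm([(0.5, up), (0.5, down)])
rho_b = dm([(0.5, plus), (0.5, minus)])

# ... compress to the same 4-entry density matrix: the reduction forgets the
# ensemble but keeps everything needed for observable probabilities.
print(np.allclose(rho_a, rho_b))   # True
print(rho_a)                       # the maximally mixed state I/2
```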
-
Quantum vs Classic Probability
And what do you think the spherical harmonics are? These are basis functions of your quantum mechanical Hilbert space. You use them to expand any given wave function into an infinite series of them (the spherical harmonics consist of infinitely many functions). Note that in general, a hydrogen atom may be in any superposition, not just in an energy eigenstate. In the general case it is hence \( \psi (x) = \Sigma_i a_i \psi_i(x) \), where the \(\psi_i(x)\) are the wave functions of energy eigenstates. A Hilbert space is a linear space and hence has a basis, right? If you drop the assumption that it is linear, you no longer have a basis to work with and must now handle every possible state in that space separately. Note that in probability theory we do not automatically assume any kind of structure for \(\Omega\), so that complicates things.

The question of the Markov property depends on what you consider your underlying state space to be. If you take the entire Hilbert space as your state space, then it contains all necessary information, because all previous history is memorized by the quantum state, or more generally the density matrix. It is entirely irrelevant how the density matrix came to be - it is the only thing determining observable results. Its history is irrelevant, right? Therefore the process specifying the time evolution of the density matrix is Markovian. If you however choose a much smaller state space for your probabilities, like for example a naive, classically inspired description of point particles characterizing them by only 6 values (3 for position, 3 for momentum), these cannot incorporate all information required to produce the resulting probabilities, and hence will require including the information about which previous measurements occurred and in what order. In that case your model is non-Markovian, as you need to know the entire history of your state to reproduce the probabilities it produces.

To prevent a misunderstanding, can you specify what you mean by a point function? Did you mean a quantile function? Note that this only exists for state spaces compatible with numeric operations, because in order to write \(Pr(X \geq x)\), one requires that \(x\) is something for which \(\geq\) means anything. If \(x \in \{\text{red}, \text{green}, \text{blue}\}\), this does not work. Quantiles do not exist in the general case of probability theory; they are usually specific to numeric-valued random variables.
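For concreteness, here is a small sketch of that expansion machinery using scipy's spherical harmonics: build a superposition from two basis functions with arbitrarily chosen coefficients, then recover a coefficient by projection (orthonormality does all the work).

```python
import numpy as np
from scipy.special import sph_harm

# A toy superposition built from two spherical harmonics, with arbitrary
# coefficients; keys are (m, l) following scipy's argument order
coeffs = {(0, 0): 0.8, (1, 1): 0.6}

theta = np.linspace(0.0, 2.0 * np.pi, 400)   # azimuth (scipy convention)
phi = np.linspace(0.0, np.pi, 200)           # polar angle
TH, PH = np.meshgrid(theta, phi)

f = sum(c * sph_harm(m, l, TH, PH) for (m, l), c in coeffs.items())

def project(m, l):
    """Recover c_{lm} = int Y_{lm}^* f dOmega via orthonormality of the basis."""
    Y = sph_harm(m, l, TH, PH)
    dth, dph = theta[1] - theta[0], phi[1] - phi[0]
    return np.sum(np.conj(Y) * f * np.sin(PH)) * dth * dph

print(project(0, 0))   # ~0.8
print(project(1, 1))   # ~0.6
```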
-
Quantum vs Classic Probability
I was trying to make you aware of how much information you actually need to uniquely define a wave function (or any function on a continuous space, really). To define the position of an object you need merely 3 numbers. To uniquely define a function, how much do you need? Let's say your wave is mathematically a nice one and can be written in terms of a Fourier-like series, that is, written as a linear combination of a basis of pure states. This series has a countably infinite number of coefficients. Every distinct combination of coefficients produces a distinct wave function, right? So you really need each coefficient to uniquely specify your function, and each of the coefficients is a variable in your theory. You cannot do QM without this information. But the wave function is not measurable itself. Hence, it and all variables that describe it are technically hidden - but they are only called that (I think) when you interpret them to be real (whatever that actually means).
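A tiny illustration of the "one number per basis function" point (a truncated sine series on [0,1], coefficients chosen arbitrarily): changing a single coefficient already yields a different function, so none of them are redundant.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1000)

def from_coeffs(coeffs):
    """Build a function on [0,1] from sine-series coefficients; the full
    series would need countably many of these numbers."""
    return sum(c * np.sqrt(2.0) * np.sin((n + 1) * np.pi * x)
               for n, c in enumerate(coeffs))

f = from_coeffs([0.5, 0.3, 0.2])
g = from_coeffs([0.5, 0.3, 0.2001])   # perturb a single coefficient ...

print(np.max(np.abs(f - g)))          # ... and the function is already different
```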
-
Quantum vs Classic Probability
Indeed, my topic is about probabilities. The point is that classical probability theory knows no restriction of locality and hence is not in conflict with Bell's theorem on that ground. That is not a restriction for classic probability theory either. In fact, look at the density matrix, which is the object equivalent to a probability distribution in classic probability. Apparently there is a difference, in that it is a matrix while in math it would be a vector. So let's dig deeper to work out what the core difference boils down to. If we take the Hilbert space as our classic probability space, then a probability density \(\mu\) is a measurable function from the Hilbert space to the positive real numbers - while a density matrix is much smaller, as its dimensions reduce to a basis of the Hilbert space rather than the total Hilbert space.

Now, it is an interesting feature of Markov theory that while it is entirely linear, it is perfectly able to model any non-linear process - and does so by blowing up the dimension of the probability vectors. QM, on the other hand, makes the novel assumption that the underlying state space itself is a linear space, whose description we can therefore reduce via a basis of pure states. So we have two linearities: one fundamental to the definition of probability vectors, and another for the states (superpositions). If we introduce this additional restriction/simplification into probability theory, we can then utilize the very same density matrix formalism. Note that this simplification also has to be applied to all the Markov transition matrices - which we can then identify with the very same operators quantum theory uses. So it would seem QM is a special case of probability theory, adding a special assumption about the state space \(\Omega\) with a somewhat novel technique to handle it.
-
Quantum vs Classic Probability
I am not an expert on realism to discuss that, nor is it a topic of my thread. But I am unsure why you would even insist on 3 experiments being able to have unique answers simultaneously? That does not even work in simple real life: I could want to make an experiment measuring how happy I would have been at the age of 30 had I taken job offer A. Another experiment would measure the same with me taking job opportunity B. Obviously, conducting one experiment disqualifies the other from being possible, hence those two cannot have unique answers at the same time (except in a multiverse, I guess) - so measuring one leaves the other at a large uncertainty.

That is an incorrect statement. Of course you can handle quantum amplitudes and interferences. These correlations do not appear in classical physics, but that has nothing to do with probability.
-
Quantum vs Classic Probability
In the CHSH proof you have a hidden variable \(\lambda\) which describes the underlying state. If we drop the locality restriction, we can simply take the quantum state as the hidden variable \(\lambda\) and the quantum state space as our probability space; hence we start with the deterministic distribution \(\mu_0\) concentrated on the state \(\lambda\). In quantum mechanics we would instead write it as the corresponding density operator for the state. Each measurement we model as a decision in a Markov decision process - which means that at each decision the distribution is transformed by a Markov transition matrix that depends on the decision (measurement configuration) taken, i.e. \(\mu_t = \mu_{t-1} P(\theta_a, \theta_b)\). Coincidentally, this is the same thing the quantum mechanical calculus prescribes for the density matrix after each measurement with the corresponding observable operator. Since this merely reframes the calculation from quantum mechanics in the CHSH case into the classical framework, without any change to the calculation or assumptions, all probabilities stay the same and thus the result as well.

Because both Bell and Kochen-Specker deal with them, and they are needed to even understand what these theorems are meant to say. But I used the term more generally, to challenge a simple fact about quantum mechanics: consider the information a wave function stores (i.e. an infinite series of complex numbers determining its value at each point in spacetime). This information is not directly observable, yet it is crucial for quantum mechanics to make the predictions of all observables. You cannot do QM without any such hidden variables. But: in distinction to hidden variable theories, which consider them real physical quantities, QM leaves the interpretation of them open - or makes them into something even more obscure, as in many-worlds interpretations.
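For reference, a short sketch of the quantum side of that calculation: the singlet density matrix plays the role of \(\lambda\), and the textbook angle choices give the Tsirelson value \(2\sqrt{2}\) (measurement directions taken in the x-z plane; all numbers are the standard ones).

```python
import numpy as np

# Pauli matrices and the two-qubit singlet state
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(singlet, np.conj(singlet))   # plays the role of lambda here

def spin(theta):
    """Spin measurement along angle theta in the x-z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

def E(ta, tb):
    """Correlation <A(ta) (x) B(tb)> in the state rho."""
    return np.real(np.trace(rho @ np.kron(spin(ta), spin(tb))))

a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
print(S)   # about -2.828 = -2*sqrt(2): Tsirelson's bound, beyond the classical 2
```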
-
Quantum vs Classic Probability
Indeed, QM does not allow violating CHSH maximally, meaning there is still some restriction on locality, just not a full one. And of course you are right that all time evolution remains fully local (deterministic, even) and it is just the measurement part that causes the big trouble. You are also referring to the no-communication theorem, which again requires making some additional assumptions whose general validity may be questioned in quantum theory; that is however not the topic of this discussion. Probability theory is rooted in neither realism nor locality. It just offers a framework to describe the outcome frequencies of events given the input information; it is very bare-bones. I can write the CHSH experiment in terms of a classic Markov decision process if that is what you want. But it's really the same thing, just in a somewhat different terminology.
-
Quantum vs Classic Probability
The question of measurement perturbation is more a question of interpretation - is a change of a quantum state after measurement a perturbation? If you only have observables with operators that commute, then you cannot construct a Heisenberg uncertainty relation with them, and you have no issue with Kochen-Specker either, because you are in a fully classical, maximally boring scenario. Hidden variables are also a matter of interpretation. The Hilbert state space stores a lot of non-observable information that you could just as well call a hidden variable - and quantum mechanics does not work without it. The term, however, is usually reserved to distinguish quantum mechanics from deterministic hidden variable theories like de Broglie-Bohm theory, where these variables take a more pronounced role.
-
Quantum vs Classic Probability
Nothing is failing. I think you misunderstood my statement. You can look up the proofs of these theorems and see the assumptions they make. Of course this discussion is only for those who know them in detail (or are willing to look them up) and also have sufficient knowledge of probability theory and Markov decision processes to understand and discuss the topic. Indeed, that may be a steep requirement, which is why I like to double-check such discussions with AI first to spot flaws in my logic. Unlike most people, AI is always available, cheap, and has a vast amount of knowledge over all the areas needed to challenge my logic - but of course, just like humans, it is subject to mistakes. Is it against the rules to mention that I double-check things with AI before posting?
-
Quantum vs Classic Probability
Quantum physics introduces a new concept of probability which at first glance differs quite a bit from the original mathematical concept of probability as defined by Kolmogorov. Sadly, it is rarely explained how these frameworks relate to each other and whether there may be something missing in mathematics. For a mathematician it feels even more annoying, as there is no experiment that is able to challenge classic probability with a result it cannot reproduce. It turns out that one can gain great value from discussing such topics with AI and learn a lot from it, if one has deep enough knowledge of the related topics for a proper discussion.

Now, the AI noted the Kochen-Specker theorem and the Bell inequality as the two cases where the fundamental differences become apparent. In either case, a deeper investigation showed that both require additional physical assumptions that are not native to probability theory, and which lead to the corresponding theorems.

For Kochen-Specker, that is non-contextuality - hidden variables are assumed to have preexisting values independent of the measurement context. This assumption breaks down already in classical physics whenever a measurement cannot be done without perturbing the underlying physical system, thereby possibly changing the state of the underlying variables for subsequent measurements. Such behavior of non-commuting operators we can reproduce via a classic Markov decision process, where each measurement is reflected by a decision (including the decision how to set input variables, like the axis along which a detector measures spin) which then can reshuffle all underlying hidden variables / the quantum state / the wave function.

That leaves only Bell's inequality, or for this purpose the more concrete CHSH inequality. Similarly to Kochen-Specker, the underlying hidden variables are subject to a further physical assumption, and in this case it is locality. Of course it is a fair physical assumption that the measurement of one variable far away should not impact another, as that would seemingly imply faster-than-light interaction. However, CHSH experimental results exhibit non-locality, and so does quantum theory. Classic probability theory does not even have a concept of locality to begin with, so when we do not explicitly enforce it, it is easily able to reproduce quantum behavior. In fact, due to the lack of any locality restriction, a classic (in the mathematical sense of probability) Markov decision process can violate the CHSH inequality maximally, with CHSH sum = 4.

After a longer discussion, the AI then concluded that, dropping those two assumptions, everything we see in quantum mechanics can be described (or better, interpreted) by classic probabilities, and doing so does not introduce any assumptions that quantum theory doesn't make itself. The non-locality feature is what distinguishes it notably from all classical theories, but it is just nothing new in terms of probabilities.
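To illustrate the "maximal violation without locality" claim, a toy sketch: a deterministic rule whose outputs may depend on both settings (the PR-box correlation \(a \oplus b = x \wedge y\)) reaches the algebraic maximum CHSH sum of 4. This is of course not a local model; it is exactly what dropping the locality restriction buys.

```python
# A "box" whose outputs may depend on BOTH settings x, y in {0, 1}:
# the deterministic PR-box rule a XOR b = x AND y. Not a local model!
def box(x, y):
    a = 0
    b = a ^ (x & y)     # b "sees" both settings: exactly what locality forbids
    return a, b

def E(x, y):
    a, b = box(x, y)    # deterministic, so a single evaluation gives the mean
    return (-1) ** a * (-1) ** b

S = E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)
print(S)   # 4: the algebraic maximum of the CHSH sum
```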