-
The meaning of constancy of the speed of light
Good. This is, though, just the formal aspect of separating the mathematical treatment from the interpretation. That throws us back to the original question raised in this thread: where do we get the notion of what the space and time of an observer is? We usually assume it to be trivially clear, but in fact it is a tricky, circular problem that I want to discuss.

When we observe some physical process, we get a notion of the passing of time from the rate at which the process changes. In order to be able to measure it, we therefore have to define some physical reference which we can then use to establish time. The definitions underlying the SI units of time and length therefore specify this via the emissions of the Cs atom for one and the propagation of light in vacuum for the other. But note that in the general case there is no mathematical reason why all physical processes have to adhere to the same concept of time.

Just to illustrate this, let's assume an alternative universe where there are two electromagnetic forces that differ only in their propagation speed. Each will be invariant under its own Lorentz transformations built around its own constant c. All physical processes, like atomic emissions, built from one or the other of the two forces will fit into their respective invariance but will disagree when observing processes of the other force. This is because there won't exist a totally Lorentz-invariant physics. This means that each observer can denote two concepts of time: that related to physical processes of one force or the other. Technically there will also exist a preferred frame where both forces take the same shape, and this frame provides a third time an observer can use.

Applying this observation back to acoustic physics, we can construct physical processes which rely solely on acoustic interactions and see how the coordinates produced by the acoustic Lorentz trafo fare at describing such a process in different frames. Consider an infinite grid of tiny flying sound emitters which constantly emit a sine-like sound (a bit like mosquitos). As sound waves carry physical energy and momentum, this interaction of sound with the emitters will cause a constant repulsion which grows stronger the closer two emitters get to each other. This is meant to mimic the repulsion between atoms in a solid. In case the entire grid has a velocity relative to the medium (i.e., relative to its center of mass, CoM), the repulsion orthogonal to the CoM momentum is weakened due to the effectively prolonged distance sound waves have to travel through the medium to get from one emitter to another. Effectively the grid will see a contraction along this direction. But the formulas describing this contraction will look all too familiar.

If we were to apply the weird coordinates produced by the acoustic Lorentz transformations to this grid, we would find that these coordinates make the grid look always the same in its CoM frame, regardless of how fast it moves relative to the medium. So if your observer happens to be a bat, which uses its ears to visualize its surroundings instead of its eyes, it will find that at least the spatial part of these coordinates agrees quite well with its perception of the world.
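To put rough numbers on the travel-time argument, here is a minimal Python sketch (my own illustration, with arbitrary values) of the round-trip times of a sound pulse between two co-moving emitters, for separations parallel and orthogonal to the drift through the medium:

```python
import math

def round_trip_times(L, v, c_s):
    """Round-trip times of a sound pulse between two emitters at distance L,
    both drifting with speed v through a medium with sound speed c_s."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c_s) ** 2)
    # separation along the motion: downstream leg L/(c_s - v), upstream leg L/(c_s + v)
    t_parallel = L / (c_s - v) + L / (c_s + v)         # = (2 L / c_s) * gamma**2
    # separation orthogonal to the motion: each leg is a diagonal path,
    # (c_s t)^2 = L^2 + (v t)^2  =>  t = L / sqrt(c_s^2 - v^2) per leg
    t_orthogonal = 2.0 * L / math.sqrt(c_s**2 - v**2)  # = (2 L / c_s) * gamma
    return t_parallel, t_orthogonal, gamma

L, c_s = 1.0, 343.0  # 1 m separation, speed of sound in air in m/s
for v in (0.0, 0.3 * c_s, 0.9 * c_s):
    t_par, t_ort, g = round_trip_times(L, v, c_s)
    print(f"v = {v/c_s:.1f} c_s: parallel {t_par:.6f} s, orthogonal {t_ort:.6f} s, "
          f"ratio {t_par/t_ort:.4f} (= gamma = {g:.4f})")
```

Both travel times grow with v, the orthogonal one by \(\gamma\) and the parallel one by \(\gamma^2\), so the familiar \(\gamma\) factor of the acoustic Lorentz trafo shows up already in the raw signal delays within the grid. This is just the acoustic analogue of the Michelson-Morley analysis.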
-
Quantum vs Classic Probability
Yes, the mathematical measure theory (MT). It does indeed underlie both QM and probability theory, and the Lebesgue measure is a fundamental tool for both. Kolmogorov's probability theory is in fact mostly a rebranding of MT: apart from adding a bit of special terminology, it is really just a specialization of MT to positive finite measures (think: finite volume). Otherwise it is the same, with a few renamings reflecting a somewhat different purpose and interpretation. Only going further to stochastic processes, like Markov theory, comes with significant expansions of the framework.

Of course the mathematical meaning of 'measure' has nothing to do with the physical concept of measurement: the mathematical concept is built around avoiding the Banach-Tarski paradox, because non-measurable sets would break the definition of integrals, while measurement in physics and in QM is an entirely different topic altogether. We cannot avoid using both of these terminologies when discussing probability theory and physics, and this has indeed created some confusion and misunderstandings so far.

Indeed, Kolmogorov built on the works of earlier mathematicians dealing with this subject and gave it a clean, unified axiomatic framework based on measure theory, which had been developed some 30 years before. Building on the latter made it possible to handle problems with infinitely many events, and continuous problems, with a solid toolset. And you really feel it: any university course on probability theory does nothing but measure theory for its first semester.

It is very reasonable to ask for precision, especially in the context of a discussion where we have two terminologies colliding and creating ambiguities for some words. I must admit that when I want to quickly respond to a forum post, I often do so too hastily, and my answers may lack precision and thus become open to misunderstanding. I am sorry whenever that happens.
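To be precise about the 'rebranding' claim: in symbols, a probability space \((\Omega, \mathcal{F}, P)\) is nothing but a measure space whose measure is normalized,

\[ P: \mathcal{F} \rightarrow [0,1], \qquad P(\Omega) = 1, \qquad P\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} P(A_i) \text{ for pairwise disjoint } A_i \in \mathcal{F}. \]

Everything else (random variables, expectations, conditional probabilities) is measurable functions, Lebesgue integrals and Radon-Nikodym derivatives under new names.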
-
Quantum vs Classic Probability
The name "random variable" is really misleading. Measure theory calls the same definition a "measurable function" instead. From Wikipedia: "The term 'random variable' in its mathematical definition refers to neither randomness nor variability[2] but instead is a mathematical function [...]" (https://en.wikipedia.org/wiki/Random_variable) indeed. what makes them strange is their interpretation which attempts to force the concept of interreference into probabilities. classic probability does not have a problem to model a process with interferences but those have to go into the state space and be treated akin to some non-observable underlying physical process. This does not even require any change of the calculus of QM but translate just into a change of terminology and interpretation. The question about amplitudes is why people want to have them treated purely as a probabilistic object if in fact the behave like a function of underlying physical-like and probabilistic aspects. the latter separation allows to model them in classic probability. In fact specific non-linear waves exhibit a lot of the same behavior as the quantum states do. also in recent years many new experiments were able to conduct types of weak measurements which show more and more that there is indeed an underlying physical aspect to wave functions and the resulting amplitudes that cannot be ignored - and challenge what we though is observable.
-
Quantum vs Classic Probability
Random variables are a concept of probability theory and therefore not part of the QM formalism at all. They shouldn't be mixed into it without establishing a clean view of how you can model QM via classical probability theory. You seem to have a misconception, though, about what the term means, as you seem to be driven by a very intuitive interpretation which goes quite against the concepts needed in probability theory. Random variables, or measurable functions as they are called in measure theory, are an abstract definition, not something applied only to observable quantities. It is in fact a crucial technicality that a function is compatible with your sigma-algebra, which means that you can do integration over it. If the wave function were not a random variable, then integrals over it would not be well defined... and in this case for no reason, because technically we know quite well how to define them.
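For reference, the definition in question: given a probability space \((\Omega, \mathcal{F}, P)\), a function \(X: \Omega \rightarrow \mathbb{R}\) is a random variable (i.e. measurable) iff

\[ X^{-1}(B) \in \mathcal{F} \quad \text{for every Borel set } B \subseteq \mathbb{R}. \]

Nothing in this definition refers to observability; it is exactly the condition needed so that events like \(\{X \in B\}\) have a well-defined probability and integrals of \(X\) make sense.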
-
Quantum vs Classic Probability
Yeah, obviously. I am not even sure what you thought I was talking about? The original question was: given the wave function of a quantum state, how much arbitrary information does it contain? The idea is to use it as a scattering target and figure that out from the resulting scattering amplitudes of test particles. That should have been clear from my previous posts. Ultimately, it turns out that almost all of the information embedded in the wave function is physically relevant, that is, you cannot drop it.

It has been quite a few years since I completed my particle physics course, but I still remember it quite well. I did make the mistake of not getting help from AI to formulate my posts clearly, and I admit it is a repeated experience that this sometimes leads to misunderstandings when I talk to people. So where did I lose you? Where was my formulation of what I intend to do not clear enough? Maybe I should have stressed more that the electron bound in an atom is the scattering target, rather than the probing particle as it is used in many experiments. That is maybe quite unusual to begin with, so did that lose you? The reason is that using electrons for probing wouldn't work here, as it would disturb the target wave function too much for the purpose of repeated scattering off the very same target. Hence I specifically wrote that I use a theoretical test particle multitudes lighter and less charged than the electron, because then the theory almost allows one to effectively measure and track the target wave function.
-
Quantum vs Classic Probability
To keep this short, look up the atomic form factor: https://en.wikipedia.org/wiki/Atomic_form_factor - you'll see a \(\rho\) popping up in that formula. It's similar to what you do in the Hartree-Fock method for many-electron atoms, where you also calculate effective potentials.
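For reference, the formula I mean is (in the notation of that Wikipedia page)

\[ f(\mathbf{q}) = \int \rho(\mathbf{r}) \, e^{i \mathbf{q} \cdot \mathbf{r}} \, \mathrm{d}^3 r, \]

i.e. the form factor is the Fourier transform of a spatial density \(\rho\), which for elastic scattering off an atom is the electronic charge density.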
-
Quantum vs Classic Probability
I haven't mixed them up, I just applied the first-order Born approximation to a target that is not a classical potential, but a quantum system, a wave function itself. In that case, this approximation uses an integral over \(\rho(x)V(x)\), where V is the classical point-charge potential, to calculate the effective scattering potential. And indeed, here \(\rho\) is calculated according to the Born rule. But that's the game in scattering theory and is needed to get predictions matching experimental results. Normally, physics goes into the high-energy regimes of deep inelastic scattering, but it gets quite interesting going the other way as well, towards the shallow elastic scattering regime, which does not disturb the wave function so much and thus leaves it open to repeated measurements. That's where the devil comes in.

Sure, this scenario represents a (1 electron, n test particles) system where we have n measurements. However, because all n test particles are by assumption perfectly prepared (i.e., we know their exact wave function), all n measurements effectively extract the only unknown information in the system, and that is from the single electron. If you account for the test particle having spin (or being a photon with prepared polarization), then in the second-order approximation we can extract everything about the target wave function excluding only the global phase factor. At least according to the theory.

Sorry, I tend to skip over many details because I assume we all know the details of quantum mechanics in and out here, so that we do not need to go into the details of how all that standard knowledge was derived exactly and can get quicker to the interesting stuff. But I tend to forget that what I say is not exactly trivial. I am getting used to talking to AI, which has all the knowledge at hand to catch my drift and intention directly without the need for a lot of explaining.
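Spelled out, the first-order amplitude I mean above is the standard Born formula applied to the effective potential:

\[ f^{(1)}(\mathbf{q}) = -\frac{m}{2\pi\hbar^2} \int e^{i\mathbf{q}\cdot\mathbf{r}} \, V_{\text{eff}}(\mathbf{r}) \, \mathrm{d}^3 r, \qquad V_{\text{eff}}(\mathbf{r}) = \int \rho(\mathbf{r}') \, V(\mathbf{r} - \mathbf{r}') \, \mathrm{d}^3 r', \]

with \(V\) the point-charge Coulomb potential and \(\rho = |\psi|^2\). By the convolution theorem the amplitude factorizes into the point-charge (Rutherford) amplitude times the Fourier transform of \(\rho\), which is how the target wave function enters the measured cross section.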
-
Quantum vs Classic Probability
Feel you there. Quite a lot on my mind as well; it reduces the time I have to spend on the interesting matters.

This is not relevant for the question of whether we can model given experiments via classical probability theory. Even if you include in your state space and model completely arbitrary information that has no impact on the results, it is just that: surplus info you could drop. It just makes it more tedious to deal with objects full of irrelevant information. However, there is a reason I chose to start off with a rather large state space, and that is because QM already implies that almost all of it is irreducible information needed to make correct predictions, so no model can afford to drop it and still get the correct predictions.

Consider some pure state of a bound electron in an H-atom and its wave function. Let's probe this system in scattering experiments with some (hypothetical) idealized test particles that are so light and so marginally charged (compared to the electron) that we can scatter many of them without collapsing or disturbing the target wave function. Scattering theory says that in the first Born approximation the electron will actually interact as if its charge were physically distributed according to \(\rho = |\psi|^2\). The scattering amplitude in this first-order approximation will be the Fourier transform of that, meaning most of the information contained in the wave function will make a difference for the outcome, especially if we can freely choose the incoming angle and energy of the test particles. Higher-order approximations of this experiment will also give us info about the magnetic moments and so on.

Interestingly, even if the wave function is a superposition of two energy eigenstates, in the Born approximation its charge distribution is not stationary (unlike for energy eigenstates) but instead oscillates with a known frequency. So if our scattering experiment has some time resolution, we would be able to distinguish such pure states as well. However, thinking classically, such an oscillation would naturally cause an EM emission (with exactly the same wavelength as QM predicts) and a loss of energy, collapsing the state to the next lower stable solution (and only energy eigenstates are stationary and thus classically stable); that is, even classically one would expect quite the same behavior as QM predicts. Just saying that such a state would be very short-lived and hence difficult to observe.

Through such gedankenexperiments one can boil it down to the point that it is only the global phase factor of the wave function that is truly irrelevant for any prediction; hence almost all of the information contained in the wave function does seem irreducible. This line of thought is a bit of a brute-force expansion of weak measurements, if we had some test particles that could do that. I mean, you could technically do it with very low-energy photons, but uff, measuring those will be a challenge. Realistic experiments on this topic are of course https://www.nature.com/articles/nature10120 or https://www.nature.com/articles/nature12539
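To illustrate the oscillating density numerically, here is a small Python sketch; I substitute a 1D harmonic oscillator for the H-atom (units \(\hbar = m = \omega = 1\)), since the mechanism of the cross term is the same:

```python
import numpy as np

# first two harmonic oscillator eigenstates (hbar = m = omega = 1)
x = np.linspace(-6.0, 6.0, 1201)
dx = x[1] - x[0]
phi0 = np.pi**-0.25 * np.exp(-x**2 / 2)                     # energy E0 = 0.5
phi1 = np.pi**-0.25 * np.sqrt(2.0) * x * np.exp(-x**2 / 2)  # energy E1 = 1.5
E0, E1 = 0.5, 1.5

def density(t):
    """|psi(x,t)|^2 for the equal superposition of the two eigenstates."""
    psi = (phi0 * np.exp(-1j * E0 * t) + phi1 * np.exp(-1j * E1 * t)) / np.sqrt(2.0)
    return np.abs(psi) ** 2

# the charge centroid <x>(t) oscillates at the Bohr frequency (E1 - E0)/hbar = 1
for t in np.linspace(0.0, 2.0 * np.pi, 5):
    mean_x = np.sum(x * density(t)) * dx
    print(f"t = {t:4.2f}  <x> = {mean_x:+.4f}")
```

The cross term \(\phi_0(x)\phi_1(x)\cos((E_1 - E_0)t/\hbar)\) makes the density slosh back and forth at exactly the Bohr frequency of the corresponding emission line, while each eigenstate alone would give a stationary \(\rho\).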
-
The meaning of constancy of the speed of light
Indeed, the spacetime as used in relativity does not admit a variable c, but I did account for that. You are right, though, that there are many traps in even thinking about the issue. Consider something historic like the Maxwell equations and their treatment in both LET and relativity spacetime, which are equivalent. LET uses a Galilean spacetime, hence there are no restrictions on c. Furthermore there is always one frame where both descriptions will yield the identical form for Maxwell, the preferred frame, and this is the frame where we start our considerations from. This is very important for the next part.

I very much have accounted for that. What you overlook is that your approach ends in an unresolvable circular reference that renders you unable to approach the question at all. Any assumption of a variable c requires letting go of the relativistic spacetime (see OP, "A Need for a Counterhypothesis"). But: as with LET (or the sound equation), we know there exists one preferred frame where the equations in LET and SR spacetimes will be identical. In this frame we can make the assumption of a variable c and admit that the newly assumed equation is one relative to abstract theoretical clocks and rulers, defined by the assumption that they remain invariant under any changes of c. We do not need to know what these abstract clocks and rulers are, because the very next thing we do is take these equations with a variable c and use them to model what defines our actual clocks and rulers, e.g. the Cs atom. From there we can deduce how the theoretical abstract c-invariant clocks and rulers relate to our SI clocks and rulers. I have mentioned already in my opening post that the assumption of a variable c still leads to c being constant if we stay with the standard clocks and rulers; instead it will manifest as a curvature of spacetime identical to gravity. But now that you mention it, it may be complicated to understand this, as it effectively requires jumping between different spacetime models and the same equations written in those different spacetimes.

You are getting there :). Of course you are right to mention that my proposition does not work in any frame, but I did mention explicitly that this construction requires starting from the preferred frame, i.e. where the medium is at rest. Now, if we have an equation in one frame and need it in another, we can do the corresponding transformation. For the sound equation we would normally do that by Galilean trafos and hence get additional terms for the medium, right? But starting from the base frame we can now also apply a Lorentz trafo and get an equation without a medium, but in different coordinates. So for a frame where the medium is not at rest we have two equations with two different coordinate sets. We can do a sanity check and calculate whatever physics example to notice that both give identical predictions (if we account for the fact that the Lorentz variant requires us to transform times and lengths calculated in a frame from coordinate units to SI units). With the Lorentz trafo we therefore get the same shape of the sound equation in any frame as in the preferred frame where the medium is at rest. So in the acoustic spacetime, the sound equation maintains its original form in every frame! And suddenly the medium is gone from the equation; instead it moved into the geometry of spacetime. But we are not just transforming between coordinate systems; the coordinates serve as the first step of the construction.
The big step to special relativity was elevating these coordinates to a new and fundamentally different definition of spacetime. But this idea is not exclusive to light and can be applied mathematically to any other wave. Now, tensors are sensitive to the geometry, and hence a zero tensor may be non-zero in the same frame in a different geometry. Best example: the medium term in a frame is a tensor in LET which is zero only in the preferred frame and non-zero everywhere else. In SR it is always zero.
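This form invariance is easy to verify symbolically. A quick sympy sketch (here \(c\) is the sound speed, and \(f\) an arbitrary wave profile):

```python
import sympy as sp

t, x, v = sp.symbols('t x v', real=True)
c = sp.symbols('c', positive=True)
gamma = 1 / sp.sqrt(1 - v**2 / c**2)

# "acoustic Lorentz" coordinates built around the sound speed c
tp = gamma * (t - v * x / c**2)
xp = gamma * (x - v * t)

f = sp.Function('f')
phi = f(tp, xp)   # a solution expressed through the primed coordinates

# the 1D sound wave operator in the medium-rest frame
wave_op = sp.diff(phi, t, 2) - c**2 * sp.diff(phi, x, 2)
print(sp.simplify(wave_op))   # reduces to f_{t't'} - c^2 f_{x'x'}
```

The cross-derivative terms cancel and the prefactor \(\gamma^2(1 - v^2/c^2)\) collapses to 1, so the wave operator keeps exactly its rest-frame form in the primed coordinates; the medium velocity terms that a Galilean trafo would produce are absent.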
-
Quantum vs Classic Probability
Can do. I would do it for a simple qubit, to not mess around with spin operators. It takes a bit of time to write it all down formally with some LaTeX, so not during the week. It does not involve any additional hidden variables that QM does not have itself; it is just a reframing into the classical probability framework. But I think it makes sense to first work out a common understanding of what a random variable is, as this is crucial for the construction.
-
Quantum vs Classic Probability
In order to use X in the context of any random event you want to calculate probabilities for (including conditional probabilities), it must be a random variable. A random variable is merely a measurable function (in the sense of measure theory). You fundamentally need that property, as otherwise you produce unmeasurable event sets outside your sigma-algebra, which would prevent you from calculating anything. While that technically means that P(X) is defined and you could theoretically calculate such probabilities, their interpretation is left open if X is not itself observable. You can consider them to reflect our knowledge about which state X may be in, given the indirect observations we have; that is something like a Bayesian interpretation rather than actual measurement. This interpretation of probabilities most often goes along with non-observable random variables.

But you are trying to apply additional restrictions from your interpretation which aren't required. You are trying to force some realist interpretation onto random variables which is not part of their math. Your approach may be understandable from a physical point of view, but within probability theory it doesn't make sense. As a mathematical theory, anything that fits into the axiomatic framework of the theory is valid and may be used; interpretation is an issue left for others to solve. In the model we are talking about, wave functions are merely the states of the hidden process, that is, elements of \(\Omega\). \(S: \psi \rightarrow \psi\) is a random variable on this space.
-
Quantum vs Classic Probability
Have you ever heard of a Hidden Markov Model (HMM)? In an HMM the underlying Markov process is not observable, but we have many observable random variables that depend on the hidden process. The goal of this concept is to learn about the underlying process from the available observations. It would seem that quantum mechanical behavior may be a prime example of this. So no, random variables cannot be considered observable in general. A model may freely specify which are and which aren't.
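For concreteness, a minimal HMM sketch in Python (the transition and emission matrices are arbitrary toy values): the hidden chain is sampled but never exposed; only observation symbols that depend on it are.

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0.9, 0.1],   # A[i, j] = P(hidden state j at t+1 | hidden state i at t)
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],   # B[i, k] = P(observing symbol k | hidden state i)
              [0.3, 0.7]])

hidden = 0
observations = []
for _ in range(20):
    observations.append(int(rng.choice(2, p=B[hidden])))  # observable random variable
    hidden = int(rng.choice(2, p=A[hidden]))              # hidden, never observed
print(observations)  # all an observer gets; the task is inference about the hidden chain
```

The observations are perfectly valid random variables, yet they only indirectly constrain the hidden process, which is the analogy I have in mind for quantum states.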
-
Quantum vs Classic Probability
Right, sorry, I meant that \(\psi \rightarrow \langle \psi | O | \psi \rangle\) is a random variable for an observable \(O\). For \(S\), however, we use \(\psi \rightarrow \psi\) as the random variable.
-
Quantum vs Classic Probability
You are partially right about the history, but indeed the devil is in the details of how that is exactly defined on a mathematical level. It makes a huge difference which random variable we talk about. For example, the momentum operator gives a function from the Hilbert space to \(\mathbb{R}^3\), and in our case it is a valid random variable in this model. We can apply it at every time step of the state's evolution and get the momentum process associated with it. Is it Markovian? No, because its probabilities indeed depend on the history of its previous values. But 3 values are barely enough to characterize a quantum state, hence no surprise there. In fact, no set of observables is able to produce a Markov process.

Now let's look at the identity operator of the Hilbert space. Let's call it \(S\), because it gives us the current quantum state of the system. This is the default random variable for any state space \(\Omega\). For this one we have \[P(S_t = \psi(x,t) | S_{t-1} = \psi(x,t-1), S_{t-2} = \psi(x,t-2)) = P(S_t = \psi(x,t) | S_{t-1} = \psi(x,t-1))\] This is the Markov property for this specific process (the time-discrete variant, for simplicity). It can be proven directly from the Schrödinger equation, which guarantees it by having no dependence on prior states of \(\psi\) other than its time derivative at time \(t\).

Even though we introduce a random variable \(S\), it does not necessarily mean we can measure it. It just means it is an object we are interested in and therefore need a random variable to track. \(P(S_t = \psi(x,t))\) only means we have a theory that can make theoretical predictions about what state a quantum system may be in, reflecting our knowledge of the system. You can make predictions about it, sure, but not measure it. You know that in order to measure \(\psi(x,0)\) you would need to experimentally obtain its value for every \(x\), and that for the single particle this wave function belongs to. If it were measurable, i.e. an observable, QM would require that a linear operator exists that corresponds to its measurement; in the case of a function, you would need infinitely many such operators to extract the value of the function at each \(x\).
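As a small numerical illustration of the Markov property of \(S\) (a sketch; the Hamiltonian is a random Hermitian matrix and scipy's expm supplies the one-step propagator):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

# random Hermitian Hamiltonian on a 4-dimensional Hilbert space (hbar = 1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2
U = expm(-1j * H * 0.1)   # propagator for one time step dt = 0.1

def step(psi):
    """Transition kernel of S: consumes only the current state, no history."""
    return U @ psi

psi = np.zeros(4, dtype=complex)
psi[0] = 1.0
history = [psi]
for _ in range(10):
    history.append(step(history[-1]))

# restarting from the state at t = 5, with the earlier history discarded,
# reproduces the same future
rerun = history[5]
for _ in range(5):
    rerun = step(rerun)
print(np.allclose(rerun, history[10]))   # True
```

The transition rule \(\psi_{t+1} = U\psi_t\) references only \(\psi_t\), which is the discrete-time statement of the Markov property above; whether \(S\) itself is measurable is a separate matter, as said.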
-
Quantum vs Classic Probability
Perhaps something more general: if you have a state space and some deterministic equation fully describing the time evolution of a state, then this alone is sufficient to view it as a stochastic process, which in this special case is deterministic. This is simply the generalization from an equation describing the time evolution of a single state to one which describes the time evolution of a distribution of states (in QM called a mixed state). One example of this would be the Liouville equation in classical Hamiltonian mechanics. The generalization from a single state to a distribution of states takes a bit of additional formalism, but you can always apply it. If the time evolution equation depends only on the state and not its history, then this naturally holds for its stochastic process too, that is, it is Markovian.

If we start by focusing on the part of QM which is the deterministic evolution of states only, e.g. the Schrödinger equation, then we can naively apply this approach here as well (let's disregard for now that the quantum state is not itself actually measurable). But that of course is what the von Neumann equation does already. Digging deeper, we can figure out that the former is a transformed way to write the latter using some additional simplifying assumptions about the state space and its time evolution. The reason why this yields a Markov process is, again, that neither Schrödinger nor von Neumann need to know anything about a state's history to predict how it will evolve. If you can follow this aspect, we can go into measurement.

It may be basic, but I am not a native English speaker, and googling 'point function' turned up that it is a term used for the quantile function. In the case of de Broglie-Bohm theory, there are hidden position and momentum variables which are referred to by that name; their values are, however, partially revealed by a single measurement. In Kochen-Specker and Bell's theorem it is more generally a quantity referred to as \(\lambda\) without further specification: anything your calculation of predictions may depend on that is not obviously available information. Their terminology is technically general enough to question whether that involves something like the wave function.

The wave function is non-observable, as QM prohibits measuring it directly. You cannot determine it in a single measurement. But what you can do is repeat an experiment with a well-prepared ensemble many times and obtain a distribution of data from which you can reconstruct the wave function. For details on this you can refer to the standard literature on QM.
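For reference, the two evolution equations mentioned here, side by side:

\[ \frac{\partial \rho}{\partial t} = \{H, \rho\} \quad \text{(Liouville, classical)}, \qquad i\hbar \frac{\partial \rho}{\partial t} = [H, \rho] \quad \text{(von Neumann, quantum)}. \]

The von Neumann equation reduces to the Schrödinger equation for pure states \(\rho = |\psi\rangle\langle\psi|\). Both equations are first order in time and depend only on the current \(\rho\), not on its history, which is exactly where the Markov property of the resulting stochastic process comes from.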