Rob McEachern

Senior Members
  • Content count

    44
  • Joined

  • Last visited

Community Reputation

9 Neutral

About Rob McEachern

  • Rank
    Quark

Profile Information

  • Favorite Area of Science
    physics

Recent Profile Visitors

484 profile views
  1. Quantum Entanglement ?

    This is also the explanation underlying Feynman's interpretation of the path integral formulation of quantum mechanics: https://en.wikipedia.org/wi... These problems in quantum theory all arise from attempting to extend the principle of superposition from describing all the components of a single particle (as it was correctly applied in the derivation of Schrödinger's equation for a single free particle) to simultaneously describing all the components of every particle. The latter incorrectly treats the Fourier transforms used to mathematically describe the superposition as placing no constraints whatsoever on particles adhering to any localized trajectories - thereby producing the illusion of a theory that describes "quantum fluctuations" as if they were a real phenomenon (particles spontaneously disappearing from one trajectory and reappearing on another), rather than merely the unconstrained theory's ability to perfectly describe/incorporate every source of non-local noise and every other modeling error. Unfortunately, since this error in applying superposition via Fourier transforms ended up producing the correct probability distribution, physicists, beginning in the 1920s and continuing to the present day, have completely failed to appreciate the error, and we have been stuck with all the "weird" interpretations of quantum theory ever since.
  2. Quantum Entanglement ?

    The problem lies in the difference between the properties of the territory (reality) and the properties of the map (mathematics, specifically Fourier transforms) being used in an attempt to describe the territory. I am of the opinion that all the well-known, seemingly peculiar properties of quantum theory are merely properties of the map, not properties of the territory itself.

    This begins with the assumption that quantum theory describes probabilities. It actually describes the availability of energy capable of being absorbed by a detector; this turns out to be very highly correlated with probability. This happens because of the frequency-shift theorem for Fourier transforms: in the expression for the transform, multiplying the integrand by a complex exponential (evaluated at a single "frequency") is equivalent to shifting (tuning, as in turning a radio dial) the integrand to zero frequency, and the integral then acts like a lowpass filter that removes all the "signal" not near this zero-frequency bin. Thus the complete transform acts like a filterbank, successively tuning to every "bin". Subsequently computing the sum of the squares of the real and imaginary parts, for each bin, then yields the (integrated) energy accumulated within each bin (AKA the power spectrum).

    If this accumulated energy arrives in discrete quanta, all with the same energy per quantum, then the number of quanta accumulated in each bin is simply the total accumulated energy divided by the energy per quantum. In other words, in the equal-quanta case this mathematical description turns out to be identical to the description of a histogram. That is why this description yields only a probability distribution, and why all the experiments are done with monochromatic beams: if there is "white light", there may be no single value for the energy per quantum within a single bin from which to infer the correct particle count from the accumulated energy.

    So quantum theory never even attempts to track the actual motion (trajectory) of anything, particle or wave. It just, literally, describes a set of detectors (a histogram) at some given positions in space and time, accumulating the energy arriving at those positions - energy that enables an exact inference of particle counts (probability density) whenever the energy arrives in equal-energy quanta within each bin. The mathematical description is thus analogous to a police officer attempting to catch a thief who is driving through a community with many roads but only one way out. Rather than attempting to follow the thief along every possible path through the community, the officer simply sits at the only place every path must pass through (a single bin/exit), in order to ensure a detection. If there are multiple such exit points, then multiple bins (detector locations), AKA a histogram, are required to ensure that the probability of detection adds up to unity: every way out must pass through one detector or another.
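
    As a concrete (and purely illustrative) sketch of the filterbank/histogram point above: the Python below computes a power spectrum from a sampled "monochromatic" signal and converts the energy accumulated in each bin into a count of quanta, assuming an arbitrary energy per quantum. None of the numerical values come from any actual experiment.

    ```python
    # Minimal sketch of the "Fourier transform as filterbank -> histogram" analogy.
    # The signal, sample rate, and energy-per-quantum are arbitrary, illustrative choices.
    import numpy as np

    fs = 1000                                   # samples per second (assumed)
    t = np.arange(0, 1.0, 1 / fs)               # one second of data
    signal = 3.0 * np.sin(2 * np.pi * 50 * t)   # a "monochromatic beam" at 50 Hz

    spectrum = np.fft.rfft(signal)              # each bin acts like one tuned filter
    energy_per_bin = np.abs(spectrum) ** 2      # sum of squares of real and imaginary parts

    E_quantum = 1.0                             # pretend each quantum delivers this much energy
    counts = energy_per_bin / E_quantum         # accumulated energy -> inferred quanta per bin

    probability = counts / counts.sum()         # normalizing the "histogram" gives a distribution
    print(np.argmax(probability))               # the 50 Hz bin dominates, as expected
    ```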
  3. Quantum Entanglement ?

    My point is that QM imposes no cutoffs, except in a completely ad hoc manner, because it cannot do so any other way: any physically relevant cutoff would depend on the specifics of the detection process actually used to detect, and thus count, anything.
  4. Quantum Entanglement ?

    The answer lies in the word density. First, density is only proportional to number, so the theory only needs to track the probability density (not the actual number) of particles being detected at particular locations and times - not the trajectories they took to get there, or their actual number.

    Second, and much more interesting, is how this all relates to so-called quantum fluctuations, renormalization, and the error behavior of superpositions of orthogonal functions. Addressing these issues would quickly lead beyond the scope of this thread’s topic, so I will keep this brief, so as not to incur the wrath of any of the moderators - if you wish to pursue the matter further, I would suggest starting a new topic devoted to these issues.

    Briefly, think of a Fourier series being fit to some function, such as the solution to a differential equation. As each term in the series is added, the least-squared error between the series and the curve being fit decreases monotonically, with each added term, until it eventually arrives at zero - a perfect fit. Which means that it will eventually (if you keep adding more terms to the series) fit any and all errors, and not just some idealized model of the “correct answer”. In other words, it will keep adding in globally spanning basis functions that decrease the total error while constantly introducing fluctuating, local errors all over the place. In essence, it treats all errors, both errors in the observed data and errors in any supposed idealized particle model (like a Gaussian function used to specify a pulse that defines a particle’s location), as though they have actual physical significance and must therefore be incorporated into the correct answer.

    Hence, the series forces “quantum fluctuations” to occur, by instantaneously reducing what is being interpreted as the particle number at some points while simultaneously increasing it at others, in order to systematically drive down the total error between the series and the curve being fit; all because the superposition of the orthogonal functions never demands that any particle remain on any trajectory whatsoever in order to reduce the total error. It ends up being much easier (and more likely) for the method to drive the error down by constructing a solution that has supposed particles popping in and out of existence all over the place, in order to rid the solution of any and all non-local errors or noise.
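
    To make the last point concrete, here is a small numerical sketch of my own (the Gaussian pulse, noise level, and term counts are arbitrary): a truncated Fourier series fit to a noisy, localized pulse keeps driving the total squared error down as terms are added, precisely because its globally spanning basis functions are also fitting the noise everywhere on the interval.

    ```python
    # Sketch: a truncated Fourier series fit to a noisy Gaussian "particle" pulse.
    # As more globally-spanning terms are added, the total squared error keeps falling,
    # but only because the series is also fitting the noise far away from the pulse.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-np.pi, np.pi, 512, endpoint=False)
    pulse = np.exp(-0.5 * (x / 0.2) ** 2)                 # idealized localized "particle"
    data = pulse + 0.05 * rng.standard_normal(x.size)     # observed data = pulse + noise

    def fourier_fit(y, n_terms):
        """Least-squares fit of y by the mean plus the first n_terms harmonics."""
        fit = np.full_like(y, y.mean())
        for k in range(1, n_terms + 1):
            c = np.cos(k * x)
            s = np.sin(k * x)
            fit += (y @ c) / (c @ c) * c + (y @ s) / (s @ s) * s
        return fit

    for n in (2, 10, 50, 200):
        err = np.sum((data - fourier_fit(data, n)) ** 2)
        print(f"{n:4d} terms: total squared error = {err:.3f}")
    # The error falls monotonically, yet the high-order fits oscillate ("fluctuate")
    # across the whole interval, where the true model is zero and only noise is present.
    ```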
  5. How can we check a solution to the Traveling Salesman Problem

    Are you familiar with TSPLIB: http://comopt.ifi.uni-heidelberg.de/software/TSPLIB95/
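
    TSPLIB distributes benchmark instances together with their known optimal tour lengths, so checking a candidate solution amounts to (1) verifying that the tour visits every city exactly once and (2) comparing its length, computed with TSPLIB's rounding convention, against the published optimum. A minimal sketch of that check, using a made-up instance and tour rather than an actual TSPLIB file:

    ```python
    # Sketch: checking a candidate tour against a TSPLIB-style EUC_2D instance.
    # The coordinates, tour, and "claimed optimum" are made-up illustrative values.
    import math

    cities = [(0.0, 0.0), (3.0, 0.0), (3.0, 4.0), (0.0, 4.0)]   # hypothetical instance
    tour = [0, 1, 2, 3]                                          # candidate solution
    claimed_optimum = 14                                         # hypothetical published value

    def euc_2d(a, b):
        # TSPLIB's EUC_2D convention rounds each edge length to the nearest integer.
        return int(round(math.dist(a, b)))

    # 1. The tour must be a valid permutation of the cities.
    assert sorted(tour) == list(range(len(cities))), "not a valid tour"

    # 2. Its closed-loop length can then be compared to the known optimum.
    length = sum(euc_2d(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
                 for i in range(len(tour)))
    print(length == claimed_optimum)   # True for this toy instance
    ```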
  6. Quantum Entanglement ?

    We are talking about two different things. I am talking about the number of parameters required to specify the solution to a problem. You are talking about the number required to specify only the equations. The solution also depends upon the auxiliary conditions, like the initial and boundary conditions, in addition to the equations. A non-parametric solution does not require any correspondence between the number of parameters in the solution and the number in the initial conditions - think about the number of parameters in a best-fit line versus the number of *points* being fit.

    The significance of this is that such a theory "loses track" of the number of particles it is supposed to be describing. Thus, it should come as no surprise that it cannot keep track of their detailed trajectories, when it cannot even keep track of their number. The number of particles that it actually does describe ends up being an artifact of the description itself, rather than a property of the entities being described. And that is why particles are excitations of a field, in such a theory. This is the direct consequence of exploiting the mathematical principle of superposition in order to formulate a non-parametric solution. It worked great when Joseph Fourier first developed his technique to describe a temperature *field*, but when physicists in the early twentieth century tried to apply it to the tracking of *particles*, they ended up losing track of them - because that is the inevitable result of choosing to employ a mathematical, descriptive technique that is ill-suited to that purpose.
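
    A tiny sketch of that parameter-count contrast, using made-up data: the best-fit line is pinned down by exactly two numbers no matter how many points are fit, while a discrete Fourier description of the same data needs as many coefficients as there are data points.

    ```python
    # Sketch: parametric vs. non-parametric descriptions of the same made-up data.
    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 100)
    y = 2.0 * x + 1.0 + 0.1 * rng.standard_normal(x.size)   # 100 noisy observations

    # Parametric: the best-fit line is specified by exactly 2 numbers,
    # regardless of how many points were fit.
    slope, intercept = np.polyfit(x, y, deg=1)
    print("line parameters:", 2)

    # Non-parametric: a discrete Fourier representation of the same data needs as many
    # coefficients as there are data points to reproduce it exactly - the parameter
    # count is set by the data, not by any model of what produced the data.
    coeffs = np.fft.fft(y)
    print("Fourier parameters:", coeffs.size)   # 100
    ```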
  7. Convoluted signal into frequency

    I assume the first convolution was a square wave convolved with itself. In the frequency domain, convolution is equivalent to the multiplication of Fourier transforms. That means the operation does not add or subtract any frequencies from the mix; it merely alters the relative amplitudes of frequency components that already exist within the transforms. In your examples, it is suppressing the upper harmonics in the spectrum of the original square wave, and then of the triangle wave. So the fundamental frequency, which is what you are mostly observing in the second convolution, is the same as the fundamental frequency in the original square wave - given by the reciprocal of that wave's period. These wiki entries may help: https://en.wikipedia.org/wiki/Convolution https://en.wikipedia.org/wiki/Triangle_wave
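
    A short sketch of that point, assuming (as above) that the first signal really was a square wave convolved with itself: the convolution's spectrum is just the square wave's spectrum multiplied by itself, so the same odd harmonics appear in both, with the fundamental unchanged and the upper harmonics strongly suppressed.

    ```python
    # Sketch: convolving a square wave with itself re-weights the harmonics it
    # already contains; the fundamental (1/period) is unchanged.
    import numpy as np

    fs = 1000                                     # assumed sample rate, Hz
    t = np.arange(0, 1.0, 1 / fs)
    square = np.sign(np.sin(2 * np.pi * 5 * t))   # 5 Hz square wave (period 0.2 s)

    # Circular convolution of the signal with itself, done via the FFT:
    # convolution in time <=> multiplication of transforms in frequency.
    spec = np.fft.rfft(square)
    conv = np.fft.irfft(spec * spec, n=square.size)

    def strongest(sig, count=3):
        mag = np.abs(np.fft.rfft(sig))
        mag[0] = 0                                # ignore the DC bin
        return np.sort(np.argsort(mag)[-count:]) * fs / sig.size

    print(strongest(square))   # [ 5. 15. 25.] - odd harmonics of the square wave
    print(strongest(conv))     # [ 5. 15. 25.] - same frequencies, upper ones far weaker
    ```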
  8. Quantum Entanglement ?

    QED does not play very well with gravity, and unlike EM, it is non-parametric; nobody uses wavefunctions in classical EM the way they are used in QED.
  9. I do understand. What I am saying is that there are also less complicated objects that act... But a moderator has requested that I stop, so I shall.
  10. You (and many, many others) have assumed that the observer is the entity that has committed the omission. But it is possible that the observed is the cause of the omission, and that there is consequently nothing else there capable of ever being reliably measured. Your assumption is quite reasonable in the classical realm. But that may not be the case in the quantum realm. My point is that all or most of the seeming weirdness of quantum phenomena may be due to that assumption - an assumption whose validity may be completely unmeasurable, or even undetectable, in the quantum realm, due to limited interaction duration, limited bandwidth and limited signal-to-noise ratio (the cause of the omission). In which case it is possible (and, I would argue, even likely in quantum scenarios) that there is only a single bit of information that can ever be reliably extracted from any set of measurements made on such objects.
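
    One way to make that "single bit" possibility concrete (this is an illustration of mine, not a claim drawn from any specific experiment) is via the Shannon-Hartley limit: the information recoverable from an interaction is at most B·T·log2(1 + S/N), and with a sufficiently short duration, narrow bandwidth, and low signal-to-noise ratio, that product comes out to roughly one bit.

    ```python
    # Sketch: Shannon-Hartley capacity of a measurement with limited duration,
    # bandwidth, and signal-to-noise ratio. All numbers are arbitrary, chosen only
    # to show how the recoverable information can shrink to about one bit.
    import math

    bandwidth = 1.0e6     # Hz   (assumed)
    duration = 1.0e-6     # s    (assumed)
    snr = 1.0             # signal-to-noise power ratio (assumed)

    bits = bandwidth * duration * math.log2(1 + snr)
    print(f"recoverable information ~ {bits:.2f} bits")   # 1.00
    ```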
  11. But a spin measurement can. If there are a number of skaters (an ensemble), all spinning one way or the other, and I ask you to give me a one-bit (one number) answer to the following question, you will indeed be able to describe the spin with a single number: If you look down on the skaters from above, do they all appear to be spinning clockwise (1) or anticlockwise (-1)? The number of numbers (components) required to describe spin is critically dependent on whether or not the description pertains to before, or after, an actual observation. One might say that the "collapse of the wavefunction" corresponds to the collapse of the number of numbers required by the description. Now if I asked you how fast they were each spinning, or to determine their average speed, your answer (assuming you could actually, reliably measure their speeds) may require either more numbers, more bits per number, or both.
  12. Quantum Entanglement ?

    One has to be careful. The number of dimensions that are important may be those existing in the logical space being used to describe the physical space, rather than those of the physical space (including time) itself. Consider the situation in numerical analysis of attempting to fit an equation to a curve or to a set of data points. One may choose to fit an equation with a fixed number of parameters (logical dimensions) or a non-fixed number. For example, if you choose to fit a straight line, then there are only two parameters required to describe the "best fit" line. But if you were to choose to fit a Fourier series or transform to the curve, there may be an infinite number of parameters required to specify the "best fit".

    Gravitational theory is a "parametric" theory, in that it requires only a fixed number of parameters to describe behaviors. But quantum theory is a "non-parametric" theory, requiring an infinite number of parameters (Fourier transforms) to describe behaviors. This is one of the reasons why it is so hard to make the two "play together", and it is also why it is so hard to find a common-sense interpretation for quantum theory: a non-parametric theory may be consistent with (sufficient to describe) virtually any behavior, and thus it may not be possible for it to eliminate any hypothetical cause for the behavior - anything goes - and weird interpretations and correlations (entanglement) may result from attempting to associate an incorrect "logical dimensionality" with the correct "physical dimensionality".

    The situation is further complicated by the fact that the physical dimensionality of an emitter may differ from the logical dimensionality of any emissions (observables) produced by the emitter. In other words, simply because an object has three physical, spatial dimensions does not necessitate that its observables must also possess three logical dimensions (three independent parameters).
  13. Creating gravity in space

    Only if observed at a single point. Gravity usually has a gradient. Consequently, gravity at the top and bottom of an elevator car will differ slightly. But the same two points in an idealized, uniformly accelerating frame of reference will not exhibit that difference.
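
    For a sense of scale, here is a quick estimate of that difference, using standard values for the Earth and an assumed 3 m tall elevator car:

    ```python
    # Sketch: difference in gravitational acceleration between the floor and the
    # ceiling of an elevator car sitting at the Earth's surface (3 m height assumed).
    G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
    M = 5.972e24     # mass of the Earth, kg
    R = 6.371e6      # radius of the Earth, m
    h = 3.0          # assumed height of the elevator car, m

    g_floor = G * M / R**2
    g_ceiling = G * M / (R + h)**2
    print(g_floor - g_ceiling)   # ~9e-6 m/s^2: tiny but nonzero, whereas a uniformly
                                 # accelerating frame shows exactly zero difference
    ```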
  14. Quantum Entanglement ?

    Exactly. The same effect can often be produced by more than one cause. An acceleration implies the application of force, and forces applied to things change how they behave. Yes. The last sentence in the article states "These include fundamental questions for our understanding of the universe like the interplay of quantum correlations and dimensionality..." As I have described in other posts, the effect of "quantum correlations" can be produced by misinterpreting measurements made on a 1-dimensional object (a single bit of information) as if they were measurements made on the multiple vector components existing within a 3D object.
  15. What is the difference between science and philosophy?

    Until about the year 1600 there was no difference. Then Galileo, Bacon and others created the difference. Although it is a bit of an over-simplification, you could say that philosophy prior to 1600 was mostly modeled on ancient Greek mathematics: you stated some premises, then derived, via deductive logic, some conclusions based on those premises. So Aristotle stated the premise that the cosmos is perfect, therefore the planets must move in perfect circles (I won't get into how epicycles fit into this idea), perfectly centered on the earth, with a moon that is a perfect sphere, etc. But 2000 years later, Galileo discovered mountains on the moon, disproving that long-standing premise of perfection. That triggered rapid changes.

    Shortly thereafter, Bacon declared that science (he called it Natural History) needed to be based first and foremost on inductive reasoning rather than deduction, because only induction, applied to actual observations, was likely to be able to ascertain the validity of the starting premises, thereby avoiding a repeat of the earlier problems resulting from dubious premises. This became the basis of the "scientific method". Bacon was also the person who first proposed massive "state funding" for this new enterprise - previously, people like Galileo either had to be financially independent or seek financial aid from wealthy patrons, a situation that did not change too much until the mid-nineteenth century. Bacon was also mostly interested in what would be called "applied science" today, rather than basic research. He was interested in finding new ways to cause desirable effects - like finding a new medicine for curing a disease. Bacon was also of the opinion that, starting with Socrates, the Greek philosophers were responsible for a 2000-year delay in philosophical/scientific progress, because unlike the pre-Socratic philosophers, they had convinced subsequent generations of philosophers to focus almost exclusively on moral and social issues rather than Natural Philosophy, now known as science.

    A number of present-day philosophers of science have become concerned that physics, in particular, is reverting to the pre-Bacon model of pulling dubious premises out of thin air, deriving wonderful, elegant (fanciful?) conclusions based on such premises, and having too little concern for experimental validation of those premises.