# Any Anomalies in Bell's Inequality Data?

## Recommended Posts

The second line from your OP was "what was the frequency for Alice and Bob choosing the same test?"

The answer depends entirely on how many different polarity test settings have been decided upon. For example, in the paper cited above, 360 settings were used. So if Alice randomly selects one test setting, the probability that Bob randomly selects the same setting on any one particular test is 1/360 ≈ 0.278%.
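A minimal simulation sketch of that matching probability (the 360 comes from the cited paper; the function name and trial count are mine):

```python
import random

def match_frequency(n_settings=360, trials=200_000, seed=1):
    """Estimate how often Alice and Bob independently pick the
    same one of n_settings equally likely test settings."""
    rng = random.Random(seed)
    hits = sum(rng.randrange(n_settings) == rng.randrange(n_settings)
               for _ in range(trials))
    return hits / trials

# The estimate should hover around 1/360, i.e. roughly 0.278%.
print(match_frequency())
```

With fewer settings the match rate rises accordingly (1/N in general).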

Because it is very time-consuming to test large numbers of different settings, most experiments use a much smaller number of settings, choosing only those at which the difference between the classical and quantum curves is most significant.

A triangular distribution arises from the fact that the autocorrelation ( https://en.wikipedia.org/wiki/Autocorrelation ) of a uniform distribution (the distribution of both the test settings and the object polarizations) is a triangular distribution.
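This is easy to check numerically. A minimal sketch (my own illustration): the difference of two independent uniform random variables has a triangular density peaked at zero, which is the shape described above.

```python
import random

rng = random.Random(0)
n = 200_000
bins = [0] * 20  # histogram of a - b over (-1, 1), bin width 0.1
for _ in range(n):
    d = rng.random() - rng.random()  # difference of two uniforms
    bins[min(int((d + 1) / 0.1), 19)] += 1

density = [c / (n * 0.1) for c in bins]
# The density rises linearly toward ~1.0 near zero difference and
# falls linearly toward ~0 at the extremes: a triangle.
print([round(x, 2) for x in density])
```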

##### Share on other sites
18 hours ago, Rob McEachern said:

A triangular distribution arises from the fact that the autocorrelation ( https://en.wikipedia.org/wiki/Autocorrelation ) of a uniform distribution (the distribution of both the test settings and the object polarizations) is a triangular distribution.

This doesn't make any sense to me. To begin with, autocorrelation is based on similarities between two signals that include a lag time, which clearly does not apply to entanglement.

The binomial distribution involves the distribution of results associated with 50:50 odds, which are the odds that either tester experiences for spin up or spin down from their respective points of view.

##### Share on other sites

Any two statistical averages can be correlated; one does not even need to have anything to do with the other.

E.g., the population of humans vs. the number of grains of sand on a beach.

See the Pearson correlation function (tons of online calcs for this, lol); note, however, that that formula is linear. How this applies to Bell's inequality relates to the observers Rob described above. Observers also include detectors.
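As a concrete sketch (toy numbers of my own), here is Pearson's r computed from scratch; it also shows how two unrelated but monotone sequences can score a strong linear correlation:

```python
import math

def pearson_r(xs, ys):
    """Pearson's r: linear correlation between two equal-length datasets."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two unrelated increasing sequences still correlate strongly:
print(pearson_r([1, 2, 3, 4], [10, 19, 32, 41]))  # close to +1
```

A high r says nothing by itself about causation, which is the point being made above.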

Edited by Mordred

##### Share on other sites
On 7/26/2017 at 11:28 PM, Mordred said:

Any two statistical averages can be correlated; one does not even need to have anything to do with the other.

E.g., the population of humans vs. the number of grains of sand on a beach.

See the Pearson correlation function (tons of online calcs for this, lol); note, however, that that formula is linear. How this applies to Bell's inequality relates to the observers Rob described above. Observers also include detectors.

No, I wasn't thinking of anything when I clicked on the link that Rob had provided. If the link was to an incorrect reference, then I have no control over that.

As for the idea that "any two statistical averages can be correlated": it's a radical point of view which requires some extreme justification, as I'm sure you're aware.

Can statistics be intentionally obfuscated in order to misinterpret data? Yes, absolutely; political parties and big businesses do this all the time under the guise of mathematical evidence.

Nevertheless, that doesn't mean that statistics cannot be validly applied to provide meaningful results. It only proves that math can be obfuscated to provide any kind of result that unqualified people are susceptible to believing.

These are the kind of false results that would become immediately transparent under properly applied logic.

##### Share on other sites
1 hour ago, TakenItSeriously said:

(...) It's the kind of false results which would become immediately transparent under properly applied logic.

One has to distinguish between:

- the angle of the setup (alpha, beta), used in the definition of the quantum sum

- the polarization measure (a, b), used in the quantum correlation function.

This should give you a (properly I hope) applied logic: http://file.scirp.org/pdf/JMP_2015103010590224.pdf

##### Share on other sites
1 hour ago, Exergy said:

One has to distinguish between:

- the angle of the setup (alpha, beta), used in the definition of the quantum sum

- the polarization measure (a, b), used in the quantum correlation function.

This should give you a (properly I hope) applied logic: http://file.scirp.org/pdf/JMP_2015103010590224.pdf

I prefer to refrain from commenting on the math behind QM, as I don't think I could be entirely objective in any critique based on what I've seen. Consider it my waning attempt to refrain from being indelicate, which is beginning to reach its limits.

To that end, I will only add that mathematicians should refrain from using logic-based arguments, IMO. Math and logic are opposites in terms of the kind of evidence that they each provide, and Bell's inequality is a logic-based theory, not a math-based theory.

##### Share on other sites

What makes you feel a logic argument cannot be described by mathematics? Or that mathematics, regardless of whether it's statistical or not, cannot be used to describe a logic argument?

Anyway, back to your previous reply: correlation functions are based upon statistical averages. In Bell's inequalities the average is based upon polarity alignments between two detectors, in particular their statistical averages.

Here is a good arxiv. As far as your mathematical issues are concerned, I always consider those arguments a form of self-limitation. It's far too common to see people refusing to understand something in physics simply because they don't like the nature of the math, or simply the type of math used. Personally, I feel that's their own self-limitation.

Lol, a side note: it's often frustrating to always have to look for the simplest, or most heuristic, explanation when helping others. I come across these sorts of math aversions all the time, lol.

Edited by Mordred

##### Share on other sites
14 hours ago, Mordred said:

What makes you feel a logic argument cannot be described by mathematics? Or that mathematics, regardless of whether it's statistical or not, cannot be used to describe a logic argument?

Anyway, back to your previous reply: correlation functions are based upon statistical averages. In Bell's inequalities the average is based upon polarity alignments between two detectors, in particular their statistical averages.

Here is a good arxiv. As far as your mathematical issues are concerned, I always consider those arguments a form of self-limitation. It's far too common to see people refusing to understand something in physics simply because they don't like the nature of the math, or simply the type of math used. Personally, I feel that's their own self-limitation.

Lol, a side note: it's often frustrating to always have to look for the simplest, or most heuristic, explanation when helping others. I come across these sorts of math aversions all the time, lol.

Quote

What makes you feel a logic argument cannot be described by mathematics? Or that mathematics, regardless of whether it's statistical or not, cannot be used to describe a logic argument?

It's because math and logic may solve the same problem, but to do so they always need to answer different questions.

For example, logic may answer the question of how while math may answer how much.

Math tends to quantify a problem while logic tends to clarify a problem. This is why the logical model is always the model that explains the mechanism behind how something works or describes the chain of cause and effect which math cannot do.

However, math is the model that provides the definitive proofs, while logic can only provide definitive invalidation.

Also, math may be validated through experimental evidence whereas logic usually cannot, which is usually because logic tends to quantify in relative terms, such as greater than or less than, which is how Bell's inequality works.

Also, because they are derived through different means, a mathematical solution and a logical solution may cross-validate each other.

This is why math and logic go hand in hand and act as complementary pairs: they each do completely different jobs, where one cannot replace the other.

Beyond this, logic may go further than math in many respects. So math can be thought of as thinking inside of the box while logic is thinking outside of the box.

For example, all original solutions are found through logic, not math. New math cannot be derived from old math; algebra could not be derived from arithmetic.

All forms of math were originally derived from some logical premise, such as using diminishing rectangles to define the area under a curve, which is how calculus was derived.

Math may only solve problems in a forward fashion, much like deductive logic, with each being definitive in its own way. However, logic may also solve problems that math cannot, such as solving problems abductively or inductively: solving a problem backwards the way a detective solves a murder after the fact, or solving black-box problems, such as reverse-engineering a semiconductor chip, solving a theory across unobservable domains, or solving how a hidden subconscious mechanism works. These are not necessarily definitive, but they end up providing different aspects of a problem that turn out to be the same solution seen from different perspectives.

For example, in high-speed digital electronics, you can think of the return path for a high-speed digital signal over a ground reference plane as taking the path of least impedance, or taking the path of lowest loop inductance, or following the path of greatest capacitance. If you understand each mechanism independently, they each sound like completely different mechanisms.

However, they are all self-consistent with each other and in reality are all based on the same relationships, from loop to field to wave to induced charged-particle oscillations which in turn induce fields, only seen from different perspectives in electromagnetic field theory.

The same thing is true for logical models of relativistic effects, where length contraction is relativistic blueshift in front of a moving ship while time dilation is relativistic redshift behind a moving ship. All relativistic properties can be broken down in the same way, as looking forwards or backwards from a moving frame, or looking at the moving frame from in front or from behind.

I assume the same was true when converging 5 different string theories into a single M Theory.

One final difference is that with math, the deeper one goes, the more complex it becomes; with logic, the deeper one goes, the simpler a problem becomes. This is why math is important at the beginning of a theory, where answering each question seemed to create even more questions through divergence, bringing up all of the necessary questions that needed to be answered.

To finish a theory, or to achieve convergence by finally answering all of the outstanding questions in a self-consistent manner and providing a single self-consistent model of everything, logic begins to dominate over math.

For a theory that includes untestable domains, which more scientists are beginning to believe will be required for a consistent TOE, inductive/abductive logic is required in the end scenario: hidden domains require black-box solutions, loops may require reverse or backwards-looking logic, and math is no longer testable in hidden domains.

Apparently a TOE can be solved, but not proven, using math alone with string theory, but then we have no model of understanding, so string theory only benefits string theory. Applied science through engineering needs logical models to understand, design, and innovate around.

Logic shows how everything is obvious, or at least relatively simple, in hindsight, while nothing is obvious or simple in foresight.

It's why all of my solutions to unsolved problems are really quite simple or sometimes even seem like common sense, yet they went undiscovered for hundreds or thousands of years before I solved them.

However, this does not necessarily mean they were easy logic problems to solve in most cases, and some problems took years or even decades for me to find a simple solution. Often the time it took depended more on what problems I had solved before as opposed to the complexity of the problem itself. I could explain it further, but that's another long topic.

Both Einstein and Hawking believed that a TOE would be a beautiful simple model of everything that anyone could understand.

Edited by TakenItSeriously

##### Share on other sites

I am well aware of the differences between logical vs. mathematical approaches. However, someone who truly wants to understand something will not exclude one or the other but encompass both into his/her understanding.

However, far too often people refuse to accept the math simply because they don't like it or they don't wish to take the time to understand it.

This is a self-limitation, plain and simple. I honestly hope you're not limiting yourself.

Quite frankly it is virtually impossible to understand Bell's inequality properly without understanding the mathematics.

Any heuristic description of what is going on is usually misleading. A prime example is the confusion surrounding locality and non-locality. Any attempt to explain this is doomed to fail without applying math to the descriptions.

here is a prime example.

$p(ab|xy)=\int d\lambda\, q(\lambda)\, p(a|x,\lambda)\, p(b|y,\lambda)$

This expression is what defines locality in Bell's experiment. How does one describe this without the required math? You cannot explain the above equation without referring to the math.

How do you explain that x and y can be freely chosen to be independent of $\lambda$?

Or that $q(\lambda|x,y)=q(\lambda)$?

PS $\lambda$ is an arbitrary variable denoting the joint causal influence upon A and B at the time of entanglement.

Far too often I see attempts to describe locality, but those attempts do not capture the correct definition of locality under Bell's inequality. They tend to treat the particle as local and the background as global. However, this isn't how locality is defined in Bell's experiments; there it is directly related to causality (there is no causal connection if the two are spatially separated). That part stems from action at a distance. If one understands entanglement and the past causal connection to superposition, with regard to the conservation laws when the entangled pair is first created, one realizes no action is involved: the problem becomes simply probabilistic (the very term superposition originates from statistical mechanics). Hence no hidden action is needed.
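As a toy numeric sketch of that factorization (an entirely hypothetical model of my own, not anything from this thread): a shared hidden variable λ is drawn with uniform q(λ), and each side's outcome depends only on its own setting and λ. Measurement independence, q(λ|x,y) = q(λ), is built in because λ is drawn without reference to x and y.

```python
import itertools

def outcome(setting, lam):
    # Deterministic toy rule: the outcome depends only on THIS side's
    # setting and the shared hidden variable lam.
    return 1 if (setting + lam) % 2 == 0 else -1

def joint_prob(a, b, x, y, lams):
    # Locality factorization: p(ab|xy) = sum over lam of
    # q(lam) * p(a|x,lam) * p(b|y,lam), with uniform q(lam).
    q = 1.0 / len(lams)
    return sum(q * (outcome(x, lam) == a) * (outcome(y, lam) == b)
               for lam in lams)

lams = range(4)
for x, y in itertools.product(range(2), repeat=2):
    total = sum(joint_prob(a, b, x, y, lams)
                for a in (1, -1) for b in (1, -1))
    print(x, y, total)  # a proper distribution: sums to 1.0
```

Any model built this way obeys Bell's inequality; the quantum predictions cannot all be reproduced in this form.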

Edited by Mordred

##### Share on other sites
2 hours ago, Mordred said:

I am well aware of the differences between logical vs. mathematical approaches. However, someone who truly wants to understand something will not exclude one or the other but encompass both into his/her understanding.

However, far too often people refuse to accept the math simply because they don't like it or they don't wish to take the time to understand it.

This is a self-limitation, plain and simple. I honestly hope you're not limiting yourself.

Quite frankly it is virtually impossible to understand Bell's inequality properly without understanding the mathematics.

Any heuristic description of what is going on is usually misleading. A prime example is the confusion surrounding locality and non-locality. Any attempt to explain this is doomed to fail without applying math to the descriptions.

here is a prime example.

This expression is what defines locality in Bell's experiment. How does one describe this without the required math? You cannot explain the above equation without referring to the math.

How do you explain that x and y can be freely chosen to be independent of λ?

Or that q(λ|x,y)=q(λ)?

PS: λ is an arbitrary variable denoting the joint causal influence upon A and B at the time of entanglement.

Far too often I see attempts to describe locality, but those attempts do not capture the correct definition of locality under Bell's inequality. They tend to treat the particle as local and the background as global. However, this isn't how locality is defined in Bell's experiments; there it is directly related to causality, not spatial separation. That part stems from action at a distance. If one understands entanglement and the past causal connection to superposition, with regard to the conservation laws when the entangled pair is first created, one realizes no action is involved: the problem becomes simply probabilistic (the very term superposition originates from statistical mechanics).

This is where my thinking tends to become somewhat controversial.

I personally believe that collaborative efforts between specialized individuals with different strengths would far exceed the sum of three individuals who each try to be expert in all roles.

In fact I believe it would improve performance exponentially if given the right combination of specializations.

For physics, I believe it should include a core of three individuals each with different specialties, aside from whatever individuals that commonly make up a research team.

one specialized with knowledge based skills

one specialized with math based skills

one specialized with logic based skills

Each individual would focus more on contributing based on their individual strengths, but in close collaboration with each other they could each benefit by learning the skills of the other two far more effectively, in one-on-one or three-way interaction.

Edited by TakenItSeriously

##### Share on other sites

So where is the problem? If I use a set of formulas developed after decades of research and study am I not utilizing the experience, studies and efforts of a collective body of scientists ?

This is how physics advances through collective efforts; it is why research papers are readily available through sites such as arxiv.

Every single formula and math equation has involved years of research, study, and community-agreed-upon axioms, definitions, syntax, etc.

When you get right down to it, the mathematical arguments require a community effort far more than a logic argument does. Hence why it takes so many years of study just to properly apply the equations and produce a good working mathematical proof.

A logic argument has no such prerequisite of research, community-gained knowledge, etc.; anyone can produce a good logic argument, though knowledge of the topic helps strengthen the argument. It isn't a requirement.

Edited by Mordred

##### Share on other sites
1 hour ago, Mordred said:

So where is the problem? If I use a set of formulas developed after decades of research and study am I not utilizing the experience, studies and efforts of a collective body of scientists ?

This is how physics advances through collective efforts; it is why research papers are readily available through sites such as arxiv.

Every single formula and math equation has involved years of research, study, and community-agreed-upon axioms, definitions, syntax, etc.

When you get right down to it, the mathematical arguments require a community effort far more than a logic argument does. Hence why it takes so many years of study just to properly apply the equations and produce a good working mathematical proof.

A logic argument has no such prerequisite of research, community-gained knowledge, etc.; anyone can produce a good logic argument, though knowledge of the topic helps strengthen the argument. It isn't a requirement.

I never pointed out a problem in this thread; I was only asking if there were anomalous test results, particularly with regard to the percentages between testers: when both testers measured the same spin orientation, the match rate was 25% instead of the intuitive 33% that many would have expected.

I asked because that is the non-intuitive result that I predicted based on a logical conclusion of my own, and I have no access to any of the results myself, since I'm not a physicist or a student, so I can't see if my predictions had any validity to them. But I never did receive an answer to that question, other than an insult and replies that kept changing the subject.

I think that many posted here with a preconceived notion of some kind of agenda on my part, probably based on a previous post I made about a problem with invalid premises of expected results in Bell's inequalities, which I revealed in a thread in the speculations forum, and which I still stand by.

I should also point out that even in the other thread, I never had any problem with the QM predictions that were made, nor did I ever intimate any disagreement with QM predictions on local realism, other than that conclusions should not be based on Bell's inequalities.

If you're asking my personal opinion on whether there is some kind of instantaneous interaction between the two testers, I would say that there does seem to be evidence of such interaction, if the widely reported generalizations of results I've seen are correct about how test results change based on the order and choices of the two testers.

I hope that clarifies things.

BTW, I don't see that as being inconsistent with information not exceeding the SoL, if using the right model to explain it, but that's another topic.

Edited by TakenItSeriously

##### Share on other sites

That does clarify your approach, thanks for that.

21 minutes ago, TakenItSeriously said:

If you're asking my personal opinion on whether there is some kind of instantaneous interaction between the two testers, I would say that there does seem to be evidence of such interaction, if the widely reported generalizations of results I've seen are correct about how test results change based on the order and choices of the two testers.

I hope that clarifies things.

Well, understanding how statistical mechanics is applied to the correlation function in Bell's is useful in seeing how it is applied to the two detectors.

Truthfully, a solid route of study is precisely how correlation functions are defined under statistical mechanics, and then under QM/QFT applications.

The Pearson correlation function I mentioned earlier is useful for understanding how they operate, at least for approximately linear correlations.

Obviously, understanding the above applies to any type of correlation function.

Edited by Mordred

##### Share on other sites
12 hours ago, Mordred said:

That does clarify your approach, thanks for that.

Well, understanding how statistical mechanics is applied to the correlation function in Bell's is useful in seeing how it is applied to the two detectors.

Truthfully, a solid route of study is precisely how correlation functions are defined under statistical mechanics, and then under QM/QFT applications.

The Pearson correlation function I mentioned earlier is useful for understanding how they operate, at least for approximately linear correlations.

Obviously, understanding the above applies to any type of correlation function.

After reading the abstract of the paper you linked to, I can see that there are hidden assumptions that the reader knows what the fundamental contexts are so I need to verify that we are on the same page before going further.

Are we talking about deriving probability distributions as a function?

From which frame of reference are they correlated? A single tester's PoV, or a third-person PoV of both testers? Taking the third-person PoV is a very dangerous exercise.

I think that there are some pitfalls when trying to do this for certain kinds of functions from a third-person PoV, because then discrete values get very confusing, since they can't always be treated as either dependent or independent values.

In fact, you could say that is the reason we should get the 25% value when both testers measure the same spin angle, because the results, in that case, must be completely dependent upon each other.

In the other two cases, the results are partially dependent on each other. However, the strange part is that this is also how classical systems work.

It's why probability theory could never be completely definitive and was considered to be impossibly complex. I assume it's because they could never make sense of the results, but also because it required infinite recursion to get a definitive result without cheating by not calculating them in real time. Besides, computers didn't even exist back then, although I suppose you'd need a quantum computer to fully realize real-time results.

Edited by TakenItSeriously

##### Share on other sites

Post deleted by author as it kept drifting too far off topic.

Edited by TakenItSeriously

##### Share on other sites
17 hours ago, TakenItSeriously said:

Are we talking about deriving probability distributions as a function?

From which frame of reference are they correlated?

Yes, we are talking about probability distributions as a function. The two detectors are your observers, though there are single-detector variations of Bell's, I believe. All observers interact, and all interactions cause interference. This is the measurement problem of the HUP, though the HUP is also a fundamental that exists even when measurements are not being taken.

One can test whether measurements taken at Alice's detector affect measurements taken at Bob's detector by comparing the statistical trends between detectors A and B.

If a correlation exists, then the two data tables will have, in essence, constructive (positive) probability trends. Hence strongly correlated: A has a strong correlation with B.

If the trends are opposite, i.e. as one rises the other decreases, this is still a correlation, but a negative one; more accurately, A has an inverse dependence on B and vice versa.

If the two datasets have no symmetry in their trends, i.e. one trend stays constant while the other changes, then you have no correlation. There is no dependency between A and B.

In Bell's, we are seeing whether these trends exist that could be attributed to a hidden-variable dependency, but one must also look for trends in a time-dependent manner (after all, we want the time dependency).

(The above is a bit of an oversimplification.)

The point being: correlation functions do not identify cause; they identify the trends in datasets, graphs, etc., and test whether one dataset has any dependency on another dataset. Those trend comparisons form your correlation probability functions with polarity angles.

We are also testing whether there is a dependency on the distance between the two detector results, so the time dependency and the angles are of crucial importance. Bell's is a particular application of statistical dependency. The essence of Bell's theorem is that the function P(a|b) has distinctly different dependences on the relative angle between the analyzers for a local hidden variable description and for quantum mechanics.

PS: side note: we already have one dependency, but it is a dependency on a past interaction: the time of entanglement and the conservation of spin when the two entangled particles are first created. (This, however, is a separate correlation function from the hidden-variable correlation function, though the different functions may be related.)
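To make that angle dependence concrete, here is a minimal sketch using the standard textbook forms (my own illustration, not data from any experiment): the quantum singlet correlation, −cos θ, versus the linear angle dependence of the simplest local hidden variable model. The two curves agree at 0, π/2 and π but differ everywhere in between.

```python
import math

def qm_correlation(theta):
    # Quantum prediction for the spin-singlet correlation
    # as a function of the relative analyzer angle theta.
    return -math.cos(theta)

def lhv_correlation(theta):
    # Simplest local-hidden-variable model: linear in the angle.
    return 2.0 * theta / math.pi - 1.0

for deg in (0, 22.5, 45, 67.5, 90):
    t = math.radians(deg)
    print(deg, round(qm_correlation(t), 3), round(lhv_correlation(t), 3))
```

Experiments choose settings where these curves disagree most, as noted earlier in the thread.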

Edited by Mordred

##### Share on other sites
11 hours ago, TakenItSeriously said:

Post deleted by author as it kept drifting too far off topic.

Lol, +1 on that; glanced at it at work.

##### Share on other sites
On 02/08/2017 at 8:41 PM, Mordred said:

here is a prime example.

this expression is what defines locality in Bells experiment.

That's a good one! In fact there seems to be something of a mathematical "anomaly" here. That equation is according to Jaynes an unproven simplification of locality in Bell experiments. According to him, "fundamentally correct" would be (in his notation):

P(AB|abλ) = P(A|Babλ) P(B|abλ)

Now, I'm not 100% sure that he was right, but I suppose that an expert like him would not make a mistake about such a fundamental issue. And as a matter of fact, even Bell admitted that his simple equation is not based on mathematical rigor but instead, it is based on plausible looking assumptions ("It seems reasonable to expect that" - Bertlmann's socks).

As the result led to extraordinary claims, extraordinary evidence is required; a reasonable-seeming expectation does not suffice.
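A tiny numeric sketch of the distinction (toy numbers of my own): the chain-rule form P(AB|c) = P(A|B,c)·P(B|c) is an algebraic identity that holds for any joint distribution, while the Bell-type product P(A|c)·P(B|c) holds only under conditional independence, which is an extra assumption.

```python
# Hypothetical joint distribution over two binary outcomes A, B
# (the conditioning context is implicit and fixed here).
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

p_a = {a: joint[(a, 0)] + joint[(a, 1)] for a in (0, 1)}  # marginal P(A)
p_b = {b: joint[(0, b)] + joint[(1, b)] for b in (0, 1)}  # marginal P(B)

for (a, b), p in sorted(joint.items()):
    chain = (joint[(a, b)] / p_b[b]) * p_b[b]  # P(A|B) P(B): always the joint
    product = p_a[a] * p_b[b]                  # P(A) P(B): extra assumption
    print(a, b, p, round(chain, 3), round(product, 3))
```

Here the chain-rule column reproduces the joint exactly, while the product column does not, because A and B were chosen to be dependent.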

Edited by Tim88
copy paste error due to new system

##### Share on other sites

Tim88 said: "it is based on plausible looking assumptions"

Exactly: the very assumption, pointed out by Bernard d'Espagnat in 1979, that I mentioned in an earlier post.

An assumption that has now been demonstrated to be false: it is not possible to measure independent, uncorrelated attribute values when objects (like coins) are constructed in such a fashion that independent attributes do not even exist.

##### Share on other sites

Ah, but is the debate not simply a bottom-up vs. top-down methodology?

Jaynes's methodology is based on maximum entropy and Bayesian methods; Shannon entropy follows similar methods. Despite all the pop-media hype, I have seen papers, such as this arxiv, showing that the criticisms Jaynes had of Bell amount to a difference in methodology in how to treat the probabilities.

This particular review paper concludes that the two methodologies are not incompatible, but the paper studies the two methodologies, not the hidden-variable aspects.

Here is a decent paper on Bayesian statistics.

It is an ongoing debate, but in all honesty I see far too much hype without detailing the differences in statistical methods.

Edited by Mordred

##### Share on other sites
12 hours ago, Tim88 said:

Now, I'm not 100% sure that he was right, but I suppose that an expert like him would not make a mistake about such a fundamental issue. And as a matter of fact, even Bell admitted that his simple equation is not based on mathematical rigor but instead, it is based on plausible looking assumptions ("It seems reasonable to expect that" - Bertlmann's socks).

This is the primary difference: the statistical approach and the corresponding axioms. Neither method, in my opinion, is conclusive enough to state which is better for this application, so I won't try to sway any opinions but merely supply the details behind the two arguments.

Bayesian or Gaussian: just to inform, here is Gaussian statistics.

Here is how it relates to Gaussian PDFs (probability density functions).

A little hint: Bayesian statistics uses a conditional probability function

$P(A|B)=\frac{P(B|A)P(A)}{P(B)}$

I won't bore everyone detailing the differences, but how the two handle probability and uncertainty literally amounts to different statistical methods. One must understand both methods to get to the root of the debate.
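A minimal worked example of that conditional-probability rule (hypothetical numbers of my own): a test with a 99% true-positive rate and a 5% false-positive rate, for a condition with a 1% prior.

```python
p_a = 0.01                  # prior P(A): condition present
p_b_given_a = 0.99          # P(B|A): test positive given condition
p_b_given_not_a = 0.05      # P(B|not A): false positive rate

# Total probability of a positive test, then Bayes' theorem:
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
p_a_given_b = p_b_given_a * p_a / p_b

print(round(p_a_given_b, 3))  # about 0.167 despite the 99% hit rate
```

The posterior is driven as much by the prior as by the test's accuracy, which is the crux of the Bayesian treatment.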

Edited by Mordred

##### Share on other sites

Here is an example from one of my advanced QM course notes. I honestly can't recall the source book/article, but I had it written down when I was studying the three views of QM.

Let us assume a classical problem (no quantum uncertainty): suppose we are trying to measure the position of a particle and assign a prior probability function.

$p(x)=\frac{1}{\sqrt{2\pi\sigma^2_0}}e^{-(x-x_0)^2/2\sigma^2_0}$

Our measuring device is not perfect; due to noise etc. it can only measure with a resolution $\Delta$.

Thus, if my detector measures position y, I assign the likelihood that the position was x by the Gaussian

$p(y|x)=\frac{1}{\sqrt{2\pi\Delta^2}}e^{-(y-x)^2/2\Delta^2}$

You then use Bayes' theorem to show that, given the new data, you must update your probability assignment of the position to a new Gaussian

$p(x|y)=\frac{1}{\sqrt{2\pi\acute{\sigma}^2}}e^{-(x-\acute{x})^2/2\acute{\sigma}^2}$

where

$\acute{x}=x_0+k_1(y-x_0),\quad \acute{\sigma}^2=k_2\sigma^2_0,\quad k_1=\frac{\sigma^2_0}{\sigma^2_0+\Delta^2},\quad k_2=\frac{\Delta^2}{\sigma^2_0+\Delta^2}$

Now unfortunately I cannot post the drawing so take two axis with P(x) on the vertical, x the horizontal axis. Then draw a sinusoidal hump. Bisect this hump down the center and label it x_0, $\sigma_o$ is the error margin to the left and right of x_0.

we now measure the position and find value y in my detector; the true x may still be different. Given the uncertainty $\Delta$ of my detector with a Gaussian distribution, let the likelihood distribution be

$P(y|x)=\frac{1}{\sqrt{2\pi\Delta^2}}e^{-\frac{(y-x)^2}{2\Delta^2}}$

thus, according to Bayes' rule, the updated probability is $P(x|y)=N\,P(y|x)P(x)$, with the normalization constant given by

$N^{-1}=\int dxP(y|x)P(x)$

in essence we have narrowed the Gaussian distribution (Google "Bayesian learning" for more details) and, in essence, shifted $x_0$ to a new location.
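The update rule above can be sketched numerically. A minimal sketch, assuming the $k_1$ and $k_2$ formulas just derived (the function name and the sample numbers are invented for illustration):

```python
import math

def gaussian_update(x0, sigma0, y, delta):
    """Conjugate Gaussian update: prior N(x0, sigma0^2), one measurement y
    with Gaussian noise of standard deviation delta.  Returns the posterior
    mean and variance via the k1, k2 factors defined in the text."""
    k1 = sigma0**2 / (sigma0**2 + delta**2)  # gain pulling x0 toward y
    k2 = delta**2 / (sigma0**2 + delta**2)   # shrink factor on the variance
    x_new = x0 + k1 * (y - x0)
    var_new = k2 * sigma0**2
    return x_new, var_new

# Invented numbers: broad prior at x0 = 0, one noisy measurement y = 1.
x_new, var_new = gaussian_update(x0=0.0, sigma0=2.0, y=1.0, delta=1.0)
print(x_new, math.sqrt(var_new))  # posterior is narrower than the prior
```

The posterior mean moves from $x_0$ toward the measurement y, and the posterior variance is always smaller than the prior variance, which is the "narrowing" of the Gaussian described above.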

Here are the details on maximum entropy setups.

Now look again at equations 7 and 8 of the first link I gave.

Notice it is using Bayesian notation, with the use of sigmas in reference to Jaynes' treatment?

It is a natural consequence of the methodology used in statistical math to arrive at two different correlation functions if you are using two different Gaussian treatments, i.e. Bell's vs Jaynes' (Bayesian learning with maximum entropy).

Edit: as a side note, Jaynes' book is an excellent study aid in learning the above lol.

"Probability Theory: The Logic of Science" — it's 758 pages of excellent details.

I use his Boolean algebra section on a regular basis for the basic identities: idempotence, commutativity, associativity, distributivity and duality... that's just chapter 1 lol
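Those identities can be checked exhaustively over all truth assignments. A minimal sketch using Python's built-in Booleans as a stand-in for the Boolean algebra (the loop and variable names are my own):

```python
from itertools import product

# Verify the basic Boolean identities named above over every truth
# assignment of A, B, C; `and`/`or`/`not` play the algebra's operations.
checks = []
for A, B, C in product([False, True], repeat=3):
    checks.append((A and A) == A and (A or A) == A)              # idempotence
    checks.append((A and B) == (B and A))                        # commutativity
    checks.append((A and (B and C)) == ((A and B) and C))        # associativity
    checks.append((A and (B or C)) == ((A and B) or (A and C)))  # distributivity
    checks.append((not (A and B)) == ((not A) or (not B)))       # duality (De Morgan)
print(all(checks))  # True
```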

Edited by Mordred

##### Share on other sites

Anyway, I hope this helps in better understanding papers such as this one:

Entropic Dynamics and the Quantum Measurement Problem

##### Share on other sites
16 hours ago, Tim88 said:

That's a good one! In fact there seems to be something of a mathematical "anomaly" here. That equation is according to Jaynes an unproven simplification of locality in Bell experiments. According to him, "fundamentally correct" would be (in his notation):

P(AB|abλ) = P(A|Babλ) P(B|abλ)

Now, I'm not 100% sure that he was right, but I suppose that an expert like him would not make a mistake about such a fundamental issue. And as a matter of fact, even Bell admitted that his simple equation is not based on mathematical rigor but instead, it is based on plausible looking assumptions ("It seems reasonable to expect that" - Bertlmann's socks).

As the result led to extraordinary claims, extraordinary evidence is required. Reasonable seeming expectation does not suffice.

I hope the above helps in understanding this: it isn't a case of a mistake, but rather of which treatment is better suited. Bell used frequentism, which handles the Gaussian distribution in the frequentist manner, rather than Bayesian learning etc.
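Jaynes' point in the quote above can be illustrated numerically: the chain rule P(AB|abλ) = P(A|Babλ) P(B|abλ) is an identity for any joint distribution, whereas Bell's factorized form additionally assumes A and B are independent given the conditioning variables. A minimal sketch with an invented two-outcome joint distribution (conditioning on a, b, λ is suppressed for brevity):

```python
# Invented correlated joint distribution over binary outcomes (A, B),
# chosen so that the independence factorization visibly fails.
pAB = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# Marginals and conditionals derived from the joint.
pA = {a: sum(p for (x, _), p in pAB.items() if x == a) for a in (0, 1)}
pB = {b: sum(p for (_, y), p in pAB.items() if y == b) for b in (0, 1)}
pA_given_B = {(a, b): pAB[(a, b)] / pB[b] for a in (0, 1) for b in (0, 1)}

for a in (0, 1):
    for b in (0, 1):
        chain = pA_given_B[(a, b)] * pB[b]  # P(A|B) P(B): always P(A,B)
        assert abs(chain - pAB[(a, b)]) < 1e-12

# The independence factorization P(A) P(B) need not reproduce the joint:
print(abs(pA[1] * pB[1] - pAB[(1, 1)]))  # nonzero for this distribution
```

The chain-rule check passes for any joint distribution, while the factorized form only matches when the outcomes are conditionally independent — which is precisely the extra physical assumption Bell's derivation makes.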

Another oft-missed term is "stochastic". A good book on stochastic probability theory is

"An Introduction to Stochastic Modeling" by Howard M. Taylor. However, you require calculus.

Lol, just a side note: statistical mechanics is probably one of my weaker subjects, so hopefully I didn't make any mistakes in the above lol. I hated statistics for years till I finally realized I needed to understand it to properly understand QM/QFT.

Little side note: if you really want to understand terms such as determinism, local, and real, and how they are defined, study the two books I recommended. They have specific statistical definitions.

Edited by Mordred

##### Share on other sites

Hi Mordred, thanks for the wealth of references.

I doubt that Bayesian vs Gaussian statistics matters here, as I cited how Bell did not even pretend to give a rigorous proof of his starting equation. Nevertheless, since I have the statistics book of Jaynes [PS I'm well beyond chapter one lol] and still plan to work my way completely through it one day, your references are appreciated and maybe one day I'll contact you with questions about a certain chapter.

Also, I agree that the "particle" approach is probably a dead end - just for the record, as we're drifting away from the topic (sorry TakenItSeriously!).

More on topic, I recall reading somewhere that the results of some well-known Bell-type measurements do not agree well with the predictions of QM, but that this fact was overlooked in the first article on those experiments (sometimes one only sees what one is looking for). I will try to track that down.

Edited by Tim88

## Create an account

Register a new account