Duda Jarek
  1. While the original Bell inequality might leave some hope for violation, here is one which seems completely impossible to violate - for three binary variables A, B, C:

Pr(A=B) + Pr(A=C) + Pr(B=C) >= 1

It has an obvious intuitive proof: tossing three coins, at least two of them must give the same value. Alternatively, choosing any probability distribution pABC among the 2^3 = 8 possibilities, we have Pr(A=B) = p000 + p001 + p110 + p111, and summing the three such terms gives Pr(A=B) + Pr(A=C) + Pr(B=C) = 1 + 2 p000 + 2 p111 >= 1. However, this inequality is violated in QM, see e.g. page 9 here: http://www.theory.caltech.edu/people/preskill/ph229/notes/chap4.pdf

If we want to understand why our physics violates Bell inequalities, the above one seems the best to work on, being the simplest and having an absolutely obvious proof. QM uses the Born rule for this violation:
1) Intuitively: the probability of a union of disjoint events is the sum of their probabilities: pAB? = pAB0 + pAB1, leading to the above inequality.
2) Born rule: the probability of a union of disjoint events is proportional to the square of the sum of their amplitudes: pAB? ~ (psiAB0 + psiAB1)^2.
The Born rule allows this inequality to be violated down to 3/5 < 1 by using psi000 = psi111 = 0 and psi001 = psi010 = psi011 = psi100 = psi101 = psi110 > 0.

We get such a Born rule by considering an ensemble of trajectories: proper statistical physics shouldn't see particles as just points, but rather as their trajectories, e.g. to consider a Boltzmann ensemble - as in Feynman's Euclidean path integrals or their thermodynamical analogue, MERW (Maximal Entropy Random Walk: https://en.wikipedia.org/wiki/Maximal_entropy_random_walk ). For example, looking at the [0,1] infinite potential well, a standard random walk predicts a uniform probability density rho = 1, while QM and a uniform ensemble of trajectories predict a different rho ~ sin^2 with localization, and the square, as in the Born rule, has a clear interpretation.

Is an ensemble of trajectories the proper way to understand the violation of this obvious inequality?
Compared with the local realism of the Bell theorem, a path ensemble has realism and is non-local in the standard "evolving 3D" way of thinking ... however, it is local in the 4D view: spacetime, Einstein's block universe, where particles are their trajectories. What other models with realism allow violating this inequality?
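The 3/5 violation claimed above can be checked numerically. A minimal sketch in plain Python, assuming the Born-rule marginalization from point 2 (pAB? ~ (psiAB0 + psiAB1)^2) and the amplitude choice psi000 = psi111 = 0:

```python
from itertools import product

# Amplitudes from the example above: psi000 = psi111 = 0, the other six equal
psi = {abc: 0.0 if abc in ((0, 0, 0), (1, 1, 1)) else 1.0
       for abc in product((0, 1), repeat=3)}

def pe(i, j):
    """Born-rule probability that variables i and j are equal: sum amplitudes
    over the third (unmeasured) variable, square, normalize over 4 outcomes."""
    rest = ({0, 1, 2} - {i, j}).pop()
    probs = {}
    for vi, vj in product((0, 1), repeat=2):
        abc = [0, 0, 0]
        amp = 0.0
        for k in (0, 1):
            abc[i], abc[j], abc[rest] = vi, vj, k
            amp += psi[tuple(abc)]
        probs[(vi, vj)] = amp ** 2
    return (probs[(0, 0)] + probs[(1, 1)]) / sum(probs.values())

total = pe(0, 1) + pe(0, 2) + pe(1, 2)
print(total)  # ~0.6 < 1: the "obvious" inequality is violated to 3/5
```

Each pairwise term comes out as 1/5; replacing the amplitude sum by a probability sum can never push the total below 1, which is exactly the inequality's proof.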
  2. Sure, it isn't - the fm size is only a suggestion, but the general conclusion here is that the cross section does not offer a sub-femtometer bound for electron size (?) Dehmelt's argument of fitting a parabola to 2 points, chosen so that the third point is 0 for g=2, is a "proof" of a tiny electron radius that assumes the thesis ... and at most criticizes an electron built of 3 smaller fermions. So what experimental evidence bounding the size of the electron do we have?
  3. Sure, so here is Cabibbo's original 1961 electron-positron collision paper: https://journals.aps.org/pr/abstract/10.1103/PhysRev.124.1577 Its formula (10) says sigma ~ \beta/E^2 ... whose extrapolation to a resting electron gives a ~2 fm radius. Indeed it would be great to understand corrections to the potential used in Schrödinger/Dirac, especially for r ~ 0 situations like electron capture (by a nucleus), internal conversion, or positronium. The standard potential V ~ 1/r goes to infinity there; to get a finite electric field we need to deform it on the femtometer scale.
  4. Could you give some number? An article? We can naively interpret the cross-section as the area of the particle, but the question is: the cross-section for which energy should we use for this purpose? Naive extrapolation to a resting electron (not Lorentz contracted) suggests a ~2 fm electron radius this way (which agrees with the size of the deformation of the electric field needed so as not to exceed 511 keV of energy). Could you propose some different extrapolation?
  5. So can you say something about electron size based on electron-positron cross section alone?
  6. No matter the interpretation, if we want bounds on the size of the electron, they shouldn't be calculated for a Lorentz-contracted electron, but for a resting electron - extrapolate the above plot to gamma=1 ... or take direct values: So what bound on the size of the (resting) electron can you calculate from cross-sections of electron-positron scattering?
  7. There is some confidence that the electron is a perfect point, e.g. to simplify QFT calculations. However, searching for experimental evidence (stack), the Wikipedia article only points to an argument based on the g-factor being close to 2: Dehmelt's 1988 paper extrapolating from proton and triton behavior that the RMS (root mean square) radius for particles composed of 3 fermions should be ~ g-2. Using more than two points for fitting this parabola, it wouldn't look so great; e.g. the neutron (udd) has g ~ -3.8, \(<r^2_n>\approx -0.1 fm^2 \).

And while classically the g-factor is said to be 1 for a rotating object, that assumes equal mass and charge density. Generally we can classically get any g-factor by modifying the charge-mass distribution: \[ g=\frac{2m}{q} \frac{\mu}{L}=\frac{2m}{q} \frac{\int AdI}{\omega I}=\frac{2m}{q} \frac{\int \pi r^2 \rho_q(r)\frac{\omega}{2\pi} dr}{\omega I}= \frac{m}{q}\frac{\int \rho_q(r) r^2 dr}{\int \rho_m(r) r^2 dr} \]

Another argument for the point nature of the electron is its tiny cross-section, so let's look at it for electron-positron collisions: besides some bumps corresponding to resonances, we see a linear trend in this log-log plot: 1 nb at 10 GeV (5 GeV per lepton), 100 nb at 1 GeV. The 1 GeV case means \(\gamma\approx1000\), which also appears in the Lorentz contraction: geometrically it means a gamma-fold reduction of size, hence a \(\gamma^2\)-fold reduction of cross-section - exactly as in this line on the log-log plot. A more proper explanation is that this is for a collision - transforming to the frame of reference where one particle rests, we get \(\gamma \to \approx \gamma^2\). This asymptotic \(\sigma \propto 1/E^2\) behavior in colliders is well known (e.g. (10) here); wanting the size of a resting electron, we need to take it from GeVs down to E = 511 keV. Extrapolating this line (no resonances) to a resting electron (\(\gamma=1\)), we get 100 mb, corresponding to a ~2 fm radius.
From the other side, we know that two EM photons carrying 2 x 511 keV of energy can create an electron-positron pair, hence energy conservation doesn't allow the electric field of the electron to exceed 511 keV of energy, which requires some deformation of it on the femtometer scale from \(E\propto1/r^2 \): \[ \int_{1.4\,fm}^\infty \frac{\epsilon_0}{2} |E|^2\, 4\pi r^2\, dr\approx 511\,keV \] Could anybody elaborate on concluding an upper bound for the electron radius from the g-factor itself, or point to a different experimental bound? Does it forbid an electron parton structure: being "composed of three smaller fermions" as Dehmelt writes? Does it also forbid some deformation/regularization of the electric field to a finite energy?
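Both femtometer-scale numbers above follow from a few lines of arithmetic. A rough sketch, with assumed constants hbar*c = 197.327 MeV*fm and alpha = 1/137.036 (and 1 barn = 100 fm^2):

```python
import math

# 1) Geometric radius from the extrapolated ~100 mb cross-section,
#    reading sigma = pi r^2; 100 mb = 0.1 b = 10 fm^2
sigma_fm2 = 10.0
r_sigma = math.sqrt(sigma_fm2 / math.pi)      # ~1.8 fm

# 2) Cutoff r0 below which the Coulomb field must be deformed so that
#    the field energy outside r0 stays below m_e c^2 = 511 keV:
#    integral_{r0}^inf (eps0/2)|E|^2 4 pi r^2 dr = alpha hbar c / (2 r0)
hbar_c = 197.327          # MeV * fm
alpha = 1 / 137.036
m_e = 0.511               # MeV
r0 = alpha * hbar_c / (2 * m_e)               # ~1.4 fm

print(r_sigma, r0)
```

Both estimates land on the femtometer scale, consistent with the ~2 fm and 1.4 fm figures quoted above.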
  8. Four-dimensional understanding of quantum computers

    Thanks, I would gladly discuss. The main part of the paper is MERW ( https://en.wikipedia.org/wiki/Maximal_Entropy_Random_Walk ), showing why standard diffusion has failed (e.g. predicting that a semiconductor is a conductor): it has used only an approximation of the (Jaynes) principle of maximum entropy (required by statistical physics), and if we use the real entropy maximum (MERW), the discrepancy disappears - e.g. its stationary probability distribution is exactly as in the quantum ground state.

In fact MERW turns out to just assume a uniform or Boltzmann distribution among possible paths - exactly as in Feynman's Euclidean path integrals (there are some differences), hence the agreement with quantum predictions is no surprise (while MERW is still just a (repaired) diffusion). This includes the Born rule: probabilities being (normalized) squares of amplitudes - an amplitude describes the probability distribution at the end of half-paths toward past or future in the Boltzmann ensemble among paths; to randomly get some value at a given moment we need to "draw it" from both time directions - hence the probability is the square of the amplitude. This leads to violation of Bell inequalities: at the top below there is a derivation of a simple Bell inequality (true for any probability distribution among the 8 possibilities for 3 binary variables ABC), and at the bottom an example of its violation assuming the Born rule.
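The infinite-well comparison mentioned above is easy to reproduce numerically. A minimal numpy sketch, assuming the standard MERW construction (stationary density proportional to the square of the dominant eigenvector of the adjacency matrix):

```python
import numpy as np

n = 50                                # 1D lattice: a discrete [0, 1] well
A = np.zeros((n, n))
for i in range(n - 1):                # nearest-neighbor adjacency
    A[i, i + 1] = A[i + 1, i] = 1.0

# Generic random walk: stationary density proportional to node degree,
# i.e. essentially uniform (rho = 1 up to edge effects)
deg = A.sum(axis=1)
rho_grw = deg / deg.sum()

# MERW (uniform ensemble of paths): stationary density is psi^2 for the
# dominant eigenvector psi of A - the quantum-ground-state ~ sin^2 profile
w, v = np.linalg.eigh(A)
psi = np.abs(v[:, np.argmax(w)])
rho_merw = psi ** 2 / (psi ** 2).sum()

print(rho_merw[n // 2] / rho_merw[0])  # strong localization in the middle
```

The MERW density is peaked in the middle of the well like sin^2, while the generic walk stays nearly flat - the discrepancy between standard diffusion and the quantum ground state discussed above.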
  9. Four-dimensional understanding of quantum computers

    After 7 years I have finally written it down: https://arxiv.org/pdf/0910.2724v2.pdf Also other consequences of living in spacetime, like violation of Bell inequalities.

Schematic diagram of the quantum subroutine of Shor's algorithm for finding prime factors of a natural number N: for a random natural number y < N, it searches for the period r of f(a) = y^a mod N; this period can be concluded from the measurement of the value c after the Quantum Fourier Transform (QFT), and with some large probability (O(1)) allows finding a nontrivial factor of N. The Hadamard gates produce a state being a superposition of all possible values of a. Then the classical function f(a) is applied, getting a superposition of |a>|f(a)>. Due to the necessary reversibility of the applied operations, this calculation of f(a) requires the use of auxiliary qubits, initially prepared as |0>. Now measuring the value of f(a) returns some random value m, and restricts the original superposition to only those a fulfilling f(a) = m. Mathematics ensures that the set {a : f(a) = m} has to be periodic here (y^r \equiv 1 mod N); this period r is concluded from the value of the Fourier transform (QFT).

Seeing the above process as a situation in 4D spacetime, qubits become trajectories: state preparation mounts their values (chosen) in the past direction, measurement mounts their values (random) in the future direction. The superiority of this quantum subroutine comes from future-past propagation of information (tension) by restricting the original ensemble in the first measurement.
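The arithmetic of this subroutine can be illustrated classically - the quantum advantage lies only in how the period r is found. A toy sketch for N = 15, with a brute-force period search standing in for the QFT step:

```python
from math import gcd

def period(y, N):
    """Smallest r > 0 with y^r = 1 (mod N); found here by brute force,
    which is exactly the step the quantum subroutine replaces."""
    r, x = 1, y % N
    while x != 1:
        x = (x * y) % N
        r += 1
    return r

N, y = 15, 7                       # toy example; y coprime with N
r = period(y, N)                   # r = 4: 7^4 = 2401 = 1 (mod 15)
assert r % 2 == 0                  # an even period is needed below
p = gcd(pow(y, r // 2) - 1, N)     # gcd(48, 15) = 3
q = gcd(pow(y, r // 2) + 1, N)     # gcd(50, 15) = 5
print(N, "=", p, "*", q)
```

Since y^r - 1 = (y^(r/2) - 1)(y^(r/2) + 1) is divisible by N, the two gcds pick out the nontrivial factors.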
  10. Immunity by incompatibility – hope in chiral life

    A decade has passed, moving this topic from SF to synthetic life: https://en.wikipedia.org/wiki/Chiral_life_concept One of the most difficult tasks seemed to be synthesizing working proteins ... and last year a Chinese group synthesized a mirror polymerase: Nature News 2016: "Mirror-image enzyme copies looking-glass DNA: Synthetic polymerase is a small step along the way to mirrored life forms", http://www.nature.com/news/mirror-image-enzyme-copies-looking-glass-dna-1.19918

There are also lots of direct economic motivations to continue toward synthesizing mirror bacteria, like mass production of mirror proteins (e.g. aptamers) or L-glucose (a perfect sweetener). So in another decade we might find out that a colony of mirror bacteria is already living e.g. in some lab in China ... taking us closer to a possibility nicely expressed in the title of a WIRED 2010 article: "Mirror-image cells could transform science - or kill us all" ( https://www.wired.com/2010/11/ff_mirrorlife/ ), which estimates that it would take mirror cyanobacteria (photosynthesizing) a few centuries to dominate our planet ... eradicating our life ...
  11. There is a recent article in a good journal (Optica, July 2015) showing violation of Bell inequalities for classical fields: "Shifting the quantum-classical boundary: theory and experiment for statistically classical optical fields" https://www.osapublishing.org/optica/abstract.cfm?URI=optica-2-7-611 Hence, while Bell inequalities are fulfilled in classical mechanics, they are violated not only in QM but also in classical field theories - so asking for field configurations of particles (soliton particle models) makes sense. The violation is obtained by superposition/entanglement of the electric field in two directions ... analogously we can see a crystal through classical oscillations, or equivalently through a superposition of their normal modes: phonons, described by quantum mechanics, violating Bell inequalities.
  12. Regarding Bell - we know that nature violates his inequalities, so we need to find an erroneous assumption in his way of thinking. Let's look at a simple proof from http://www.johnboccio.com/research/quantum/notes/paper.pdf

So let us assume that there are 3 binary hidden variables describing our system: A, B, C. The total probability of being in one of these 8 possibilities is 1:
Pr(000)+Pr(001)+Pr(010)+Pr(011)+Pr(100)+Pr(101)+Pr(110)+Pr(111) = 1
Denote by Pe the probability that two given variables have equal values:
Pe(A,B) = Pr(000) + Pr(001) + Pr(110) + Pr(111)
Pe(A,C) = Pr(000) + Pr(010) + Pr(101) + Pr(111)
Pe(B,C) = Pr(000) + Pr(100) + Pr(011) + Pr(111)
Summing these 3 we get the Bell inequality:
Pe(A,B) + Pe(A,C) + Pe(B,C) = 1 + 2 Pr(000) + 2 Pr(111) >= 1

Now denote by A, B, C the outcomes of measurement in 3 directions (differing by 120 deg). Taking two identical (entangled) particles and asking about the frequencies of their ABC outcomes, we can get
Pe(A,B) + Pe(A,C) + Pe(B,C) < 1
which agrees with experiment ... so something is wrong with the above line of thinking.

The problem is that we cannot think of particles as having fixed binary ABC values describing the direction of spin. We can ask about these values independently by using measurements - which are extremely complex phenomena like Stern-Gerlach. Such a measurement doesn't just return a fixed internal variable. Instead, in every measurement this variable is chosen at random - and this process changes the state of the system. Here is a schematic picture of Bell's misconception:

The squares leading to violation of Bell inequalities come e.g. from the completely classical Malus law: the polarizer reduces the electric field like cos(theta), and light intensity is E^2: cos^2(theta).
http://www.physics.utoronto.ca/~phy225h/experiments/polarization-of-light/polar.pdf To summarize, as I have sketched a proof, the following statement is true: (*): "Assuming the system has some 3 fixed binary descriptors (A, B, C), the frequencies of their occurrences fulfill the (Bell) inequality Pe(A,B) + Pe(A,C) + Pe(B,C) >= 1." Bell's misconception was applying it to the situation with spins: assuming that the internal state uniquely defines a few applied binary values. In contrast, this is a probabilistic translation (measurement), and it changes the system. Besides the probabilistic nature, when asking about all 3, their values would depend on the order of questioning - ABC are definitely not fixed in the initial system, which is required to apply (*).
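The 120-degree numbers above can be checked directly. A minimal sketch assuming the squared-cosine (Malus-type) rule Pe = cos^2(delta) for the probability that two outcomes agree:

```python
import math

angles = [0.0, 120.0, 240.0]       # three measurement directions, degrees

def pe(a1, a2):
    # squared-cosine (Malus-type) probability that the two outcomes agree
    return math.cos(math.radians(a1 - a2)) ** 2

total = (pe(angles[0], angles[1])
         + pe(angles[0], angles[2])
         + pe(angles[1], angles[2]))
print(total)   # ~0.75: each pair agrees with probability 1/4, so 3/4 < 1
```

Any assignment of fixed binary ABC values forces the sum to be at least 1, as in the derivation above; the square of the cosine brings it down to 3/4.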
  13. I don't know. The discussion about Bell inequalities for solitons has evolved a bit here: http://www.sciforums.com/threads/do-nonlocal-entities-fulfill-assumptions-of-bell-theorem.153000/
  14. While the dynamics of (classical) field theories is defined by (local) PDEs like the wave equation (finite propagation speed), some fields allow stable localized configurations: solitons. Take for example the simplest, the sine-Gordon model, which can be realized by pendula on a rod connected by a spring. While gravity prefers that pendula hang "down", increasing the angle by 2 pi also means "down" - where these two different stable configurations (minima of the potential) meet, a soliton (called a kink) corresponding to a 2 pi rotation is required, like here (the right one is moving - Lorentz contracted):

Kinks are narrow, but there are also solitons filling the entire universe, like a 2D vector field with (|v|^2-1)^2 potential - a hedgehog configuration is a soliton: all vectors point outward - such solitons are highly nonlocal entities. A similar example of nonlocal entities in a "local" field theory are Couder's walking droplets: a corpuscle coupled with a (nonlocal) wave - getting quantum-like effects: interference, tunneling, orbit quantization (thread http://www.scienceforums.net/topic/65504-how-quantum-is-wave-particle-duality-of-couders-walking-droplets/ ). The field depends on the entire history and affects the behavior of the soliton or droplet. For example, the Noether theorem says that the entire field guards (among others) angular momentum conservation - in an EPR experiment the momentum conservation is kind of encoded in the entire field - in a very nonlocal way.

So can we see real particles this way? The only counter-argument I have heard is the Bell theorem (?) But while solitons happen in local field theories (information propagates with finite speed), these models of particles - solitons/droplets - are extremely nonlocal entities. In contrast, the Bell theorem assumes local entities - so does it apply to solitons?
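The kink mentioned above can be verified numerically: the standard sine-Gordon kink profile phi(x) = 4 arctan(e^x) solves the static equation phi'' = sin(phi). A small finite-difference check:

```python
import math

def phi(x):
    # standard sine-Gordon kink: a single 2*pi rotation of the pendula
    return 4.0 * math.atan(math.exp(x))

h = 1e-3                 # step for the central second difference
max_residual = 0.0
for i in range(-50, 51):
    x = 0.1 * i
    second = (phi(x + h) - 2.0 * phi(x) + phi(x - h)) / h ** 2
    max_residual = max(max_residual, abs(second - math.sin(phi(x))))

print(max_residual)      # small: phi'' = sin(phi) holds along the kink
```

Note that phi(-inf) = 0 and phi(+inf) = 2*pi: the kink interpolates between the two "down" minima, exactly the configuration described above.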
  15. I was thinking about designing molecular descriptors for virtual screening purposes: such that two molecules have similar shape if and only if their descriptors are similar. They could be used separately, or to complement e.g. some pharmacophore descriptors. They should be optimized for ligands, which are usually elongated and flat. Hence I thought to use the following approach:
- normalize rotation (using principal component analysis),
- describe bending - usually one coefficient is sufficient,
- describe the evolution of the cross-section, for example as an evolving ellipse.
Finally, the shape below is described by 8 real coefficients: length (1), bending (1), and 6 for the evolution of the ellipse in the cross-section. It expresses bending and that this molecule is approximately circular on the left and flat on the right.
preprint: http://arxiv.org/pdf/1509.09211
slides: https://dl.dropboxusercontent.com/u/.../shape_sem.pdf
Mathematica implementation: https://dl.dropboxusercontent.com/u/12405967/shape.nb
Have you encountered something like that? Is it a reasonable approach? I am comparing it with USR (ultrafast shape recognition) and (rotationally invariant) spherical harmonics - have you seen other approaches of this type?
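The first step above (rotation normalization) can be sketched in a few numpy lines; the point cloud and its variances here are a made-up toy example, not data from the preprint:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "molecule": elongated, flat point cloud of 200 atom positions,
# then rotated arbitrarily to forget the original orientation
pts = rng.normal(size=(200, 3)) * np.array([5.0, 2.0, 0.5])
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthogonal matrix
pts = pts @ R.T

# PCA normalization: center, then align principal axes so that the
# variance decreases along x, y, z
centered = pts - pts.mean(axis=0)
w, v = np.linalg.eigh(centered.T @ centered / len(centered))
normalized = centered @ v[:, ::-1]   # largest-variance direction first

var = normalized.var(axis=0)
print(var)   # decreasing: length axis, width axis, flatness axis
```

After this step, cross-sections of `normalized` taken along the x (length) axis can be fitted with ellipses to obtain the 6 evolving-ellipse coefficients described above.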