Duda Jarek


Everything posted by Duda Jarek

  1. Why SR? Both gravitomagnetism and 'the default theory' of relativity are Lorentz invariant. The question is whether electromagnetism and gravity are really so qualitatively different? How does that fit with the expected unification theories? And what about the problems intrinsic curvature brings, like renormalization?
  2. Yes - it's a 30-year-old paper from "General Relativity and Gravitation" - a solid-looking peer-reviewed journal. And it's also very surprising to me that I couldn't find any papers commenting on it - neither negative nor positive??? There was only the remark "Despite experimental confirmation it appears to have been ignored for three decades." in Howard A. Landman's paper http://www.riverrock.org/~howard/QuantumTime4.pdf The only other citation of that paper is by the same author (David Apsel), about using this effect for pulsars: http://arxiv.org/abs/gr-qc/0104025
  3. It is far from widely known that before Einstein's theory there was Heaviside's simpler approach to making gravitation Lorentz invariant - using a second set of Maxwell's equations, with e.g. density of mass in place of density of charge: http://en.wikipedia.org/wiki/Gravitomagnetism This much less philosophically controversial theory (matter is not imprisoned in an infinitely thin submanifold of something it doesn't interact with ... no wormhole-like solutions...) agrees well with most observations (?), even with Gravity Probe B. Some papers say it even does better: http://www.mrelativity.net/Papers/14/tdm5.pdf There are also strong arguments that the electromagnetic field causes time dilation too - for example measurements of muon lifetime in muonic atoms: http://www.springerlink.com/content/wtr11w113r22g346/ My interest in this subject started while I was working on a model whose main dynamics was local rotations in 4D, and it turned out to lead to a natural unification of electromagnetism and gravitomagnetism - spatial rotations give Maxwell's equations, while small rotations of the time axis (a kind of central axis of the light cone) give the second set of Maxwell's equations - for gravity (section 5 of http://arxiv.org/abs/0910.2724 ). What do you think about it? Why is 'the only proper approach' - intrinsic curvature - better than gravitomagnetism?
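For reference, the Heaviside-style 'second set of Maxwell's equations' mentioned above replaces charge density and current with mass density rho and mass current J. In one common convention (numerical factors on the gravitomagnetic field differ between authors) the field equations read:

```latex
\nabla \cdot \mathbf{E}_g = -4\pi G \rho, \qquad
\nabla \cdot \mathbf{B}_g = 0,
```
```latex
\nabla \times \mathbf{E}_g = -\frac{\partial \mathbf{B}_g}{\partial t}, \qquad
\nabla \times \mathbf{B}_g = -\frac{4\pi G}{c^2}\,\mathbf{J}
  + \frac{1}{c^2}\frac{\partial \mathbf{E}_g}{\partial t}.
```

The minus signs on the source terms encode that like 'gravitational charges' (masses) attract rather than repel.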
  4. While de-exciting into the ground state, electrons just take the lowest orbitals and become indistinguishable from the QM point of view - we also lose entanglement - it's a thermodynamical process. About phonons - mechanical waves of atoms - they are qualitatively very similar to photons ("mechanical waves of the electromagnetic field"). Metals have a crystalline, periodic structure - they transmit sound well and generally what you hear are resonances, while rubber is a random structure of long organic chains, which diffuses phonons ... there is also a large difference in sound velocity and hence in resonant frequencies ... but generally this is not for this discussion and you should look at a solid state physics book ... About the 'squares' in probabilistic theories: if for a given Markov process we focused on chains infinite in one direction, the probability distribution would be the eigenstate of the stochastic matrix (without the squares). When we focus on a position inside chains infinite in both directions, we get the squares - intuitively, because two random variables (chains) meet there - one from the past and one from the future - and since this gluing has to give the same value, the square comes from multiplying both probabilities. Please look at my paper - this argument is expanded there: it's just a natural result of the 4D nature of our world ... generally, when trying to predict a 'charged points' idealization from measurements - at any scale, microscopic (QM) or macroscopic (deterministic) - these squares should appear ... The simplest model - maximal entropy random walk on a graph, in contrast to the standard random walk http://demonstrations.wolfram.com/GenericRandomWalkAndMaximalEntropyRandomWalk/ - has strong localization properties like quantum mechanics (Anderson localization)
  5. Yes, it turns out that a pure Boltzmann distribution among trajectories leads to probability densities exactly equal to the squares of the eigenfunctions of the Schrödinger operator. In this thermodynamic picture the time propagator is not unitary but stochastic - thermodynamically everything tends to de-excite, as in QM. To introduce interference into this picture, some rotation of some internal degree of freedom of the particles is required (in the ellipsoid field it's caused by the particle's electric charge)
  6. Thanks for the book, I'll look at it. In the approach from my paper we just take thermodynamics among trajectories (the Boltzmann distribution) and automatically get the thermodynamical behavior of quantum mechanics - everything tends to de-excite to the ground state - and we get concrete trajectories which statistically average to the quantum mechanical probability distribution of the ground state.
  7. Bell-inequality-based argumentation says that the quantum mechanical 'squares' cannot be the result of some 'hidden variables' which would give QM as a statistical result. Imagine two systems idealized as charged points - if they are macroscopic, they can be described deterministically, while if they are microscopic, because of these 'squares', they cannot? So while rescaling, these 'squares' would have to somehow emerge ... how? I have seen only one attempt to create a probabilistic model for a macroscopic system of which we can measure only some properties (for example because of distance) - and in this model, thermodynamics among trajectories, these 'squares' appear naturally at any scale (see my paper).
  8. Imagine we hold a flat surface with a spinning top (a gyroscope toy) on it. When we change the angle of the surface, the top generally follows the change, but it additionally makes a complicated 'precessive sinusoid/cycloid-like motion' around the expected trajectory. An electron's spin is something different, but it is sometimes imagined as a spinning charge ... its quantum mechanical phase rotates in time ... there is Larmor precession ... Let's look at the Bohr model of the atom - quantum mechanics made it obsolete, but it still gives quite good predictions http://en.wikipedia.org/wiki/Bohr_model Its main (?) flaw is that it says the lowest energy state should be spherically asymmetric (an orbit), while quantum mechanics says the ground state is symmetric. Generally, higher angular momentum states in the Bohr model correspond to quantum mechanical states with angular momentum lower by 1, as in this case. What if we extended the Bohr model by treating the electron as 'a top'? The electron's spin projection during such precessive motion could change between -1/2 and +1/2, so intuitively it should 'fuzz' the angular momentum by 1 - exactly the difference between the Bohr model and quantum mechanics, e.g. removing the orbit for the ground state... The quantum mechanical probability density of states can be seen as appearing naturally from thermodynamics ( http://arxiv.org/abs/0910.2724 ). Deterministic but chaotic-looking precessive motion could be the main source of the statistical noise this model requires for such thermodynamical behavior. What do you think about it? Have you heard about extending the Bohr model by considering electron precession?
There is the so-called (Bohr's) correspondence principle, which says that quantum mechanics reproduces classical physics in the limit of large quantum numbers: http://en.wikipedia.org/wiki/Correspondence_principle So for large orbits, especially in Rydberg atoms http://en.wikipedia.org/wiki/Rydberg_atom electrons look like they just move on classical trajectories - this 'quantum noise' is no longer essential. To extend the Bohr model to different angular momenta, the Bohr-Sommerfeld model with more 'elliptical' orbits was introduced: http://en.wikipedia.org/wiki/File:Sommerfeld_ellipses.svg One source of the loss of these simple orbits can be found in something like Mercury's precession, which allows such an orbit to rotate. It doesn't have to be seen only as a mass-related effect - there are arguments that the electric field can also cause GR-like effects, such as time dilation: http://www.springerlink.com/content/wtr11w113r22g346/ The other source of nonstandard behavior, and so of this statistical noise, can be the precessive motion I mentioned - angular momentum conservation says that the total orbital angular momentum + spin is conserved. Precession - rotation of the electron's spin - is allowed and is compensated by the electron's orbital angular momentum to conserve 'j'. So such rotations should 'fuzz' the orbital angular momentum by 1, as in the difference between the Bohr model and QM. There is also, e.g., a very complicated magnetic interaction between particles in the atom, and finally the only practical model for working with such an extremely complicated system could be through probability densities - and so quantum mechanics...
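As a side check of the claim above that the old Bohr model 'still gives quite good predictions' - a minimal sketch (my own illustration; the constants are the standard hydrogen Rydberg energy and h*c):

```python
# Bohr model levels E_n = -13.6 eV / n^2 and the transition wavelengths
# they predict, via Delta E = h*c / lambda.
RYDBERG_EV = 13.6057   # hydrogen ground-state binding energy, in eV
HC_EV_NM = 1239.84     # h*c, in eV*nm

def bohr_energy(n):
    """Energy of the n-th Bohr level in eV."""
    return -RYDBERG_EV / n ** 2

def transition_wavelength_nm(n_upper, n_lower):
    """Photon wavelength for the n_upper -> n_lower transition."""
    delta_e = bohr_energy(n_upper) - bohr_energy(n_lower)
    return HC_EV_NM / delta_e

# Balmer-alpha (3 -> 2) is observed at about 656.3 nm; the Bohr model
# lands within a fraction of a nanometer of it.
print(transition_wavelength_nm(3, 2))
```

The energies agree with quantum mechanics exactly; where the model fails is the geometry of the states (the asymmetric ground-state orbit discussed above), not the spectrum.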
  9. Because I'm still thinking about its qualitative consequences - like that it also allows for proton decay at extremely high temperatures, which could solve the problem of black holes with their infinite densities ... Another argument for proton decay: how could the nonzero total baryon number of the observed universe (the matter-antimatter asymmetry) have been created if baryon number were always conserved?
  10. For many different models we can fit parameters so that, for example, their first approximations suit observations well ... I value a theory's integrity more highly - when its full consequences qualitatively agree with what we observe, then it's worth fitting its parameters ... For example: has general relativity been confirmed beyond the first approximation (gravity, time dilation, gravitational lensing, Mercury's precession)? The consequences of the intrinsic curvature it introduces are enormous - it allows for wormholes ... it says that we live in an infinitely thin submanifold of something we don't interact with at all ... Lorentz invariant gravitation can be introduced much more simply in flat spacetime - by a second set of Maxwell's equations (with e.g. mass density instead of charge density); time dilation can be explained e.g. by slightly rescaling masses, charges and spins in a gravitational potential, which in the first approximation would rescale the whole of matter, making EM interactions transfer faster (section 5 of http://arxiv.org/abs/0910.2724 )
  11. Decay means going to a lower energy and so more stable state - what normally stands in the way of such a natural process is some energy barrier ... so the higher the temperature (average energy), the easier the decay ... and in the inner core of a neutron star something like the maximal temperatures available to standard matter are reached ... so proton decay could be a kind of nature's failsafe against infinite densities ... About ultra-high-energy cosmic rays ... if the proton can decay, then for example: - during fast gravitational collapse, the onset of this decay could be rapid, so it would cause an explosion with much more energetic particles than a standard supernova, or maybe - during slow collapse, the GeV-scale photons created by proton decay could, in such extreme conditions, destroy the internal structure of neutrons (or proton-electron pairs), absorbing their energy and growing to astronomical energies ...
  12. Particles are local energy minima, but there is always a lower energy state than any particle - no particles. States tend to de-excite to lower energy states, radiating the difference as photons (the deeper the local energy minimum, the higher the energy barrier and so the more difficult this de-excitation is). The higher the average energy (temperature), the easier these de-excitations/decays are (the lower the expected lifetime). If baryon number doesn't have to be conserved, matter, instead of creating an infinite-energy-density state in a black hole, should decay into nothing, emitting its energy as photons ...
  13. There is a hypothetical decay of the proton under consideration - usually into a positron and a neutral pion, which quickly decays into two photons. Such decays would allow standard matter to change completely into EM waves (proton + electron -> ~4 photons). So this decay allows reaching a more stable state, and the temperatures in collapsing neutron stars should make it easier - suggesting that a neutron star, instead of creating a mysterious state of matter (a black hole), should 'evaporate' - turn its core into photons ... I've looked at a few papers and haven't found any that consider this type of consequence? If this process requires extreme conditions to be statistically important, it would happen practically only in the center, heating the star ... It doesn't contradict the Big Bang as a Big Bounce (it cannot change the difference between the amounts of matter and antimatter). Maybe it could explain extremely high energy cosmic rays? (Maybe at extremely high temperatures high energy photons could themselves destroy the proton + electron structure, absorbing part of their energy...) What do you think about proton decay? If it were real - would black holes be created at all?
  14. Ok, I've finally found the article you are referring to ... yes - that's how it should be done! ... http://www.pacificbiosciences.com/index.php I didn't realize that there is such a large difference in time scale between searching for and incorporating the new base ... in this case indeed the dye doesn't have to be activated - it can be always active, and we can use the fact that "it takes several milliseconds to incorporate it" ... It's simpler and better than my idea, but I still wanted to defend mine... As I imagine it, with simpler polymerases successive 'nucleotide carriers' try whether they fit the currently considered base. If the polymerase doesn't analyze the current base, these 'nucleotide carriers' are taken from the environment completely at random - a concentration ratio of 1:10 means that statistically one of the first type would be tried per 11 'draws'. I completely agree that this is simplified, but generally we can choose concentrations to set the differences in search time as we want. It's a stochastic process - this time varies - which is why a few runs over a given strand would be required to increase precision. I agree that these differences in concentrations would increase the number of errors made, but we don't have to use the duplicated strands (for example using a nanopore - the duplicates are on the other side). The problem is that the time required to incorporate a base is much larger, but it should be practically constant. "How do you measure movement on the DNA?" The forces read by the AFM should be rather small and smooth until the active movement of the polymerase to the next base - those should be seen as 'peaks' of force (along the DNA strand - the tip should be not on the bottom of the cantilever, but rather on its front)
  15. In short, the polymerase cycle is: catch and insert the proper nucleoside triphosphate, then GO TO THE NEXT BASE, isn't it? This movement is active (it uses ATP), so if the polymerase were attached, it would have to pull the DNA - an AFM observes single-atom interactions, so it should also observe the forces created during this 'pulling' step. To distinguish between bases we would just have to watch the time between succeeding steps - because of the large differences in the concentrations of the 'nucleotide carriers', the time required to find the complementary base for the different nucleotides would be e.g. like 1:10:100:1000. Of course its accuracy wouldn't be perfect, so we would have to read it a few times. Probably this pulling would be too difficult for the polymerase and we would have to help it in a controlled way. For example, the ssDNA it works on could just go through a nanopore - the AFM cantilever would be just behind the nanopore and the polymerase would work toward it. Thanks to that, the polymerase would work on unfolded ssDNA and we could control the speed of releasing DNA through the nanopore by changing the electric field, to ensure optimal working conditions. Such nanopores are already working http://www.physorg.com/news180531065.html I agree that dyes are more reliable, but it's difficult to use different ones for different nucleotides, so in pyrosequencing it has to be done in a separate machine cycle. The perfect solution would be attaching different inactive dyes to the nucleoside triphosphates, so that when the polymerase catches one, it breaks this connection along the way, releasing the dye and somehow activating it ... sounds great, but how to do it?? The only fast and simple way of distinguishing bases without drastically modifying the biochemical machinery (or, for example, attaching electrodes to it) I can think of is modifying its speed by changing the 'nucleotide carriers' concentrations ...
  16. Yes - monitoring the insertion of nucleotides is more accurate, but it's difficult to make it fast - usually each base requires a separate cycle of macroscopic duration. So for such methods the only way to be practical is massive parallelism, as you wrote ... but it's still slow and expensive ... We should also look for smarter methods analyzing base by base, for example as the strand comes through a nanopore/protein/ribosome ... A polymerase naturally processes large portions of a chromosome in minutes/hours - if we could monitor this process, we would get faster and cheaper methods ... Measuring the position of the polymerase is in fact really difficult ... optical methods are probably not precise enough ... making many snapshots with an electron microscope could damage it ... An alternative approach is attaching the polymerase to the cantilever of an atomic force microscope - it should 'feel' the single steps it makes ... so if we used large differences in concentrations (like 1:10:100:1000), the times between steps would very probably determine the base. Of course we would have to process a given strand a few times to get the required accuracy.
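The timing idea in the posts above can be sketched numerically. This is only a toy model under my own assumptions (exponential search times with rate proportional to the carrier's concentration; real polymerase kinetics are far more complicated):

```python
import math
import random

# Toy sketch: if finding the complementary 'nucleotide carrier' is a
# random search, the waiting time before each incorporation step is
# roughly exponential, with rate proportional to that carrier's
# concentration.  With concentrations like 1:10:100:1000, the log of the
# waiting time hints at which base was just processed; averaging several
# reads of the same strand sharpens the call.
random.seed(0)
RATES = {'A': 1.0, 'C': 10.0, 'G': 100.0, 'T': 1000.0}  # arbitrary units

def observed_times(sequence, runs):
    """Average waiting time per position over several repeated reads."""
    totals = [0.0] * len(sequence)
    for _ in range(runs):
        for i, base in enumerate(sequence):
            totals[i] += random.expovariate(RATES[base])
    return [t / runs for t in totals]

def call_base(avg_time):
    """Pick the base whose mean waiting time 1/rate is closest on a log scale."""
    return min(RATES, key=lambda b: abs(math.log(avg_time) - math.log(1.0 / RATES[b])))

seq = 'ACGTTGCA'
calls = ''.join(call_base(t) for t in observed_times(seq, runs=50))
print(calls)
```

With a factor of 10 between adjacent concentrations the log waiting times are separated by about 2.3, while the noise of a 50-read average is an order of magnitude smaller - so the calls recover the sequence reliably. That trade-off (concentration ratio versus error rate and number of repeated reads) is exactly what the posts above discuss.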
  17. One of the reasons to introduce the concept of entanglement was the EPR experiment. A standard quantum measurement is a projection onto the eigenstate basis of some Hermitian operator (the Heisenberg uncertainty principle applies to these measurements). EPR uses some additional, qualitatively different type of information - we know that, because of angular momentum conservation, the produced photons have opposite spin. In the deterministic picture there is no entanglement - just one specific pair of photons was created - 'hidden variables'. In this picture quantum mechanics is only a tool to estimate probabilities, and the concept of entanglement is essential for working in such uncertain situations, but it isn't directly physical. But Bell's inequalities made many people believe that we cannot use such a picture. However, moving 'macroscopic charged points' can be well described deterministically - when we don't have full information, we could construct a probabilistic theory with 'hidden variables'. And if we knew that two pairs of rotating bodies were created whose parameters we couldn't measure, we would still know that, e.g. because of angular momentum conservation, they would have to rotate in opposite directions - so to work with such probabilities we would have to introduce some concept of entanglement ... So the question stands - would that theory have 'squares' like quantum mechanics, which make it contradict the Bell inequalities? If not - how does this qualitative difference emerge under rescaling? Why can we describe rotating macroscopic points with 'hidden variables', but cannot do it with microscopic ones?
  18. Quantum mechanics 'works' at the proton + electron scale. Let's enlarge it - imagine a proton rotating around a chloride anion ... up to two oppositely charged macroscopic bodies rotating in vacuum. We can easily idealize the last picture to make it deterministic (charged, non-colliding points). However, many people believe that Bell's inequalities say that the first picture just cannot be deterministic. So I have a question - how does this qualitative difference emerge while changing scale? At what scale would an analog of the EPR experiment start fulfilling Bell's inequalities? I know - the problem is that it's difficult to get such an analog. Let's try to construct a thought experiment on such macroscopic rotating charged bodies, which are so far away that we can measure only some of their parameters, and so can only work with some probabilistic theory describing their behavior. For simplicity we can even idealize them as point objects that don't collide, so we can easily describe their behavior deterministically using some parameters which are 'hidden' from the (distant) observer. The question is: would such a probabilistic theory have the quantum-mechanical 'squares' which make it contradict the Bell inequalities? If not - how would that change with scale? Personally I believe the answer is yes - for example, thermodynamics among trajectories also gives these 'squares' (http://arxiv.org/abs/0910.2724) - but I couldn't think of any concrete examples... How could one construct an analog of the EPR experiment for macroscopic scale objects? Would it fulfill the Bell inequalities?
  19. Sanger is completely different - it cuts the DNA into short pieces and uses electrophoresis. Pyrosequencing, which I've just read about, is a bit closer to what I'm thinking of - it sequentially adds nucleotides and watches whether they were used by the polymerase. The steps of such a sequence are quite long and so expensive. The idea is not to use such macroscopic time sequences, but rather a natural process which goes many orders of magnitude faster. For example - somehow mount the polymerase on the cantilever of an atomic force microscope, so that it can 'watch' the speed of DNA processing. Now use different concentrations of the nucleotide 'carriers' - so that the speed of the process depends on the current base. Then there should be correlations between the base sequence and the forces observed by the microscope - by processing a given sequence a few times this way, we should be able to determine the base sequence fully ... many orders of magnitude faster than with pyrosequencing. Eventually we could mount the ssDNA and optically watch the speed of the polymerase (for example by attaching something light-producing to it, like luciferase).
  20. There are some approaches under consideration to sequence DNA base by base - for example by making it go through a nanoscale hole and measuring its electric properties using nanoelectrodes. Unfortunately even theoretical simulations say that identifying bases this way is already extremely difficult ... http://pubs.acs.org/doi/abs/10.1021/nl0601076 Maybe we could use nature's own ways of reading/working with DNA? For example, somehow mount a polymerase or ribosome and somehow monitor its state... I thought about using the speed of the process to get information about the currently processed base. For example, to process each successive base, DNA polymerase has to get the corresponding nucleoside triphosphate from the environment - there are only four of them - and we can manipulate their concentrations. If we chose different concentrations for them, there would be correlations between the type of the base and the time of its processing - by watching many such processes we could determine the sequence. Is it doable? What do you think about such 'base by base' sequencing methods? How could the proteins developed by nature be used for this process?
  21. When introducing a random walk on a given graph, we usually assume that from each vertex, each outgoing edge has equal probability. This random walk usually emphasizes some paths. If we work in the space of all possible paths, we would like a uniform distribution among them, to maximize entropy. It turns out that we can introduce a random walk which fulfills this condition: one in which, for any two vertices, each path of given length between them has the same probability. The probability of going from a to b in MERW is S_ab = A_ab (1/lambda) (psi_b/psi_a), where A is the adjacency matrix, lambda its dominant eigenvalue and psi the corresponding eigenvector. The stationary probability distribution is then p_a proportional to psi_a^2. We can generalize the uniform distribution among paths to a Boltzmann distribution, and finally, taking the infinitesimal limit for such lattices covering R^n, we get behavior similar to quantum mechanics. This similarity can be understood as saying that QM is just a natural result of the four-dimensional nature of our world http://arxiv.org/abs/0910.2724 In this paper further generalizations are made to a classical field of ellipsoids with particles as its topological excitations. Their structure turns out to be very similar to that known from physics, with the same spin, charge, number of generations, mass gradation, decay modes, and electromagnetic and gravitational interactions. Here, for example, is the behavior of the simplest topological excitations of a direction field - spin 1/2 fermions: http://demonstrations.wolfram.com/SeparationOfTopologicalSingularities/ What do you think about it?
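The two MERW formulas quoted above can be checked directly on a small graph. A minimal sketch (pure-Python power iteration; the example graph - a triangle with a pendant vertex - and the tolerances are my own choices):

```python
# Check: S_ab = A_ab * psi_b / (lambda * psi_a) is a stochastic matrix,
# and p_a ~ psi_a^2 is its stationary distribution.
A = [[0, 1, 1, 0],
     [1, 0, 1, 0],
     [1, 1, 0, 1],
     [0, 0, 1, 0]]   # triangle 0-1-2 with pendant vertex 3
n = len(A)

# Dominant eigenpair of the symmetric adjacency matrix by power iteration.
psi = [1.0] * n
for _ in range(1000):
    new = [sum(A[i][j] * psi[j] for j in range(n)) for i in range(n)]
    m = max(new)
    psi = [x / m for x in new]
lam = sum(A[0][j] * psi[j] for j in range(n)) / psi[0]

# MERW transition probabilities; each row sums to 1.
S = [[A[i][j] * psi[j] / (lam * psi[i]) for j in range(n)] for i in range(n)]
assert all(abs(sum(row) - 1) < 1e-9 for row in S)

# p_a ~ psi_a^2 is stationary: p S = p.
Z = sum(x * x for x in psi)
p = [x * x / Z for x in psi]
pS = [sum(p[i] * S[i][j] for i in range(n)) for j in range(n)]
print(max(abs(pS[j] - p[j]) for j in range(n)))
```

The stationarity is exact algebraically: sum_i psi_i^2 S_ij = (psi_j/lambda) sum_i A_ij psi_i = psi_j^2, since psi is an eigenvector of A.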
  22. It turns out that the rotational modes of the simplest energy density in such an ellipsoid field already create electromagnetic and gravitational interactions between the topological excitations. I've finished a paper about this whole idea of deterministic physics http://arxiv.org/abs/0910.2724
  23. What do you mean? Looking at the demonstration - minimizing local rotations would make opposite (same) singularities attract (repel). Using the local curvature of the rotation axis we can define the E vector, and the B vector using some local curl. I'm not sure, but probably E_kin = Tr(sum_i d_i M * d_i M^t), where M is the (real) matrix field and d_i a directional derivative, or some modification of it, should lead to Maxwell's equations. For flat spacetime we assume that the time axes are aligned in one direction. The potential term should make given eigenvalues preferable - for example sum_{i=1..4} Tr(M^i - v_i)^2, where the v_i are sums of powers of eigenvalues. But a potential defined straightforwardly by the eigenvalues (l_i) looks more physical: sum_i (l_i - w_i)^2. In this representation we can write M = O diag(l_i) O^t, where O are orthogonal matrices (3 degrees of freedom in 3 dimensions). Now O corresponds to standard interactions like EM. The {l_i} are usually near {w_i} and change practically only near critical points (creating mass). These degrees of freedom should interact extremely weakly - mainly during particle creation/annihilation - they should have thermalized with the 2.7K EM noise through these billions of years and store the (dark) energy needed for the cosmological constant. About further interactions - I think the essential ones are 'spin curves' - a natural but underestimated result of the phase being defined practically everywhere (as in the quantum formulation of EM). It can for example be seen in magnetic flux quantization - the flux is just the number of such spin curves going through a given area. Taking this seriously, it's not surprising that opposite-spin fermions like to couple - using such spin curves. We see it for nucleons, electrons in orbitals, Cooper pairs - they should create a spin loop. How to break such a loop? For example, to de-excite an excited electron which is in such a couple.
The simplest way is to twist it into an 'eight-like shape' and reconnect, creating two separate loops each containing one electron (fermion), which could then reconnect to create a lower energy electron couple. Such a twist-and-reconnect process makes one of the fermions rotate its spin to the opposite one - changing spin by 1 - so we see selection rules ... which made us believe that photons are spin 1. Going to baryons... In the rotational matrix O we can see U(1)*SU(2) symmetry ... but the topological nature of strong interactions is difficult to 'connect' with SU(3) ... However, this ellipsoid model naturally gives higher topological excitations which are very similar to mesons/baryons ... with practically the same behavior ... with the natural neutrino < electron < meson < baryon mass gradation ... and which can naturally create nucleus-like constructions ... Practically the only difference is the spin of the Omega baryon - the quark model gives spin 3/2, while as a topological excitation it's clearly 1/2 ... but this 3/2 spin hasn't been confirmed experimentally (yet?). Pions would be Mobius-strip-like spin loops; kaons make a full, not half, internal rotation. Pions can decay by enlarging the loop - the charged part creates a muon, the other part a neutrino. The kaons' internal rotation should make them twist and reconnect, creating two/three pions. Long- and short-living kaons can be explained by the internal rotation being made in one or the opposite direction. Baryons would be a spin curve going through a spin loop (which could be experimentally interpreted as a 2+1 quark structure). The loop and curve singularities use different axes - the spin curve looks electron-like and the loop meson-like (it produces pions). Strangeness would make this loop make some number of additional internal half-rotations. Its internal stress would make it twist and reconnect, releasing part of its internal rotation as a meson - most decay processes can be seen this way.
Two neutrons could reconnect their spin loops, creating an 'eight-like shape' holding both of them together. With a proton, a neutron could reconnect spin curves - the deuteron would be two attracting loops on one spin curve. Finally, larger nuclei could be constructed this way - held together by interlacing spin loops.
  24. Look at the solutions of Schrodinger's equation for the hydrogen atom - there is an e^{i m phi} term (m - angular momentum along the z axis) - if we follow the phase while making a loop around the axis, it rotates m times - in differential equation theory this is called a topological singularity; in complex analysis its conservation is the argument principle http://en.wikipedia.org/wiki/Argument_principle Generally for any particle, while making a rotation around some axis, the spin says how to change the phase - making a full rotation, the phase makes 'spin' rotations - so in this sense a particle is at least a topological singularity. In fact this underestimated property can lead to answers to many questions, like where - the mass of particles, - conservation properties, - gravity/GR, - the fact that the number of lepton/quark generations equals the number of spatial dimensions, - electron coupling (orbitals, Cooper pairs), - cutoffs in quantum field theories, - neutrino oscillations, and many others come from. Let's start from the other side. Some time ago I considered a simple model - take a graph and consider the space of all paths on it. The assumption that all of them are equally probable leads to a new random walk on the graph which maximizes entropy globally (MERW). It can also be defined by requiring that, for any two given vertices, all paths of given length between them are equally probable. The standard random walk - for all vertices, all edges equally probable - maximizes uncertainty only locally, and usually gives smaller entropy. http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=PRLTAO000102000016160602000001 This model can be generalized so that paths are not equally probable, but have a Boltzmann distribution for some given potential.
Now if we cover R^3 with a lattice and take the continuous limit, we get that the Boltzmann distribution among paths gives nearly QM behavior - more precisely, we get something like the Schrodinger equation but without the Wick rotation - the stationary state probability density is the square of the dominant eigenfunction of the Hamiltonian, like in QM. Derivations are in the second section of http://arxiv.org/abs/0710.3861 In some ways this model is better than QM - for example, in physics excited electrons aren't stable as in Schrodinger's equations, but drop to the ground state producing a photon, as in the equation without Wick's rotation. Anyway, in this MERW-based model an electron would make some concrete trajectory around the nucleus, which would average to the probability distribution as in QM. This simple model shows that there is no problem with the 'squares' which are believed to lead to contradictions for deterministic physics (Bell's inequalities) - they are a result of the 4D nature of our world - the square arises because trajectories from the past and from the future have to meet. This simple model - the Boltzmann distribution among paths - is near real physics, but misses a few things: - there is no interference, - there is no energy conservation, - it is stochastic, not deterministic, - it has a single particle in a potential. But it can be expanded - into some classical field theory in which particles are special solutions (like the topological singularities suggested by e.g. strong spin/charge conservation). To add interference they have to make some rotation of an internal degree of freedom. If the theory is based on some Hamiltonian, we get energy conservation, determinism and potentials (used for the Boltzmann distribution in the previous model).
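The claim that the square arises where past and future trajectories meet can be illustrated with a toy transfer-matrix computation. This is only a sketch under my own discretization choices (a 5-site chain, symmetric edge weights exp(-(V_a+V_b)/2) with self-loops allowed):

```python
import math

# Boltzmann weights over walks on a small chain.  For a walk observed at
# its END, the probability of site a tends to psi_a (the dominant
# eigenvector of the transfer matrix W); for a site INSIDE a long walk,
# a past half and a future half meet there, and the probability tends to
# psi_a^2 -- the 'squares' discussed above, more localized in the well.
V = [2.0, 0.5, 0.0, 0.5, 2.0]          # potential well, minimum at site 2
n = len(V)
W = [[math.exp(-(V[a] + V[b]) / 2) if abs(a - b) <= 1 else 0.0
      for b in range(n)] for a in range(n)]

def step(v):
    """Apply W once and renormalize by the max entry (avoids overflow)."""
    w = [sum(W[a][b] * v[b] for b in range(n)) for a in range(n)]
    m = max(w)
    return [x / m for x in w]

half = [1.0] * n          # summed weights of one half of the walk
for _ in range(200):
    half = step(half)     # converges to the dominant eigenvector psi

endpoint = [h / sum(half) for h in half]                       # ~ psi_a
midpoint = [h * h / sum(x * x for x in half) for h in half]    # ~ psi_a^2

print([round(x, 4) for x in endpoint])
print([round(x, 4) for x in midpoint])
```

Squaring and renormalizing necessarily increases the weight of the most probable site, so the midpoint distribution is strictly more concentrated at the bottom of the well than the endpoint one - the one-sided chain gives the eigenvector 'without the squares', exactly as in post 4 above.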
To handle many particles, there would be creation/annihilation operators which create a particle's path between two points in spacetime and let it interact (as in Feynman diagrams) - reproducing the behavior known from quantum field theories, but this time everything happens not in an abstract and clearly nonphysical Fock space: these operators really do something in the classical field. The basic particles creating our world have spin 1/2 - while making a loop, the phase makes half a rotation, changing a vector into the opposite one. So if we identify vectors with their opposites - use a field of directions instead - fermions appear naturally, as in the demonstration; in fact they are the simplest and so the most probable topological excitations for such a field, and so in our world. A simple and physical way to create a directional field is a field of symmetric matrices, which after diagonalization can be imagined as ellipsoids. To create topological singularities they should have distinguishable axes (different eigenvalues) - this should be energetically optimal. In critical points (like the middle of a tornado), they have to make some axes indistinguishable at a cost of energy - creating the ground energy of the topological singularity: the particle's mass. Now one (of 3+1) axes has the strongest energetic tendency to align in one direction - creating local time arrows, which rotate toward the energy gradient to create gravity/GR-like behavior. The other three axes create singularities - one of them creates one singularity, and there remain enough degrees of freedom to create an additional one - connecting spin and charge in one particle, giving a family of solutions similar to those known from physics, with the characteristic 3 for the number of generations of leptons/quarks. With time everything rotates, but not exactly around an eigenvector, giving neutrino oscillations. Is it better now?
Merged post follows: There is a nice animation of topological defects in 1D here: http://en.wikipedia.org/wiki/Topological_defect Thanks to the [math](\phi^2-1)^2[/math] potential, going from 1 to -1 costs some energy - these nontrivial, localized solutions are called (anti)solitons and this energy is their mass. Such a pair can annihilate, and this energy is released as 'waves' (photons/nontopological excitations). My point is that in an analogous way in 3D, starting from what spin is, our physics occurs naturally. I think I see how mesons and baryons appear as kind of the simplest topological excitations in the picture I've presented - in each point there is an ellipsoid (symmetric matrix) which energetically prefers to have all radii (eigenvalues) different (distinguishable). First of all, the singularity for spin requires making 2 dimensions indistinguishable, for charge 3 - this should explain why 'charges are heavier than spins'. We will see that the mass gradation neutrino - electron - meson - baryon is also natural. Spins, as the simplest and so the most stable, should be somehow fundamental. As I wrote in the first post, for topological reasons two spins 'like' to pair and normally would annihilate, but they are usually stabilized by an additional property which has to be conserved - charge. And so an electron (muon, tau) would be a simple charge+spin combination - imagine a sphere such that one axis of the ellipsoids always aims at the center (charge singularity). Now the other two axes can make two spin-type singularities on this sphere. Similarly for other spheres with the same center, and finally in the middle all three axes have to be indistinguishable. The choice of axis chooses the lepton. Now mesons - for now I think that it's a simple spin loop (up+down spin) ...
but while making the loop the phases make half a rotation (like in a Möbius strip) - the loop tries to annihilate itself but cannot, and so creates some complicated and not too stable singularity in the middle. Zero-charge pions are extremely unstable (about 10^-18 s), but charge can stabilize them for a bit longer. The hardest ones are baryons - three spins creating some complicated pattern which therefore has to be difficult to decay - the solution could be that two of them make a spin loop and the third goes through its middle, preventing collapse and creating a large and 'heavy' singularity. Spin curves are directed, so there are two possibilities (a neutron isn't an antineutron). We believe we see up and down quarks because the two creating the loop are different from the third one.
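The 1D kink for the [math](\phi^2-1)^2[/math] potential mentioned above, and its mass, can be verified numerically. This is a standard soliton result, sketched here for illustration: the static solution interpolating between the vacua phi = -1 and phi = +1 is phi(x) = tanh(sqrt(2) x), and its localized energy integrates to exactly 4*sqrt(2)/3.

```python
import numpy as np

# Kink for V(phi) = (phi^2 - 1)^2: the BPS equation phi' = sqrt(2 V)
# gives phi(x) = tanh(sqrt(2) x); the localized energy is the defect's mass.
x = np.linspace(-10.0, 10.0, 4001)
phi = np.tanh(np.sqrt(2.0) * x)
dphi = np.gradient(phi, x)                 # numerical phi'(x)

# energy density: gradient term plus potential term
energy_density = 0.5 * dphi**2 + (phi**2 - 1)**2

# trapezoidal integration of the energy density -> kink mass
mass = float(np.sum(0.5 * (energy_density[1:] + energy_density[:-1]) * np.diff(x)))

print(mass, 4 * np.sqrt(2.0) / 3)          # numerical vs exact kink mass
```

The antikink is phi(x) = -tanh(sqrt(2) x); a kink-antikink pair carries zero net topological charge and can annihilate, radiating this energy away as waves, as described above.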
  25. In quantum mechanics spin can be described as follows: while rotating around the spin axis, the phase rotates 'spin' times - in mathematics this is called the Conley (or Morse) index of a topological singularity, and its conservation can also be seen in the argument principle of complex analysis. So particles are at least topological singularities. I'll try to convince you that this underestimated property can lead to explanations ranging from why fermions are extremely common particles up to the 'coincidence' that the number of lepton/quark generations is ... the number of spatial dimensions. I've made a simple demonstration which shows the qualitative behavior of the phase during separation of topological singularities, as in particle decay or spontaneous creation of a particle-antiparticle pair: http://demonstrations.wolfram.com/SeparationOfTopologicalSingularities/ The other reason to imagine particles as topological singularities, or combinations of a few of them, is the very strong property of spin/charge conservation. Generally, for these conservation properties it's important that some 'phase' is well defined almost everywhere - for example, when two atoms are getting closer, the phases of their wavefunctions have to synchronize first. Looking from this perspective, phases can be imagined as a continuous field of nonzero-length vectors - there is some nonzero vector in every point. The problem is in the center of a singularity - the phase cannot be continuous there. A solution is that the length of the vectors decreases to zero in such critical points. To explain it from a physical point of view we can look at the Higgs mechanism - energy is minimal not for zero vectors, but for vectors of some given length. So finally, the fields required to construct such topological singularities can be fields of vectors of almost the same length everywhere, except some neighborhoods of the singularities where they vanish in a continuous way.
These necessary out-of-energetic-minimum vectors could explain (sum up to?) the mass of the particle. The topological singularity for charge doesn't have something like a 'spin axis' - it can be 'pointlike' (like a blurred Planck-scale ball). Spins are much more complicated - they are kind of two-dimensional: the singularity is 'inside' the 2D plane orthogonal to the spin axis. Like the middle of a tornado - it's rather 'curvelike'. The first 'problem' is the construction of spin-1/2 particles - after rotating around the singularity, the phase makes only half a rotation: the vector becomes the opposite one. So if we forget about the arrows of vectors - use a field of directions - spin-1/2 particles are allowed, as in the demonstration; in fact they are the simplest 'topological excitations' of such fields ... and most of our fundamental particles have spin 1/2 ... How can directions - 'vectors without arrows' - be physical? For example, imagine the stress tensor - a symmetric matrix in each point. We can diagonalize it and imagine it as an ellipsoid in each point - the longest axis (dominant eigenvector) doesn't choose an 'arrow' - so direction fields can also be natural in physics ... and they naturally produce fermions ... The emphasized axis - the eigenvector for the smallest or largest or negative eigenvalue - would have the strongest energetic preference to align in the same direction; it would create the local time dimension, and its rotation toward energy would create gravity and GR-related effects. One of the other three axes could create one type of singularity, and there would still remain enough degrees of freedom to create an additional one - combining the spin and charge singularities in one particle - which could explain why there are 3*3 types of leptons/quarks. Another 'problem' with spins is their behavior while moving the plane in the 'spin axis' direction - like looking at a tornado restricted to higher and higher 2D horizontal planes: the field should change continuously, so the critical point should too.
We see that conservation doesn't allow it to just vanish - to do so, it has to meet an opposite spin. This problem occurs also in standard quantum mechanics - for example, there are e^(i phi)-like terms in the basic solutions for the hydrogen atom - what happens to them 'outside the atom'? It strongly suggests that, against intuition, spin is not 'pointlike' but rather curve-like - it's a 'curve along its spin axis'. For example, a couple of electrons could look like: a curve for spin up with a charge singularity somewhere in the middle, the same for spin down - connected at the ending points, creating a kind of loop. Without the charges, which somehow energetically 'like' to connect with spin, the loop would annihilate and its momenta would create two photon-like excitations. Two 'spin curves' could reconnect, exchanging their parts and creating some complicated, dynamical structure of spin curves. Maybe that's why electrons like to pair in atomic orbits, or as stable Cooper pairs (reconnections should create viscosity ...). The Boltzmann distribution among trajectories gives something similar to QM, but without Wick's rotation: http://www.scienceforums.net/forum/showthread.php?t=36034 In some ways this model corresponds better to reality - in standard QM all energy levels of a well, like the one made by a nucleus, are stable, but in real physics they want to get to the ground state (producing a photon). Without Wick's rotation the eigenfunctions are still stable, but the smallest fluctuation makes them drop to the ground state. What this model misses is interference, but that can be added by some internal rotation of the particles. Anyway, this simple model shows that there is no problem with connecting deterministic physics with the squares appearing in QM.
It suggests that maybe a classical field theory would be sufficient ... once we understand what creation/annihilation operators really do - what particles are ... the strongest conservation principles - of spin and charge - suggest that they are just made of topological singularities ... ? What do you think about it? I was told that this kind of idea is being considered, but I couldn't find any concrete papers. A discussion started here: http://groups.google.com/group/sci.physics/browse_thread/thread/97f817eec4df9bc6#
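As a numerical side note to the phase-winding arguments above: the winding number around a singularity - the quantity the argument principle conserves - can be computed by accumulating wrapped phase differences along a closed loop. The half-period variant below is my way of illustrating the spin-1/2 (direction field) case, where vectors are identified with their opposites and the phase lives on a half-circle:

```python
import numpy as np

def winding(thetas, period=2 * np.pi):
    """Total signed turns of a sampled phase along a closed loop,
    in units of full 2*pi rotations.  `period` is the identification
    period of the phase: 2*pi for ordinary vectors, pi for directions."""
    d = np.diff(np.append(thetas, thetas[0]))   # close the loop
    d = (d + period / 2) % period - period / 2  # wrap each step to [-p/2, p/2)
    return d.sum() / (2 * np.pi)

# sample points of a loop around the origin
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)

# spin-2-like phase pattern theta = 2 * angle: winding 2
print(winding(2 * t))

# direction field (period pi): half-integer winding is allowed - spin 1/2
print(winding(0.5 * t, period=np.pi))
```

With ordinary vectors (period 2*pi) only integer windings are possible; identifying opposite vectors halves the period and admits the half-integer winding, which is the point of the fermion construction above.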