
Posts posted by SuperPolymath

  1. On 11/18/2017 at 2:15 PM, studiot said:

    I am promoting discussion about the question

     

    What is an uncertainty in Science?

     

    Uncertainty is definitely linked to probability.
    Probability is a definite number (one number) between 0 and 1
    So it is tempting to think that if the probability is p then the uncertainty can be defined as (1-p), but this doesn't really work.

    Consider a cubical shaped object.

    What is the uncertainty of position when I place it along a line?

    Let my cube be a model of my car and of side L.
    Now the UK rules for parking said car state that if any part of the car is on a yellow line it is illegally parked and I could be subject to a fine (or worse).

    So what is the allowable uncertainty in positioning my car when I park it?

    Now let my cube be much smaller.
    Let it be the rider on a beam balance.
    What is the uncertainty in positioning this rider so that the balance operates correctly?

    In both cases the uncertainty could be huge.
    The car could be 100 miles from the nearest yellow line.
    But there is a minimum distance for placing the centre.

    Which brings out two points.
    Probability is just a number. It has no units or dimensions.

    Uncertainty is usually expressed in the units of the measurand and generally has dimensions.

     

     

     

    Uncertainty is usually expressed as a range with one end a maximum or minimum.
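
    A minimal sketch (my addition, not part of studiot's post; the numbers are made up) of the distinction being drawn: a probability is a bare number between 0 and 1, while an uncertainty carries the units of the measurand and is naturally reported as a range.

    p_fine = 0.02                    # a probability: one dimensionless number in [0, 1]

    car_length_m = 4.2               # the measurand, in metres
    u_position_m = 0.15              # its uncertainty: same units as the measurand

    # the uncertainty expressed as a range with a minimum:
    centre_min_m = car_length_m / 2 + u_position_m   # closest the car's centre may sit to the line
    print(f"place the centre at least {centre_min_m:.2f} m from the yellow line")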

    In one sense, a myth is an idea that, while widely believed, is false, failing to correspond with reality.

    In a deeper sense, which is employed by students of religion, a myth serves as an orienting and mobilizing story for a people, a story that reminds them who they are and why they do what they do. When a story is called a myth in this sense—which we can call Myth with a capital M—the focus is not on the story's relation to reality but on its function. This orienting and mobilizing function is possible, moreover, only because Myths with a capital M have religious overtones. Such a Myth is a Sacred Story.

  2. 7 minutes ago, Vmedvil said:

    Yes, but it is governed by SR and generated by GR.

    GR states that matter tells spacetime how to curve, & this curvature (gravity) propagates at C for that level of reality (1.6x10^-35 meters through 13 billion light years) & tells matter how to move via SR. There are infinite smaller levels composing infinite larger levels; that is a governing property & a generating property, Vmedvil. But it's all deterministic, because it's all (including SR) governed by GR.

    In either case @swansont was incorrect. This wouldn't be modeled from scratch. 

     

  3. 5 minutes ago, swansont said:

    You had no model, and asked that it be locked. 

    It was an attempt to work with a user who had the necessary education to help me model-build, & I never once asked you to lock it. I said to move it to the trash can before resorting to locking it entirely; at least from there it could be moved back into Speculations in the event a model arose from the concepts.

     

     

  4. 8 minutes ago, swansont said:

    Newton was a proponent of light being a particle. His aether was for transmission of something other than light.

    https://en.m.wikipedia.org/wiki/Luminiferous_aether

    Feel free to open a thread where you predict the observed results with classical physics.

    Start with explaining spin classically. 

    I was going to attempt to work with @Mordred on this but you locked said topic. So I sent him a pm on the subject in question. If he takes on the project I will attempt to do so, then I will post the topic. 

  5. 5 minutes ago, Strange said:

    Neither did I. So it is a bit surreal to bring him in now.

    Still nothing to do with the luminiferous aether.

    The luminiferous Ether was Newton's vision of the ether, not Einstein's

    5 minutes ago, swansont said:

    Go ahead and make a valid spin-state prediction based on anything but QM. 

    "Detector a reads 1 when a particle has horizontal polarity. Dectector B reads 1 when a particle has vertical polarity. If the particles are entangled previously this will affect the statistical average. Superposition is a mixture of the two. Statistically it can only be one or the other. However you don't know which photon has which. "

    You wanna know why? Because QM uses probability statistics & makes no attempt to understand the underlying nature of it all. The science of it, which could be expressed classically, i.e. with nothing but local information exchange, can't be perfectly understood with QM, & therefore the predictions are actually less accurate than otherwise possible. That's just what you get with an indeterminate methodology.
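
    Since the claim above is that these statistics could be reproduced "with nothing but local information exchange", here is a minimal Monte Carlo sketch (my own illustration, not from the thread) of what a CHSH Bell test compares: a simple local hidden-variable model tops out at |S| = 2, while the quantum prediction reaches 2√2.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    lam = rng.uniform(0, np.pi, n)            # shared hidden polarization angle per photon pair

    def lhv_outcome(setting, lam):
        # deterministic local model: each side answers +/-1 from its own setting and lam only
        return np.sign(np.cos(2 * (setting - lam)))

    def E_lhv(a, b):
        return np.mean(lhv_outcome(a, lam) * lhv_outcome(b, lam))

    def E_qm(a, b):
        return np.cos(2 * (a - b))            # QM correlation for polarization-entangled photons

    a, ap, b, bp = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8   # standard CHSH settings

    def chsh(E):
        return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

    print("local hidden variables: S =", round(chsh(E_lhv), 3))  # stays at or below 2
    print("quantum mechanics:      S =", round(chsh(E_qm), 3))   # 2*sqrt(2), about 2.828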

  6. 10 minutes ago, Strange said:

    Einstein is talking about space-time, not the luminiferous ether.

    Never once do I mention Isaac Newton. His idea of the ether is far too outdated. Einstein's is based on matter telling spacetime how to reshape & spacetime telling matter how to move.

    My bad, yes I meant "On the Electrodynamics of Moving Bodies."

    7 minutes ago, swansont said:

    Also, Bell tests are QM experiments, not relativity experiments.

    They're neither; they're based on Bell's inequality (math). They may be used to test any theory in physics.

  7. On 12/15/2017 at 3:11 AM, Capiert said:

    What sort of evidence wound you need

     to prove the ether?

    Please give me some examples

     that are (=would be) acceptable?

     

     

     

    Ether theories continue to have significant proponents, people who've even won Nobel prizes (which should hold more weight than anyone objecting to them here), finding evidence of loopholes in the supposed violation of Bell's inequality that would otherwise discredit said ether theories.

    Let's start with the most significant original proponent of aether theories, from the Wikipedia article cited earlier in this thread:

    "We may say that according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an Aether. According to the general theory of relativity space without Aether is unthinkable; for in such space there not only would be no propagation of light, but also no possibility of existence for standards of space and time (measuring-rods and clocks), nor therefore any space-time intervals in the physical sense. But this Aether may not be thought of as endowed with the quality characteristic of ponderable media, as consisting of parts which may be tracked through time. The idea of motion may not be applied to it." -Einstein circa 1920

    Of course, many here would probably be quick to point out that Einstein's earlier paper "On the Electrodynamics of Moving Bodies," which first introduced SR & predated this 1920 post-GR quote of Einstein's, doubted the ether. Yet GR literally predicted it: in distorting light waves around the sun that acted like an ether pushing out the stars around the horizon of the sun, generating an optical illusion that warped their positions, officially confirmed by astronomers in 1919; & then in gravitational waves with v=c, measured twice by LIGO in 2017 (once in a neutron star collision & once in a BH collision, where the velocity of gravitation was truly confirmed by the fact that GW waves frame-dragging means gravity is not even a field).

    What the ether theory states is local realism in a deterministic spacetime that is infinitely reducible beneath the Planck length:

    In the Bohmian view, nonlocality is even more conspicuous. The trajectory of any one particle depends on what all the other particles described by the same wave function are doing. And, critically, the wave function has no geographic limits; it might, in principle, span the entire universe. Which means that the universe is weirdly interdependent, even across vast stretches of space.


    This pilot wave could literally be the propagating, Euclid-esque spatio-temporal curves (GW waves in microcausal systems) of sub-Planck-scale structures that we cannot observe (C covers 1/40 of a Planck length in 1/40 of a Planck time, & that's a superluminal interaction that doesn't violate the cosmic speed limit).

    This is supported in this article:

    The situation is somewhat different when we consider gravity and promote the Lorentz violating tensors to dynamical objects. For example in an aether theory, where Lorentz violation is described by a timelike four vector, the four vector can twist in such a way that local superluminal propagation can lead to energy-momentum flowing around closed paths [206]. However, even classical general relativity admits solutions with closed time like curves, so it is not clear that the situation is any worse with Lorentz violation. Furthermore, note that in models where Lorentz violation is given by coupling matter fields to a non-zero, timelike gradient of a scalar field, the scalar field also acts as a time function on the spacetime. In such a case, the spacetime must be stably causal (c.f. [272]) and there are no closed timelike curves. This property also holds in Lorentz violating models with vectors if the vector in a particular solution can be written as a non-vanishing gradient of a scalar. Finally, we mention that in fact many approaches to quantum gravity actually predict a failure of causality based on a background metric [121] as in quantum gravity the notion of a spacetime event is not necessarily well-defined [239]. A concrete realization of this possibility is provided in Bose-Einstein condensate analogs of black holes [40]. Here the low energy phonon excitations obey Lorentz invariance and microcausality [270]. However, as one approaches a certain length scale (the healing length of the condensate) the background metric description breaks down and the low energy notion of microcausality no longer holds.

    I quote Gerard 't Hooft, another proponent of ether theory:

    "Einstein had difficulties with the relativistic invariance of quantum mechanics (“does
    the spooky information transmitted by these particles go faster than light?”). These,
    however, are now seen as technical difficulties that have been resolved. It may be consid-
    ered part of Copenhagen’s Doctrine, that the transmission of information over a distance
    can only take place, if we can identify operators A at space-time point x1 and operators
    B at space-time point x2 that do not commute: [A, B] 6= 0 . We now understand that, in
    elementary particle theory, all space-like separated observables mutually commute, which
    precludes any signalling faster than light. It is a built-in feature of the Standard Model,
    to which it actually owes much of its success.
    So, with the technical difficulties out of the way, we are left with the more essential
    Einsteinian objections against the Copenhagen doctrine for quantum mechanics: it is a
    probabilistic theory that does not tell us what actually is going on. It is sometimes even
    suggested that we have to put our “classical” sense of logic on hold. Others deny that:
    “Keep remembering what you should never ask, while reshaping your sense of logic, and
    everything will be fine.” According to the present author, the Einstein-Bohr debate is not
    over. A theory must be found that does not force us to redefine any aspect of classical,
    logical reasoning.
    What Einstein and Bohr did seem to agree about is the importance of the role of an
    observer. Indeed, this was the important lesson learned in the 20th century: if something
    cannot be observed, it may not be a well-defined concept – it may even not exist at all. We
    have to limit ourselves to observable features of a theory. It is an important ingredient
    of our present work that we propose to part from this doctrine, at least to some extent:
    Things that are not directly observable may still exist and as such play a decisive role
    in the observable properties of an object. They may also help us to construct realistic
    models of the world.
    Indeed, there are big problems with the dictum that everything we talk about must be
    observable. While observing microscopic objects, an observer may disturb them, even in
    a classical theory; moreover, in gravity theories, observers may carry gravitational fields
    that disturb the system they are looking at"

    More evidence:

    The hole is quantum-mechanically unstable: It has no bound states. Wormhole wave functions must eventually leak to large radii. This suggests that stability considerations along these lines may place strong constraints on the nature and even the existence of spacetime foam.

    In invariant set theory, the form of the Bell Inequality whose violation would be inconsistent with realism and local causality is undefined, and the form of the inequality that it violated experimentally is not even gp-approximately close to the form needed to rule out local realism (54) [21]. A key element in demonstrating this result derives from the fact that experimenters cannot in principle shield their apparatuses from the uncontrollable ubiquitous gravitational waves that fill space-time.

    ----

    A finite non-classical framework for physical theory is described which challenges the conclusion that the Bell Inequality has been shown to have been violated experimentally, even approximately. This framework postulates the universe as a deterministic locally causal system evolving on a measure-zero fractal-like geometry IU in cosmological state space. Consistent with the assumed primacy of IU, and p-adic number theory, a non-Euclidean (and hence non-classical) metric gp is defined on cosmological state space, where p is a large but finite Pythagorean prime. Using number-theoretic properties of spherical triangles, the inequalities violated experimentally are shown to be gp-distant from the CHSH inequality, whose violation would rule out local realism. This result fails in the singular limit p = ∞, at which gp is Euclidean. Broader implications are discussed.

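
    For reference (my addition, not part of the quoted abstract), the CHSH inequality referred to above reads, for correlation functions E at detector settings a, a', b, b':

    \[
    |S| = |E(a,b) - E(a,b') + E(a',b) + E(a',b')| \le 2 \quad \text{(local realism)},
    \qquad |S|_{\mathrm{QM}} \le 2\sqrt{2}.
    \]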

    A particle of energy is just like an infinite universe of matter with a relatively infinite amount of mass. Everything works the same. Fall anywhere in space, no matter how seemingly void, & you're gonna land on matter if you're small enough. There isn't an empty place anywhere. 

    The idea behind any attempt to build an ether theory is just that empty space ought not be really empty. We have two good reasons to think so: first, electromagnetic signals behave undoubtedly as waves; since they propagate even through intergalactic space, there must be some thing there (everywhere), in which they do wave. Second, quantum theory predicts that vacuum has physical effects, such as the Casimir effect, which is now experimentally confirmed [1].
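
    For the Casimir effect mentioned in that last quote, the standard prediction for two ideal parallel plates separated by a distance d is (added here for reference):

    \[
    \frac{F}{A} = -\frac{\pi^{2}\hbar c}{240\,d^{4}},
    \]

    which for d = 1 μm gives an attractive pressure of roughly 1.3 mPa, the order of magnitude that has been confirmed experimentally.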

  8. 18 minutes ago, swansont said:
    !

    Moderator Note

    This is a science discussion site, so appeals to the fallacy of personal incredulity have no place here. As it stands, this does not measure up to our guidelines for speculations discussions. No model, no testable predictions, no evidence. If that's not fixed, then this will be closed.

     

    Explain how there's more evidence for, or testable predictions in, non-locality than in this first. Any model can be tested & any well-reasoned theory can be modeled (which is what Mordred said he would do if said thread were posted). There was no true evidence for the superpositions in space-time that QM predicted; this is why Einstein opposed it. This doesn't actually violate C, & it doesn't assume the non-commutation of action like in the Standard Model's version of QE, which there's really no evidence for.

    I'll tell you what: move this to the trash can until someone models it. That seems to be the only issue here.

    The right theory of everything may lead us to stable fusion, which could cheaply turn small amounts of lead into smaller amounts of gold, which could then be replicated into large amounts of gold. Who said money doesn't grow on trees? A ToE could tell us how fast future spacecraft could ever possibly go. A ToE could provide avenues for computation & communication that bypass the speed of light. Such a theory could lead to a technological Utopia, just as Einstein's theory of General Relativity led to the atom bomb that ended the catastrophes of global-scale warfare many decades ago.

     

    To start I had to ask myself a few questions:

     

    How do you explain particle states influencing each other, apparently violating C, without violations of Einstein's relativity?

    I think it's less far-fetched than a superposition of DS. The smaller you get, the stronger gravity's influence. Below the Planck length you get >C, because C & h are related.

    Now, how could there be superluminal communication?

     
     
    Two possible explanations:
     
    A: the quantum interpretation: superposition or non-locality action
     
    B: C traverses 1 Planck length in 1 Planck time, therefore C traverses 1/n Planck lengths in 1/n Planck times, so any sub-Planckian curve with abrupt accelerations will lead to superluminal gravity waves. In teleparallelism (gravity = EM) this can link polarities (particle states) FTL via interactions between adjacent qubit cells. It can also collapse a wave (particle scattering) into a particle as the masses attract & merge after being brought into close proximity by that which gives any subatomic particle, or anything in nature, motion; in option B that would obviously be expansion. Expansion = wave function, gravity = wave collapse as well. These forces are obviously mediated by DS & ADS fluctuating by the value of each other's curve to attain a state of thermodynamic equilibrium. If this is about a boundless plane, the equilibrium never occurs. Translate that into mathematical expressions & you have a Nobel prize.
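
    A worked check (my addition) of the Planck-ratio arithmetic option B leans on: covering 1/n Planck lengths in 1/n Planck times is the same ratio as one Planck length per Planck time, i.e. exactly c, consistent with the earlier claim that such sub-Planck interactions don't violate the cosmic speed limit.

    \[
    v = \frac{l_P/n}{t_P/n} = \frac{l_P}{t_P} = c \qquad \text{for any } n > 0.
    \]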
     
    Two fairy tales: one claiming action at a distance, clear violations of the laws of motion defined by GR at the quantum level; the other, the one being modelled in this thread, just applying special relativity & a no-limits argument for Zeno's paradox to explain things like Bell's theorem & the double-slit experiment. I mean, what seems more conceptually sound to you?
     
    A: that a subatomic particle, the smallest unit of measurement, is a small expanding universe just like ours, because it seems to behave just like our universe as per particle-wave duality. And that you can always find smaller objects in nature.
     
    B: that a subatomic particle has no smaller components, is the mystical building block of nature, & turns into pixie dust as a wave. & that objects cannot be smaller than a Planck length (these are old-school, Copernicus-era philosophies still taught in today's world).
     
    This "microversal cellular automaton interpretation" does not implore superpositions or non-locality (a particle being in multiple places at one time); which to me would be even more far fetched than my interpretation, yet that is the interpretation represented by the standard model. I quote Gerard t'Hooft:
     
    "Einstein had difficulties with the relativistic invariance of quantum mechanics (“does
    the spooky information transmitted by these particles go faster than light?”). These,
    however, are now seen as technical difficulties that have been resolved. It may be consid-
    ered part of Copenhagen’s Doctrine, that the transmission of information over a distance
    can only take place, if we can identify operators A at space-time point x1 and operators
    B at space-time point x2 that do not commute: [A, B] 6= 0 . We now understand that, in
    elementary particle theory, all space-like separated observables mutually commute, which
    precludes any signalling faster than light. It is a built-in feature of the Standard Model,
    to which it actually owes much of its success.
    So, with the technical difficulties out of the way, we are left with the more essential
    Einsteinian objections against the Copenhagen doctrine for quantum mechanics: it is a
    probabilistic theory that does not tell us what actually is going on. It is sometimes even
    suggested that we have to put our “classical” sense of logic on hold. Others deny that:
    “Keep remembering what you should never ask, while reshaping your sense of logic, and
    everything will be fine.” According to the present author, the Einstein-Bohr debate is not
    over. A theory must be found that does not force us to redefine any aspect of classical,
    logical reasoning.
    What Einstein and Bohr did seem to agree about is the importance of the role of an
    observer. Indeed, this was the important lesson learned in the 20th century: if something
    cannot be observed, it may not be a well-defined concept – it may even not exist at all. We
    have to limit ourselves to observable features of a theory. It is an important ingredient
    of our present work that we propose to part from this doctrine, at least to some extent:
    Things that are not directly observable may still exist and as such play a decisive role
    in the observable properties of an object. They may also help us to construct realistic
    models of the world.
    Indeed, there are big problems with the dictum that everything we talk about must be
    observable. While observing microscopic objects, an observer may disturb them, even in
    a classical theory; moreover, in gravity theories, observers may carry gravitational fields
    that disturb the system they are looking at, so we cannot afford to make an observer
    infinitely heavy (carrying large bags full of “data”, whose sheer weight gravitationally
    disturbs the environment), but also not infinitely light (light particles do not transmit
    large amounts of data at all), while, if the mass of an observer would be “somewhere in between”, this could entail that our theory will be inaccurate from its very inception."
     

    Key Terms

     

    · Quantum observer/entanglement/eraser/Venn diagram paradox

    · Teleparallel quantum gravity

    · Quantum cellular automaton interpretation

    · anti-/de Sitter space, AdS/CFT duality

    ·  fractal geometry/scale relativity/special relativity beyond the speed of light

    · White Hole/Black Hole

    · Higgs field/dark matter/gravity waves

    · The cosmological constant/dynamic dark energy/Hawking radiation

    · FRBs & the OMG particle

    · The transplanckian problem

    · Dark Flow/Cosmic Bruising

     

    Let's start with the Black Hole. What is it exactly? My theory:

     

    Matter with positive dimensions doesn't get turned inside out with the spacetime that enters the event horizon of the black hole; matter cannot pass into perpendicular dimensions. Instead, the micro black holes within all quasar matter combine with the macro black hole (parallel spacetimes merging in anti-de Sitter space, leading to dark energy there).


    This is why matter jets are so powerful around black holes, & it's the source of FRBs & OMG particles (relativistic protons): the microverses of these relativistic particles don't have any micro black holes to delay expansion at first, until they pick them up as they meet other radiation propagating through space. Stripping microverses of their black holes gives you more energy than matter/antimatter annihilation.

    A black hole is a white hole in AdS space (with negative three dimensions). Everything that's contracting in de Sitter space is expanding in AdS space. So black holes are perpendicular universes that have negative dimensions, & when black holes merge in those perpendicular universes, the direction of expansion inside the inside-out dimensions of a black hole contracts & the black hole shrinks. That's Hawking radiation for you.

     

    Now, let's start with the idea that the smallest objects we can measure (the subatomic world) are miniature versions of the largest objects we can measure (the CMB & the current observable universe), and then apply special relativity to define time & motion in such miniaturized cosmoses. This kills two birds with one stone: for one, it tells us what's beyond the cosmic event horizon, and two, it tells us what's inside of a subatomic particle.

    Let's start with the atom:

     

    https://www.youtube.com/watch?v=EOHYT5q5lhQ

     

    We have this notion of black hole atoms; now suppose that these micro black holes are crucial in the formation of all atoms:

    [image]

    This is an atom with an atomic nucleus composed of one proton. However, most atoms have multiple protons with charge as well as a charge-neutral "neutron" which, unlike said protons, flies apart within 10 minutes when freed from the nucleus. Why? Let's break it down:

     

    [image]

     

    The protons of adjacent atoms feed each other, as the micro black hole is in the process of consuming its accretion disk; that's negative charge, & all material around said nucleus, even the electrons that form, is going inward. Going outward would be positive charge, & that is when the proton has fully consumed its accretion disk: stripped of its mass, the matter of the accretion disk flows back outward in the form of Cherenkov radiation. That is positive charge, & as that proton's BH evaporates, the negative protons within the nuclei of adjacent atoms grow. The cycles must be synchronized perfectly or the atoms will annihilate into light. A proton with negative charge will always become positively charged. Now, this is also an example of how quantum entanglement comes about: these atoms are causally linked by micro-gravity. It also solves the anti-matter problem; what annihilation would leave in its wake would be rapidly evaporating microversal-scale cosmic-unit black holes in a pre-CMB, CDM state, just like a big rip. Whatever is pulled back by the expansion generated by BH shrinkage would have to be causally synchronized by gravity, just like all particle pairs. You can have a duodecillion black holes, each with a different spin; if they merge, there's only going to be one spin.

     

    [image]

     

    As you can see here, the neutron is a glorified subatomic pulsar, held together by the collective gravity of the protons. 

     

    Distortions in spacetime around black hole atoms change location: dips in the fabric of spacetime elliptically orbit the micro mother black hole (Planck particle, micro cosmic unit) at the center of the atomic nucleus. These orbital dips in spacetime are hot convergence points in waves, where particles form around atomic nuclei.


     

    This new picture of microversal cellular automata tells us more about how our universe might operate beyond its cosmic horizon, i.e. dynamical dark energy:

     


     

    You have late-entropy-stage mother black holes, increasing the space-time via evaporation, surrounding early-entropy-stage CMB bodies, causing the isotropic QG plasma to break apart into the first atoms there. What happens with this new cosmic portion is anisotropic black hole sizes. As the universe section ages, macro BHs become larger as they consume mass from matter, & micro black holes (the Higgs field that gives matter-energy mass, & DM) shrink overall, until a new late-entropy section forms overlapping the previous one, where those monster black holes of the previous one have been reduced to micro BHs.

    Now those 2 sources alone would end in an omega entropy state if not for source 3: the horizon of the white hole with contour = infinity is constantly absorbing parallel white holes beyond its horizon, continuously adding additional energy-matter & space-time. This is the positive approximately 2.5 dimensions (fractal geometry, which allows the scale-relativity Lorentz transformations in this model) that we experience; beyond that is an approximately negative 2.5 dimensions. These two space-times (de Sitter & anti-de Sitter space) are literally the same but with opposite directions of space-time & negative matter-energy. Anti-de Sitter space is literally these perpendicular dimensions that give energy-matter mass: as space-time gets turned inside out flowing through their event horizons, its curtain is yanked. These black holes, of an infinite variety of sizes, occupying literally every point in space, are behind the fundamental interactions (gravity, electromagnetic, strong & weak nuclear). This 5-dimensional hypersphere is like a cosmic yin-yang symbol, or an infinitely long snake eating its own tail.

     

    This explains dark flow & cosmic bruising. However, when applied to a microverse, the cosmological constant in accelerated fractional transplanckian spacetime is how radiation propagates through space. Adjacent microverses overlap as they expand, reigniting quark-gluon plasma states, which we observe as particles. I'd look at a photon as a microverse in a hot dense state, or the electron that orbits the micro-SMBH atomic nucleus as a larger micro-plasma sphere with super-micro black holes in it. These little CMBs are gravitational convergence points in subatomic particle waves, which are microverses that are too cool & spread out to actually see.

    Protons or electrons are in fact points where sub-quantum bits in continuous spacetime (such as particle waves) face the greatest compression force (into non-wave particles) within atomic orbits. Part of what the microverse interpretation entails is that as space becomes infinitesimal, & time gets contracted nigh-infinitely, matter can assume the exact same forms as it does as space becomes vast & time dilates to a near standstill (the cosmic event horizon). This is because the ratio between the size of an atom & a galaxy is the same in a microverse as it is in the universe, so time dilation is equitable in exactness, the same forces are experienced by matter on different scales, & the patterns of matter formation become poetically re-emergent: basically, what was a length of 1.6x10^-35 meters gets a new length of 1.6x(10^-1(35^(1.6x10^35))) meters, a velocity of C gets a new velocity of C^C, & the same goes for Planck time... The reason we can't see past the cosmic event horizon is that before the CMB it was slowly converging adjacent universes in a state exactly as ours is now, which would have been outshone by the CMBR that followed it.
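
    For reference (my addition), the 1.6x10^-35 m figure used throughout is the Planck length, defined together with the Planck time as

    \[
    l_P = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.616\times10^{-35}\ \mathrm{m},
    \qquad
    t_P = \sqrt{\frac{\hbar G}{c^{5}}} \approx 5.39\times10^{-44}\ \mathrm{s},
    \qquad
    \frac{l_P}{t_P} = c.
    \]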

     

     

    Now, let's explain the observer effect, which can also be attributed to the quantum eraser:

     

    [image]

     

    As you can see, the protons with negative charge get heated when compressed into the double slits; this is attributed to the wave function. However, when the mass of the photo-electrons is added to that same proton beam, the negatively charged protons in the double slit become positively charged, as the accretion disks get stripped from the protonic micro-BHs by the mass of the protons interacting with the photo-electrons behind them, causing positively charged protons to become negatively charged.

     

    This microversal cellular automaton interpretation is much more versatile than QM; it works in explaining virtually anything in QM. For instance, let's use the quantum Venn diagram paradox:

     

    https://www.youtube.com/watch?v=zcqZHYo7ONs

     


    Between wave functions, the photons adopt new polarities as they expand through space-time. More polarizing filters = a greater variety of polarities.
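
    A small sketch (my addition, not the post's model) of the standard Malus's-law arithmetic behind the three-polarizer demonstration in the linked video; per photon, these transmitted fractions become detection probabilities.

    import numpy as np

    def transmitted(fraction, angles_deg):
        """Fraction of light left after a stack of ideal polarizers at the given angles,
        assuming the input is already polarized along the first angle."""
        for prev, cur in zip(angles_deg, angles_deg[1:]):
            fraction *= np.cos(np.radians(cur - prev)) ** 2   # Malus's law per stage
        return fraction

    print(round(transmitted(1.0, [0, 90]), 3))       # 0.0  -> crossed polarizers block everything
    print(round(transmitted(1.0, [0, 45, 90]), 3))   # 0.25 -> a 45-degree filter in between lets light through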

     

    Now let's look at the 3rd type of microverse. Quark-gluon plasma is the absolute densest state matter can take. We see it in the cores of neutron stars, in the discs of quasars as matter is folded upon itself by compressing spacetime (gravity/mass/dark matter) around macro black holes, & in the cosmic microwave background radiation. Any denser, & matter is just a macro black hole, as there's no space between micro black holes. It's composed of micro quasars with micro black holes at their cores, barely held apart by micro expansion. Unlike vacuum radiation & the atomic world, these microverses are non-anthropic (no stellar eras) because less entropy equates to less complexity. Quark-gluon plasma is the only state of matter composed entirely of microverses that are exclusively the same as itself. Atoms & vacuum radiation will have microverses with atoms, quark-gluon plasma & vacuum radiation within them; quark-gluon plasma is only composed of microverses that are entirely filled with quark-gluon plasma.


  10. 40 minutes ago, koti said:

    It’s so hilarious that there just might be a small grain of truth in it, I’ll be watching :)

    How do you explain particle states influencing each other, apparently violating C, without violations of Einstein's relativity?

    I think it's less far-fetched than a superposition of DS. The smaller you get, the stronger gravity's influence. Below the Planck length you get >C, because C & h are related.

    24 minutes ago, swansont said:
    !

    Moderator Note

    You were told to not post pet/alternative theories here, or anywhere but their own thread. You also said you wouldn't post in this thread again. Now I'm making that required - stay out of this thread.

     

    Right, I have my own thread @ hypography that I PM'd you; my idea literally hasn't changed since that old thread's conception. The closest to it was the deterministic cellular automaton interpretation PDF on the last page, but it doesn't explain everything else in my thread @ hypography. There isn't literally anything out there on it yet; someone who knows physics math would have to describe it.

  11. 5 minutes ago, Mordred said:

    Actually, in this particular instance the model used describes particles as a BH with charge as source/sink attraction/repulsion.

    Yes, I know this model well enough to model the paper and link above. Yes, it is strictly classical.

    However, what if I show you how it works, to provide your best bet at developing your model?

    Are you also willing to let me show the BH interpretation as an artifact of the metric used (solid lattice gauge theory)?

    Absolutely not; the URL says gravity is stronger inside atoms, which is the only reason I linked it. God, please, no more diverting my theory; they're scattered enough already.

    29 minutes ago, koti said:

    Okay, so we have miniature planckian size black holes inside the atoms with accretion disks, evaporating - everything just like a standard black hole which we have in the large scale. Sounds like a decent B class scifi movie narrative. I’d watch that :)

    It's trippy but it solves ALL of the problems.

  12. 32 minutes ago, Mordred said:

    Modelling this is your responsibility not mine.

    So far you haven't supplied anything supportive.

    Why would I waste my time modelling something I know will not work?

    Let me ask you a question: how do you propose to explain the massive difference in decay rates between the proton and the neutron, which are incredibly close in mass?

    Both are matter particles. The proton's mean lifetime is well beyond the age of the universe, i.e. we cannot detect any significant decay.

    The neutron by itself (not in an atom ) is roughly 10 minutes.

    Answer that with your model proposal

     

    When subatomic particle waves of the protons interact, they feed each other their collective material, collapsing wave functions. Like big rips & big bangs in the microverses of almost all subatomic particles.

    The neutron flies apart because, as I said, it's between the up-quark & down-quark matter jets at the poles of the BH atom, held together entirely by its gravity underneath or adjacent to the proton quasar discs.

    Gravity is a lot stronger near the Planck scale, resembling the other forces of the quantum world in its strength:

    https://phys.org/news/2009-05-mini-black-holes.amp
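
    To put numbers on "a lot stronger near the Planck scale" (my own back-of-envelope sketch, not from the linked article): between two protons, gravity is weaker than electromagnetism by roughly 36 orders of magnitude, and the two couplings only become comparable for masses approaching the Planck mass.

    import math

    G, hbar, c = 6.674e-11, 1.0546e-34, 2.998e8   # SI values
    m_p = 1.6726e-27                               # proton mass, kg

    alpha_grav = G * m_p**2 / (hbar * c)           # dimensionless gravitational coupling of two protons
    alpha_em = 1 / 137.036                         # fine-structure constant

    print(f"gravitational coupling   ~ {alpha_grav:.1e}")         # ~ 5.9e-39
    print(f"electromagnetic coupling ~ {alpha_em:.1e}")           # ~ 7.3e-03
    print(f"Planck mass ~ {math.sqrt(hbar * c / G):.2e} kg")      # ~ 2.18e-08 kg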

  13.  

     

    5 hours ago, Mordred said:

    You don't have a model till you have mathematical testability. 

    No absolutely not.

    Are you not aware we have electron microscopes powerful enough to image atoms? We certainly do not see micro black holes in atoms.

    Wild conjecture, contradicted by observational evidence; enough said.

    I am done trying to help you out. You have your articles. Study them; there is nothing about micro black holes in those articles, and they support my last post. They also include the uncertainty principle in their mathematics.

    Enough said on your last two paragraphs.

    They weren't intended to steer away from anything, but to provide you the aids to comprehend what you're reading and to answer your previous post.

    specifically this post

    https://www.sciencedaily.com/releases/2008/01/080122154357.htm

    Here are images of an atom. See any micro black holes?


    Secondly, if your theory were correct, there would be no stable particles or atoms, as micro black holes would radiate away via Hawking radiation.
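
    For context on that objection (my addition, not part of either post): the standard Hawking lifetime formula t ≈ 5120·π·G²·M³/(ħc⁴) gives, for a black hole of roughly the Planck mass, an evaporation time of order 10^-39 s.

    import math

    G, hbar, c = 6.674e-11, 1.0546e-34, 2.998e8          # SI values
    m_planck = math.sqrt(hbar * c / G)                    # ~ 2.18e-8 kg

    def hawking_lifetime(mass_kg):
        """Evaporation time of a Schwarzschild black hole: 5120*pi*G^2*M^3 / (hbar*c^4)."""
        return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

    print(f"Planck-mass black hole evaporates in ~ {hawking_lifetime(m_planck):.1e} s")   # ~ 8.7e-40 s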

    God damn it I thought you read my thread, Mordred. 

    https://phys.org/news/2011-05-mini-black-holes-atoms-earth.amp

    https://www.google.com/url?sa=t&source=web&rct=j&url=https://haramein.resonance.is/wp-content/uploads/Nexus-Nov-Dec-2013-Black-hole-at-heart-of-Atom-ENGLISH.pdf&ved=0ahUKEwi-o4qJrenXAhVq0oMKHZdrAaAQFgizATAZ&usg=AOvVaw1ELLH9iqPU0BwhQlRdA0tE

    https://www.scientificamerican.com/article/x-ray-lasers-make-atoms-act-like-ldquo-black-holes-rdquo-in-molecules/

    Micro-BH evaporation is what would turn a proton with negative charge (microverses with micro-BH mass going in) into protons with positive charge (microverses without micro-BHs, or with mass coming back out). The evaporating black hole atom would keep reforming as charge transfers mass from atom to atom.

    http://www.scienceforums.com/topic/30597-the-theory-of-everything/

    This explains teleparallel gravity.

    & again if particles & waves are microverses, they have smaller black holes. Matter doesn't go into the event in my theory, only the mass which comes from micro black holes. That's why black holes turn matter into dark energy as they grow. The macroblack hole that forms at the core of a giant dying star is formed by all these black hole atoms combining during fusion. It's also what keeps galaxies from flying apart, one day the smbh of our galactic core will consume all the bha's in out galaxy, & galaxy will fly apart several adjacent parts of the universe beyond our cosmic event horizon will converge this wayward cherenkov radiation at that point, where it will reignite a qg plasma state, which will be pulled apart by these shrinking monster black holes (from the big rip) surrounding it, creating the first atoms in a new inflationary universe. This is what all radiation is doing when it goes from solid particles to waves.

     

    All of this needs to be modelled

    This is deep inside an atom, so you wouldn't see it. In Einstein's time a BH was theoretical; now we find one at the center of every galaxy.

    All of these different ideas are related to one theory. I suggest someone read my thread on hypography, create a mathematical model, & publish it in physics journal A, if you want the Nobel Prize for physics.

  14. Apply this notion to my theory,

    So as for the charge of a proton, we see it as a rotating & spinning body, yet most atoms have protons & neutrons. My theory gives a micro black hole at the center of every atom, so the atoms with both 1 proton & 1 neutron have 1 quasar disc & 1 matter jet at each pole. Positively charged protons have matter spiraling into waves away from the quasar; negatively charged protons have waves collapsing into the quasar. Quarks would also be matter-jet-like. A black hole atom may have many quasar discs.

    Would the behavior of the quantum world not be the same as EM, WNF, & SNF? So isn't everything reducible to deterministic relativity when my other ideas are combined with this? Does my theory not account for dynamic dark energy & dark matter? This seriously needs to become a mathematical model. There's a dozen members here who could make it one.

    1 hour ago, Mordred said:

    There is a handy memorization rule to use for QM (specifically) below, but in QFT the fields are the operators.

    operator = local = particle, or more accurately a field excitation; it has a finite waveform/function, i.e. waveforms that restore to the ground state of the field within finite return-to-ground-state crossing points. All crossings of two waveforms, either to the ground state or to any other waveform, are finite points. Planckian.

    field = propagator = non-local = (global = neighboring states within range of time-dependent causality). Neighboring states interact with one another via gauge vector bosons (virtual particles, or more accurately field fluctuations: waveforms with indeterminate amplitude boundaries). An operator requires a unit of quanta = observable/measurable. Sub-Planckian.

    VERY IMPORTANT: The uncertainty principle applies to both fields and particles. All points of measure are affected.

    Now here is the thing: by the above one must recognize that nothing in the above describes a corpuscle (a solid-like object); these are specifically waveforms. All principal particle quantum numbers have wave-functions that model the waveforms (the excitations that define the particle).

    See here for the atom in regard to electrons and orbitals.

    https://www.angelo.edu/faculty/kboudrea/general/quantum_numbers/Quantum_Numbers.htm#Principal

    So here is how this works. First off, the purposes of QM and QFT aren't really the same. QM concerns itself more with particle collisions, so scattering effects in general from intersecting excitations. Whenever two waveforms overlap they cause interference with one another (destructive and constructive interference). QFT concerns itself more with the fields themselves as the primary focus.

    To quote from the following link under Principle of Superposition (well defined in that article even though it's a simplified link; it includes interference of different overlapping patterns):

    "When two waves interfere, the resulting displacement of the medium at any location is the algebraic sum of the displacements of the individual waves at that same location" Or due to Heisenburg uncertainty. This is your Stochastic state, it is this arena that statistical probabilities apply as this is your indeterminant state. Stochastic  with definition {b]:randomly determined; having a random probability distribution or pattern that may be analyzed statistically but may not be predicted precisely [/b]

    http://www.physicsclassroom.com/class/waves/Lesson-3/Interference-of-Waves
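
    A minimal numerical sketch (my addition, not part of the quoted post) of that superposition rule, the net displacement being the algebraic sum of the individual waves:

    import numpy as np

    x = np.linspace(0, 2 * np.pi, 1000)
    wave1 = np.sin(x)
    wave2 = np.sin(x + np.pi)               # half a cycle out of phase with wave1

    constructive = wave1 + wave1            # in phase: amplitudes add
    destructive = wave1 + wave2             # out of phase: amplitudes cancel

    print(round(np.max(np.abs(constructive)), 3))   # ~ 2.0
    print(round(np.max(np.abs(destructive)), 3))    # ~ 0.0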

    Now, once you have determined/measured a wave-function, probability no longer applies to that wave-function.

    Now consider the following logical argument from the above lemmas. Due to the interference nature of wave-functions, via constructive interference we get wave-functions of higher amplitude than the ground state that become sharply defined (excitations with a quantum of energy). The ground state for QM is the zero-point energy state.

    https://en.wikipedia.org/wiki/Zero-point_energy

    Now as to the uncertainty, well, let's think back to all that interference. Is it any wonder that there will always be interference present? That it will be present at every point of measurement? In nature there is never truly a pure state; it will always have some overlapping waveforms. Even the very act of taking a measurement will cause interference.

    These articles that recommend the stochastic treatments of geometry are applying the above, whereas under relativity the field geometry is determined to describe the particle geodesic. They don't apply the uncertainty principle mainly because at the macro scale, in particular the universe, the effect is negligible compared to measurement error, except over vast volumes (i.e. it can contribute to the cosmological constant). BOTH QM and QFT are indeterministic treatments (all possible); general relativity is a deterministic treatment.

    By the way, thanks for the above link; I love reading these types of articles. This one is quite good. Also, excellent question above; hope this helps you understand this article.

    Here is an assist to all readers: a holomorphism is an overlap of fields, either due to excitation overlap or to fluctuation overlap. This corresponds to the "neighbors" descriptive above. Under geometry, the manifold will have a boundary condition that provides the causality region of that manifold or state. This boundary condition also applies to its own range of influence, as well as to IR (infrared, extremely low frequencies) or UV (extremely high frequencies) cutoffs.

    For Gaussian fields, see the Gauss-Bonnet theorem, which applies to Gaussian treatments of the above as well.

    https://math.berkeley.edu/~alanw/240papers00/zhu.pdf

    Dear audience, this long post was designed with the sole purpose of diverting attention away from every point I want your collective attention turned toward. QM & QFT are not at all a part of my theory; they're a mathematical nightmare of superpositions.

    Forget the uncertainty principle. We know how bodies in the large-scale universe behave; that can serve as a formula for how superluminal (sub-Planck-scale relativity) & commutative (frame-dragging) gravity in adjacent sub-quanta cells will make a wave-particle behave.

    Micro black holes in all of the baryonic matter & energy could account for all the mass in the universe, including that of dark matter. Higgs field demystified.

    In teleparallel gravity, they could account for the other three fundamental interactions, or eigenvalues, as well.

    & so much more involving LIGO's detection of gravity waves.

    Okay, stochastic gravity theory is for sub-Planckian gravity:

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5660882/

    But those are gravity fields, not waves (frame dragging). Waves would be superluminal, & an entire microverse would have bodies that abruptly accelerate & decelerate constantly; also, teleparallel gravity would make it synonymous with atomic charge. This would allow the polarity of one particle to influence that of another particle faster than the speed of light, via commutation of sub-Planckian gravitational interactions. No superposition or non-local action between particles that are in two places at once... again, because C covers 1 Planck length in 1 Planck time, it will cover 1/n Planck lengths in 1/n Planck times even if n > 1.

    This is within the parameters of special relativity, or scale relativity, acting beyond the Planck length.
