Knight of Steel

Members
  • Posts: 3
  • Joined
  • Last visited

Knight of Steel's Achievements

Lepton (1/13) · Reputation: 0
  1. While the 4th dimension (a brane copying itself) is a popular view of time, it's based on finite space. If the plane is vertically infinite, you only need a plane that copies itself as it folds, creating the illusion of depth, to produce the effect of time. Time is created when the horizontally copied layers of the folding plane (viewed from the flat side) are themselves copied by larger folds in the plane that are of a higher order of magnitude. In that framework, time is the third dimension, not the 4th.
  2. No, no, I used the wrong number somehow. It's not off by ten atoms; it's off by an entire order of magnitude of atoms. With the count being a googol and a quarter, I'm missing one tenth of 1.25 googol atoms! That would NOT check out. Fortunately, I had used the wrong super tp, and I also hadn't considered that he used 13.6 billion light years as the radius ("For this calculation, I'll start with a naive value for that radius, of 13.6 billion light years"), which was why I had /2 in the r.
1.5915449e+39 × 6.81e+24 = 1.0838421e+64 meters for a super angstrom, or 1.1455894e+48 light years.
1.0838421e+64 / 1.1e-10 m gives you your denominator: 9.85311e+73.
So 1.1e-10 / 9.85311e+73 = 1.1163988e-84 meters for a sub angstrom, which is 1.4331796e+49 times smaller than a Planck length.
Let's use an actual calculation to confirm: 2.130e+108 atoms for every 13.6 billion light years.
1.1455894e+48 / 13,600,000,000 = 8.4234515e+37
2.130e+108 × ((4/3 × π × 8.4234515e+37)^3) = 9.356548e+223 angstroms in a super angstrom
1.1e-10 / 1.1163988e-84 = 9.8531098e+73. (4/3 × π × 9.8531098e+73)^3 = 7.030493e+223 sub angstroms in an angstrom
Now THAT'S within approximation.
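The corrected figures in the post above can be replayed with a short script. Treat it as a sketch of the post's own arithmetic, not independent physics: the light-year conversion (9.4607e+15 m) and the rounded Planck length (1.6e-35 m) are my assumptions, chosen because they reproduce the quoted numbers.

```python
import math

LY = 9.4607e15          # meters per light year (assumed conversion)
L_PLANCK = 1.6e-35      # Planck length in meters (rounded; matches the post's ratio)
ANGSTROM_H = 1.1e-10    # width of a hydrogen atom in meters, as used in the post

# Super angstrom: c's distance per super tp, times Planck lengths per atom
super_angstrom = 1.5915449e39 * 6.81e24          # ~1.0838e+64 m
super_angstrom_ly = super_angstrom / LY          # ~1.1456e+48 ly

# Sub angstrom: shrink an angstrom by the same ratio
ratio = super_angstrom / ANGSTROM_H              # ~9.8531e+73
sub_angstrom = ANGSTROM_H / ratio                # ~1.1164e-84 m
times_smaller_than_lp = L_PLANCK / sub_angstrom  # ~1.433e+49

# Cross-check: atom counts at both scales, as in the post
scale_up = super_angstrom_ly / 13.6e9
atoms_up = 2.130e108 * (4 / 3 * math.pi * scale_up) ** 3   # ~9.36e+223
atoms_down = (4 / 3 * math.pi * ratio) ** 3                # ~7.03e+223
```

The two cross-check counts land within the same order of magnitude of each other, which is the "within approximation" claim the post makes.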
  3. You have to back your statements up with evidence. ✓
Anecdotes are not evidence. ✓
Being challenged to present evidence is not a personal attack. ✓
Calling the people who challenge you "brainwashed" or "stupid" does not further your argument. Neither does throwing a tantrum. ✓
Published research (peer-reviewed) is more credible than the alternative. But peer review is not perfect. ✓
When you have been shown to be wrong, acknowledge it. ✓
Just because some paper or web site agrees with you does not mean that you are right. You need evidence. ✓
Just because some paper comes to the same conclusion as you does not mean your hypotheses are the same. ✓
Provide references when you refer to the work of others. Make sure the work is relevant, and quotes are in the proper context. ✓
Disagreeing with you does not make someone "closed-minded." "Thinking outside the box" is not a substitute for verifiable experimental data. ✓
Mainstream science is mainstream because it works, not because of some conspiracy. (My lambda-max equation comes from the first solution set of black body radiation and the photoelectric effect.) If you think you have an alternative, you have to cover all the bases, not just one experiment (real or gedanken). One set of experimental results that nobody has been able to reproduce is insufficient. (I have over a dozen equations that check out.) ✓
Respect is earned. People who are resident experts, mods and administrators have earned those titles. ✓
Be familiar with that which you are criticizing. Don't make up your own terminology, and know the language of the science. A theory is not a guess. ✓
If nothing will convince you your viewpoint is wrong, you aren't doing science. That's religion. ✓
All theories are of limited scope (even mine). Just because a theory does not address some point you want it to does not automatically mean it's wrong. ✓
Not understanding a concept, or discovering that it's counterintuitive, does not make it wrong. Nature is under no obligation to behave the way you want it to. ✓
You are entitled to your own opinion, but you are not entitled to your own facts. Science cares very little about your opinion, as it has little relevance to the subject. (Just because it's a theory does not mean it's a fact; I claim that not a single morsel of a pixel of a single letter in this thread is factual.) ✓
If you want to be taken seriously, you have to address criticism of your viewpoint. ✓

The Fundamental Interactions

I hypothesize that the quantum eraser is the only fundamental interaction, an interaction between two boundless branes that are perpendicular to, and the corporeal reverse of, one another. This comes from William James Sidis' The Animate and the Inanimate, wherein he became the second polymathic savant to predict the existence of black holes, after Einstein. His black hole was different from Einstein's: it was a shard of a reverse universe, existing perpendicular to our own. That is the black hole in this theory, but this theory goes into far more speculative depth, which leads to an equation that yields Einstein's tensor when accounting for frame dragging, because v(g) will equal c. What I am about to describe using a series of thought experiments is really a metaphor for the true state of things. Things may be more fundamentally classical than the BBT would have you believe, though they may be more or less oversimplified in the following thought experiment. However, what I'm about to imagine with you as a metaphor for the true nature of reality really was necessary in order to build an equation for the quantum eraser per unit measured. https://ibb.co/dL4Yhx The black spots are white on one side of the plane, and the white spots are black on the other side.
Imagine that these two-dimensional circles suddenly begin to grow a third dimension. Now picture Pac-Man chomping his mouth: that's what the circles will look like as they grow a third dimension. Viewed from the flat side of the plane (which reduces the circles to lines), this chomping of line into circle moves with a velocity equal to the speed of light until a sphere is formed. The smaller lines/circles/spheres between them are copied as sections are added to the larger circle as it becomes spherical. Imagine the replication of not only the fractal qubits between the regular qubits, but the fractals of their fractals as well, ad infinitum. There is theoretically a point where the fractals of two different normal qubits connect, as the qubits' doubly or triply (ad infinitum) fractalized counterparts branch off to fill every single gap of space created by the curve of separated circles. Eventually the fractals of two different circles touch at infinitely small scales. It's like Zeno's paradox: imagine trying to cross a room by taking only half of the previous step each time; it happens after infinite time. You may be wondering how there can be horizontally adjacent spheres as the brane forms from a plane (in which there can be only vertically adjacent circles, because two-dimensional space has no depth). Two concepts answer this: one is time as the third dimension that adds depth, the other is infinity. Cantor's infinity is why you don't need a fourth dimension as one finite brane moves through time to create a finite superbrane.
You only need one sheet, one 2D plane, folding into the third dimension on each inverse side of reality. That's because the Pac-Man chomping effect you'd see looking at the plane edge-on (where it's an infinitely thin line) means the vertical circles are stacked infinitely high and infinitely low, so eventually bigger spheres start growing: a process that copies the forming vertical spheres, from the flat two-dimensional perspective you had in mind, horizontally as well, which adds infinitely more planes in front of the first one. Contact between a white qubit and a black qubit leads to zero: equitable nullification of both inverse branes. This is the essence of gravitation, the original pull that begins the infinite torsional pendulum of spacetime and matter-energy, and how our cosmos appears to revolve at every level. It is also why the white holes wrap around black holes like a hollow sphere (and in AdS, the inside-out of this M-brane, the black holes form a hollow sphere around the white holes, and black is the new white), until the black hole dissolves the hollow spherical quasar around it, and vice versa. This process is why local increases in thermal density will eventually increase entropy locally as well. (That's how energy conservation is temporarily broken, why every antiproton becomes a proton, and why there seems to be more matter than antimatter: it takes longer for entropically fleeting energy to aggregate into matter than for matter to break apart into energy.) https://pdfs.semanticscholar.org/153b/ab77fdb0476030acb2e5a4ad2b54cae658da.pdf The speed of light in this theory will be relative, not constant, because my f(n) equation will yield a fraction, and that means it will employ fractal geometry. As the addition of luminal velocities can become superluminal only in two-dimensional space, it can also be superluminal within any dimension that's less than 3.
There are somewhere between 4 and 6 real physical dimensions of the perpendicular inverse bi-brane at any given point in space and time, so:

~|2x| ± ~|2x| = n, where 6 > n > 4 and 2 > x > 1
f(n) = (λmax)·((4π/3)r³)
c(f(n)) = c·x, where f(x) = 6/(n/(4π/3)^(1/3)), where n > 6
c(f(n)) = c·x, where f(x) = 4/(n/(4π/3)^(1/3)), where 4 > n
n = the speed of gravitational wave propagation; n also equals the pilot waves (Bohmian mechanics).

The microscopic pilot wave is an aggregate of infinitesimal quantum eraser phenomena; gravity itself is a collection of pilot waves. When combined with the quantum eraser effect seen in the quantum eraser experiment, my f(n) equation mathematically supports the existence of a changing inversive dimension, such as the 4-6 inverse-dimensional bi-brane in my hypothesis, when applied to measuring the speed of light through various mediums in the real universe.

Ex.) How fast is the speed of light in a dense medium such as the heart of the sun? c at the center of the sun (which is 160 billion times denser than the surface) is 0.00551512557 m/s (covering the sun's radius in 4,000 years, spending the vast majority of that time in the core). My equation gives the average speed of light throughout the entire sun in m/s. I found lambda max for the sun online: http://studylib.net/doc/18286845/hw-solution The link says 504 nm, or 5.04 × 10^-7 meters.

f(n) = (5.04 × 10^-7)((4π/3)(695,700,000*)³)   *radius of the sun in meters
f(n) = 7.1086177 × 10^20
c(f(n)) = c·x where f(x) = 6/(n/(4π/3)^(1/3)), where n > 6
c(f(n)) = 299,792,458 · (6/(7.1086177e+20/(4π/3))^(1/3))
c(f(n)) = 325 m/s

The speed of light 13.5 billion years ago was around a million times slower due to ions, as evidenced by a cosmic event horizon that was only a few thousand light years, as opposed to the current one of 13 billion light years. The entire universe was then about as dense as the sun, so the speed of light during the CMB era and my measurement of the average speed of light from the inner layers of the sun to the outer layers are about the same.
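The sun example can be checked in a few lines of Python. This is a sketch of the post's own arithmetic, grouped exactly as in the worked line above (the grouping that actually yields 325 m/s); the solar radius and lambda max values are the ones quoted in the post.

```python
import math

C = 299_792_458           # speed of light, m/s
LAMBDA_MAX_SUN = 5.04e-7  # lambda max quoted for the sun, meters
R_SUN = 695_700_000       # solar radius, meters

# f(n) = lambda_max * (4*pi/3) * r^3
f_n = LAMBDA_MAX_SUN * (4 * math.pi / 3) * R_SUN ** 3   # ~7.1086e+20

# n > 6 branch, grouped as in the worked line: c * 6 / (n / (4*pi/3))^(1/3)
c_avg = C * 6 / (f_n / (4 * math.pi / 3)) ** (1 / 3)    # ~325 m/s
```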
For the average velocity to be in the hundreds of meters per second with a starting velocity in the hundredths of meters per second means that the speed of light would have to increase by 4 orders of magnitude as it escapes the inner layer of a star, and from there light would increase by another 6 orders of magnitude, back to normal speeds, as it escapes the outer layer.

Regarding the universe's current density on the very large scale: the illusion of gravity, c(f(n)), is a few percent faster, because the volume is massive yet not very dense at all, and lambda max is a high integer on that scale, all that free redshifted entropy. This is why expansion overcomes light on that scale.

Ex.) λmax of the background radiation is 1.07 mm; a radius of superluminal galactic expansion is like the distance between the Milky Way and Andromeda, 2.5 million light years.
f(n) = (0.00107)(((4π/3)(2.3651826181452 × 10^22))³)
f(n) = 1.0405037 × 10^66
f(n) > 6, so c(f(n)) = (299,792,458)(6/((1.0405037 × 10^66)/(4π/3))^(1/3))
c(f(n)) = 2.8614552 × 10^-13 m/s
This will later be used as mathematical evidence for dark energy as the result of superluminal gravity waves from beyond the known universe.

On the very small scale (the width of a hydrogen atom within the pseudo-energies of the sinusoidal waveform of a photon in the virtual blueshift of Earth's atmosphere), lambda max is equally minuscule, so light is faster there. We see this phenomenon in neutrinos, Cherenkov radiation and entangled particles.
Ex.) λmax of the chloranil radical anion = 450 nm. Elements such as these would have a radius of about 79 picometers.
f(n) = (4.5 × 10^-7)((4π/3)(7.9 × 10^-11)³)
f(n) = 9.2935662 × 10^-37
Recall: c(f(n)) = c·x, f(x) = 4/(n/(4π/3)^(1/3)), where 4 > n
c(f(n)) = 299,792,458 · (4/(9.2935662e-37/(4π/3)^(1/3)))
c(f(n)) = 2.0799896 × 10^45 m/s
So it would require very faint gravity to overcome the speed of light within that range at that low level of thermodynamic conductivity.
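Both worked examples above can be replayed the same way. One caution: the two examples group the formula slightly differently (the large-scale f(n) cubes the whole (4π/3)·r product and divides n by 4π/3 before the cube root; the small-scale c(f(n)) takes the cube root of 4π/3 alone). The sketch below follows each worked line's own grouping, since that is what reproduces the quoted figures.

```python
import math

C = 299_792_458
FOUR_PI_3 = 4 * math.pi / 3

# Large scale: CMB lambda max (m), Milky Way-Andromeda radius (m)
f_large = 0.00107 * (FOUR_PI_3 * 2.3651826181452e22) ** 3   # ~1.0405e+66
c_large = C * 6 / (f_large / FOUR_PI_3) ** (1 / 3)          # ~2.861e-13 m/s

# Small scale: chloranil radical anion lambda max (m), ~79 pm radius
f_small = 4.5e-7 * FOUR_PI_3 * (7.9e-11) ** 3               # ~9.294e-37
c_small = C * 4 / (f_small / FOUR_PI_3 ** (1 / 3))          # ~2.080e+45 m/s
```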
This is where we come to pilot g-waves (micro-expansion), which carry Cherenkov radiation and neutrinos, and which also entangle particles (atomic nuclei) at that level. According to fiber-optic measurements, c(f(n)) for these faint pilot waves would have to be 2.0799896 × 10^-49 m/s in order to overcome gravity and entangle particles at that range. So how is QE possible? We will get to that later as well. The pilot wave (n) on our cosmic scale should be accurate enough to see how the pilot wave affects QED. That should be enough data, when combined with electron holography and laser interferometer readings on thermodynamic fluctuations in the quantum foam between point-a and point-b cubic volumes in the spacetime foam, to predict the outcome of quantum-controlled, wave-piloted chain reactions between entangled particles, enough so that one could send messages back and forth this way, which makes modern satellite communication look like a messenger pigeon by comparison.

The Cosmology

Let's talk about this thing. This is a primordial cloud of gas and cosmic dust. It's heavy in most places, tremendously so. Everything is so compact that it's causing interference patterns in photons, enough so that they travel slower. Hopefully that's the result of QED interference, because if not, it would mean light has mass. There's no proof that the universe was ever denser than this: no proof of zero time, no proof of a big bang. For almost a century it's been well documented that there exists mass beyond the CMB. Now there's more evidence than ever: cosmic bruising, dark flow, etc. That matter would now be over 600 billion light years away. Some of the missing baryonic matter from the missing-baryon problem might still be missing: "The initial measurements still do not account for all the ordinary matter, and some believe the remaining portion could be made up by exotic unobserved objects such as black holes or dark stars.
Cosmologists are also still yet to discover the nature of dark matter, which makes up even more of the universe." This could be wayward extra-cosmic galaxies from the source of the extra-cosmic GWs traveling into our cosmic domain, or at least a gravitational domino effect in the gravitational chain-link of galaxies tugging at us (modern dark flow), initiating the first phase of the big crunch; so the expansion of the CMB could be the result of GWs from beyond the known universe. Gravity is not a static field; Newtonian expansion shows that frame dragging is a constant. GWs propagate at the speed of light (demonstrated by LIGO in 2017), so GW expansion (given it's the same as the current rate of expansion) involves the addition of luminal velocities for scale relativity: there could be superluminal GWs! Consider for a moment that if adjacent bodies are in a later state of expansion than the fully expanded CMB is now, then just as the current speed of light is faster than it was 13 billion years ago, the speed of GWs propagating from those ultra-low-density, ludicrously wide bodies could be faster than anything you could imagine, due to scale relativity; time becomes triply relative, quadruply relative, ad infinitum, to us. The fastest GWs have traveled the farthest to get here and have therefore lost the most strength. This gravitation doesn't have to be able to overcome mass to cause the expansion of the universe. This is because of the holographic principle, but we'll get to that. Extra-cosmic gravitation would be unreadable, because we're closer to the stronger sources and further from the weaker sources, yet the thing stretching the vacuum of space out is the amount by which the stronger gravity is winning the tug of war against the weaker gravity. Picture that.
From this picture we can derive equations to define the effects that this extra-cosmic gravitation will have on our cosmos. The stronger GWs win the tug of war over the weaker GWs, so they are the 68% of the missing mass in the universe. The weaker GWs are losing, but they're still assisting the mass in our universe, so they must be the invisible 27% of the missing mass. The other 5% is visible. Picture what's visible as being smack in the middle of multiple sources of gravitation; we're closest to the strongest sources. Say one source is on the left and the other is on the right, and we're not perfectly in the middle.

c = (length of strong GWs)/(600 billion ly in meters); length of strong GWs = 1.7018019e+36 meters
Length of GWs = (length of strong GWs)/0.05 = 3.4036038e+37 meters
Length of left weak GWs = (length of GWs) × 0.27 = 9.1897303e+36 meters
Length of right weak GWs = (length of GWs) × 0.68 = 2.3144506e+37 meters
Velocity of left weak GWs = (length of left weak GWs)/(13.8 billion years in seconds) = 2.2412883e+19 m/s
Velocity of right weak GWs = (length of right weak GWs)/(13.8 billion years in seconds) = 5.6447261e+19 m/s
((Velocity of left weak GWs) + (Velocity of right weak GWs))/2 = rate of expansion in a vacuum over the length of GWs = 3.9430072e+19 m/s

The big crunch will be complete when our "universe" expands to be the same size as the source of the left weak GWs, which I calculated based on the rate of expansion: the length of the left weak GWs is 1,618,879,316.67 times the current universe of 600 billion light years, meaning you'd have to add another 1.5316217e+25 meters to our universe for it to be the same size as the source of the left weak GWs. At a rate of 3.9430072e+19 m/s, that would take 1.5316178e+25 seconds, or 4.8561122e+17 years.
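The length bookkeeping above can be sketched in Python. Assumptions to flag: the light-year conversion of 9.4607e+15 m is mine, chosen to match the quoted 1.7018019e+36 m; the two velocity lines depend on a seconds-per-13.8-billion-years figure the post doesn't pin down (the quoted velocities imply a divisor closer to 13.0 billion Julian years), so the sketch keeps only the lengths, which do reproduce.

```python
C = 299_792_458     # m/s
LY = 9.4607e15      # meters per light year (assumed)

# Length of the strong GWs: c times 600 billion light years (in meters)
strong = C * 600e9 * LY     # ~1.7018e+36 m

# Total GW length, then the 27% / 68% split
total = strong / 0.05       # ~3.4036e+37 m
left_weak = total * 0.27    # ~9.1897e+36 m  (the invisible 27%)
right_weak = total * 0.68   # ~2.3145e+37 m  (the 68%)
```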
Plug that, added to the current age of the universe in years, into this denominator, with the length of the GWs in light years, to get 1.0791388e+30/4.8561123e+17 years = 2.2222279e+12 light years per year, or 6.6620716e+20 m/s, for the speed of light in the superverse (super c). Now, recall: we can use this as a way to determine the validity of super c, because c/(super c) is literally saying the same thing as the pace at which expansion overtakes the speed of light. 299,792,458/6.6620716e+20 = 4.4999885e-13 m/s ≈ c(f(n)) = 2.8614552e-13 m/s. c dilates by practically the same amount.

This will be used to calculate the size at which atomic structures begin to form in the superverse, by determining the new lp via how time dilation affects tp. 2.8614552e-13/5.39e-44 = super tp = 5.3088223e+30 seconds. c covers a distance of 1.5915449e+39 meters in super tp. There are approximately 6.81 × 10^24 Planck lengths in the length of a hydrogen atom, so 9.8357475e+63 meters, or 1.0396097e+48 light years, is the size of a hydrogen atom in the superverse. Let's see the size of an atom in a microverse by dividing the size of an atom by the dividend of a superverse atom: 9.8357475e+63/1.1e-10 m gives you 8.9415886e+73 as your denominator. So 1.1e-10/8.9415886e+73 = 1.2302065e-84 meters for your microverse atom, which is 1.3005947e+49 times smaller than a Planck length. Let's use an actual calculation to confirm: 2.130e+108 atoms for every 13.6 billion light years. 1.0396097e+48/13,600,000,000 = 7.644189e+37. 2.130e+108 × ((4/3 × π × 7.644189e+37)^3) = 6.992616e+223 atoms. 1.1e-10/1.2302065e-84 = 8.9415883e+73. (4/3 × π × (8.9415883e+73/2))^3 = 6.567799e+222 atoms. We're off by a little over 10 atoms out of a googol and nearly a quarter; I'm going to say this checks out. :| The speed of light in a microverse is equal to the speed of light in a superverse.
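The super-c and super-tp steps above can also be replayed (note that post 2 later corrects part of this calculation; the sketch reproduces only the figures as given here). The Julian-year (3.15576e+7 s) and light-year (9.4607e+15 m) conversions are my assumptions, chosen because they reproduce the quoted values.

```python
C = 299_792_458
LY = 9.4607e15          # meters per light year (assumed)
YEAR = 3.15576e7        # seconds per Julian year (assumed)
T_PLANCK = 5.39e-44     # Planck time in seconds, as used in the post

# Super c: GW length in light years over the crunch time in years, converted to m/s
super_c = 1.0791388e30 / 4.8561123e17 * LY / YEAR   # ~6.662e+20 m/s
ratio = C / super_c                                  # ~4.500e-13, compared with c(f(n)) ~2.86e-13

# Super tp from the large-scale c(f(n)), and c's distance per super tp
super_tp = 2.8614552e-13 / T_PLANCK                  # ~5.309e+30 s
dist_per_super_tp = C * super_tp                     # ~1.5915e+39 m
```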
You can't use super tp to simply divide for units that measure velocity the way I just did for units that measure size; this is because of the effects of time dilation. A particle of energy is like a universe of matter, with a relatively equivalent amount of mass at adjusted scale. Fall anywhere in space, no matter how seemingly void, and you will land on matter if you're small enough: empty space ought not be really empty. We have two good reasons to think so: first, electromagnetic signals behave undoubtedly as waves; since they propagate even through intergalactic space, there must be some thing there (everywhere) in which they do wave. Second, quantum theory predicts that vacuum has physical effects, such as the Casimir effect, which is now experimentally confirmed [1].

Gerard 't Hooft, another proponent of ether theory: "Einstein had difficulties with the relativistic invariance of quantum mechanics ('does the spooky information transmitted by these particles go faster than light?'). These, however, are now seen as technical difficulties that have been resolved. It may be considered part of Copenhagen's Doctrine that the transmission of information over a distance can only take place if we can identify operators A at space-time point x1 and operators B at space-time point x2 that do not commute: [A, B] ≠ 0. We now understand that, in elementary particle theory, all space-like separated observables mutually commute, which precludes any signalling faster than light. It is a built-in feature of the Standard Model, to which it actually owes much of its success. So, with the technical difficulties out of the way, we are left with the more essential Einsteinian objections against the Copenhagen doctrine for quantum mechanics: it is a probabilistic theory that does not tell us what actually is going on. It is sometimes even suggested that we have to put our 'classical' sense of logic on hold.
Others deny that: 'Keep remembering what you should never ask, while reshaping your sense of logic, and everything will be fine.' According to the present author, the Einstein-Bohr debate is not over. A theory must be found that does not force us to redefine any aspect of classical, logical reasoning. What Einstein and Bohr did seem to agree about is the importance of the role of an observer. Indeed, this was the important lesson learned in the 20th century: if something cannot be observed, it may not be a well-defined concept; it may even not exist at all. We have to limit ourselves to observable features of a theory. It is an important ingredient of our present work that we propose to part from this doctrine, at least to some extent: things that are not directly observable may still exist and as such play a decisive role in the observable properties of an object. They may also help us to construct realistic models of the world. Indeed, there are big problems with the dictum that everything we talk about must be observable. While observing microscopic objects, an observer may disturb them, even in a classical theory; moreover, in gravity theories, observers may carry gravitational fields that disturb the system they are looking at, so we cannot afford to make an observer infinitely heavy (carrying large bags full of 'data', whose sheer weight gravitationally disturbs the environment), but also not infinitely light (light particles do not transmit large amounts of data at all), while, if the mass of an observer would be 'somewhere in between' ..."

More evidence: The situation is somewhat different when we consider gravity and promote the Lorentz-violating tensors to dynamical objects. For example, in an aether theory, where Lorentz violation is described by a timelike four-vector, the four-vector can twist in such a way that local superluminal propagation can lead to energy-momentum flowing around closed paths [206].
However, even classical general relativity admits solutions with closed timelike curves, so it is not clear that the situation is any worse with Lorentz violation. Furthermore, note that in models where Lorentz violation is given by coupling matter fields to a non-zero, timelike gradient of a scalar field, the scalar field also acts as a time function on the spacetime. In such a case, the spacetime must be stably causal (cf. [272]) and there are no closed timelike curves. This property also holds in Lorentz-violating models with vectors if the vector in a particular solution can be written as a non-vanishing gradient of a scalar. Finally, we mention that in fact many approaches to quantum gravity actually predict a failure of causality based on a background metric [121], as in quantum gravity the notion of a spacetime event is not necessarily well-defined [239]. A concrete realization of this possibility is provided in Bose-Einstein condensate analogs of black holes [40]. Here the low-energy phonon excitations obey Lorentz invariance and microcausality [270]. However, as one approaches a certain length scale (the healing length of the condensate), the background metric description breaks down and the low-energy notion of microcausality no longer holds.

----

In the Bohmian view, nonlocality is even more conspicuous. The trajectory of any one particle depends on what all the other particles described by the same wave function are doing. And, critically, the wave function has no geographic limits; it might, in principle, span the entire universe. Which means that the universe is weirdly interdependent, even across vast stretches of space.

----

The hole is quantum-mechanically unstable: it has no bound states. Wormhole wave functions must eventually leak to large radii. This suggests that stability considerations along these lines may place strong constraints on the nature and even the existence of spacetime foam.
----

In invariant set theory, the form of the Bell inequality whose violation would be inconsistent with realism and local causality is undefined, and the form of the inequality that is violated experimentally is not even gp-approximately close to the form needed to rule out local realism (54) [21]. A key element in demonstrating this result derives from the fact that experimenters cannot in principle shield their apparatuses from the uncontrollable, ubiquitous gravitational waves that fill space-time.

----

A finite non-classical framework for physical theory is described which challenges the conclusion that the Bell inequality has been shown to have been violated experimentally, even approximately. This framework postulates the universe as a deterministic, locally causal system evolving on a measure-zero fractal-like geometry IU in cosmological state space. Consistent with the assumed primacy of IU, and p-adic number theory, a non-Euclidean (and hence non-classical) metric gp is defined on cosmological state space, where p is a large but finite Pythagorean prime. Using number-theoretic properties of spherical triangles, the inequalities violated experimentally are shown to be gp-distant from the CHSH inequality, whose violation would rule out local realism. This result fails in the singular limit p = ∞, at which gp is Euclidean. Broader implications are discussed.

----

This optical pumping scenario is implicitly based on the erroneous quantum-mechanical "myth" that quantum "jumps" are instantaneous. In reality, transitions between atomic levels take very, very long times, about 10 million times longer than the oscillating period of the electromagnetic radiation that drives the excitation.

The Microverse

A microverse works on the same principle as a level 1 multiverse: matter can only arrange itself in so many different ways, so eventually everything assumes the same form again. Recall: the answer seems to be the holographic principle.
These protons are really just giant black holes in the microverse that evaporate and respawn 10 billion times per second. The electromagnetic polar jets of radiation of the primordial CMB would be the polarity of the superverse electron. The muon is just a giant electron, whereas the neutron is an ungodly monster of a pulsar in the microverse. Nearby is the proton: to an observer within the microverse, it's a quasar unlike anything you could imagine in power-scale, over 2 thousand billion billion billion billion times larger than that behemoth within the core of the IC 1101 galaxy (https://arxiv.org/abs/1707.02277), which is by far the largest SMBH we know about at 4e+10 to 10e+10 solar masses, feeding this Kronos of a pulsar. Well, normally the pulsar feeds the quasar, since the BH possesses a greater density of "mass", but in most cases the proton is positively charged, as opposed to the anti-proton. Recall: the BH consumes so much "mass" from its surrounding quasar material that new material is no longer attracted to the proton/micro-black-hole so much as it is attracted to the neutron/micro-pulsar. Now, however, please note that all neutron-proton nuclei begin their life cycles with the proton actually being a negatively charged anti-proton in this AdS/Conformal GW theory, but their life cycles end with it being a normal, positively charged proton, feeding the neutron with matter emanating from the single down quark of the proton to the single up quark of the neutron, before the cycle repeats in reverse, with the neutron feeding the proton. This means that, for a neutron, its down quarks are a holographic compilation of magnetic dipole moments and the up quark is a hologram composed of a collection of briefer magnetic monopole moments, and vice versa for protons. Virtual particles aren't really what we think they are.
Between negatively charged states, micro-expansion takes over, because positively charged protons are dispersing thermal picoscopic gasses fleeting from evaporated black holes. https://i.imgur.com/YZFSQIy.jpg https://i.imgur.com/ZWp0Ehz.jpg

This is much more versatile than QM; it works in explaining virtually any quantum effect. For instance, let's use the quantum Venn diagram paradox: https://www.youtube.com/watch?v=zcqZHYo7ONs&t=25s https://i.imgur.com/VxO1oaS.jpg The non-virtual photons adopt new polarities as they expand (i.e. wave) through the vacuum mediums of the quantum sub-foam microverse. More polarizing filters = a greater variety of polarities.

Quark-gluon plasma is the absolute densest state matter can take. We see it in the cores of neutron stars, in the discs of quasars as matter is folded upon itself by compressing spacetime (gravity/mass/dark matter) around macro black holes, and in the cosmic microwave background radiation. But in this hypothesis it's more like a black star in a fully classical, not just semiclassical, framework of gravity. Any denser, and matter is just a macro black hole, as there's no space between micro black holes. It's composed of micro quasars with micro black holes at their cores, barely held apart by micro-expansion. Unlike vacuum radiation and the atomic world, these microverses are non-anthropic (no stellar eras), because less entropy equates to less complexity. Quark-gluon plasma is the only state of matter composed entirely of microverses that are exclusively the same as itself: atoms and vacuum radiation will have microverses with atoms, quark-gluon plasma and vacuum radiation within them, while quark-gluon plasma is only composed of microverses that are entirely filled with quark-gluon plasma. Consider the mystery of primordial SMBH formation solved, as well as the reason why the minority of giant red stars that should collapse into black holes instead collapse into magnetars.
Key Terms

Local realism
Quantum observer/entanglement/eraser/Venn diagram paradox
Anti/de Sitter space/AdS/CFT duality
Fractal geometry/scale relativity/special relativity beyond the speed of light
The trans-Planckian problem
Dark Flow/Cosmic Bruising/CMB Primer