
Non-unifying Geometrized Newton-Cartan Gravity


inSe


Non-unifying Geometrized Newton-Cartan Gravity

 

The Fundamental Interactions
 

I hypothesize that the quantum eraser is the only fundamental interaction. This interaction is between two boundless and inverted branes that are perpendicular to one another.

 

Physicists seem to be thinking of eigenvalues when I use the word "branes". This is geometrized Newton-Cartan gravity; vector calculus does not apply here. This is classical physics, not quantum mechanics. When I say brane I mean a conceivable geometric structure, three dimensions in the literal sense, not the metaphysics of some incomprehensible angle that forms a tesseract. View time as the second & a half dimension. A third dimension has time dilated to a standstill, but this fractal counterpart has time contracted as a dynamicalized version of that static temporal state. Time in this theory isn’t being thought of as a third dimension moving through the 4th dimension.

 

William James Sidis, in The Animate and the Inanimate, became the second savant to predict the existence of black holes, after Einstein. His black hole was different from Einstein's: it was a shard of a reverse universe, existing perpendicular to our own.

That is the black hole in this theory, but this theory goes far more into speculative depth…

 

(The reverse dimensionality is simply a matter of perspective: the reverse areas of the brane in my thesis represent volume mediums with negative densities, and the only points in our universe that are truly invisible via microwave spectroscopy are black holes.

 

Whatever is visible in the microwave spectrum would be fundamentally composed of areas in the brane that represent positive density mediums. Six dimensions in this sense do not represent new angles that are beyond perception; they are just three positive plus three negative dimensional volumes. Contact between positive & negative density mediums leads to zero, an equal nullification of both areas in an inversive brane.

 

This is the essence of gravitation, the original pull that begins the infinite pendulum of cosmic evolution. It is also why the white holes wrap around black holes like a hollow sphere (and in AdS, the inside-out of this brane, the black holes form a hollow sphere around the white holes & black is the new white), until the black hole dissolves the hollow spherical quasar around it & vice versa. *This process is why local increases in thermal density will eventually increase entropy locally as well.

 

[*That's how energy conservation is temporarily broken; that's why every antiproton becomes a proton & why there seems to be more matter than antimatter. It takes longer for fleeting energy to aggregate into matter than for matter to break apart into energy*])

               

…which leads to an equation that yields Einstein's tensor when accounting for frame dragging, because v(g) will equal c.

 

Yes, yes, "retrocausality" is a result of the "quantum eraser", but in this framework it isn't a "quantum mechanical" eraser, it's a fracturing & re-organizing of the third dimension in two inside out branes on infinitely, yet paradoxically finite, scales (because at some point an approximately measureable part of reality does get erased). If you want to out-think a computer, than you must be able to use paradoxes. Computers can't process a paradox, a human mind can.
 

Cantor & Zeno's infinitesimals are like mnemonic devices that I’ve used to see a larger mathematical picture here. A basis for concepts like scale relativity as a process in which mechanical structures like black holes can be found at every point in space.

 

λmax is the maximum amount of entropy that can occur in a given medium volume. What my thesis says is simply that the perfect three dimensions of the brane are chipped away when that maximum amount of available entropy is lower - this is time contraction. The reverse of it is time dilation. If time goes, space goes. If space goes, everything surrounding that space gets closer together because the space separating those spaces no longer exists, ergo gravity.

 

Electromagnetism and the strong & weak nuclear forces arise from the resulting fracture pattern in that part of the brane. If the brane gets fractured away at points located all around the observer in every which way, everything appears to be moving outward. Not so; gaps in reality are just being filled, creating the illusion of expansion.

 

The speed of light in this theory will be relative, not constant, because my f(n) equation will yield a fraction, meaning that it will employ fractal geometry. As the addition of luminal velocities can become superluminal only in two dimensional spaces, it can also be superluminal within any dimension that's less than 3.

There's somewhere between 2 & 3 real physical dimensions at any given point in space and time, so:

~|2x|+/-~|2x|=n; 6>n>4; & 2>x>1

f(n)=(λmax)•((4π/3)r^3)

c=c•x where f(x)=6/(n/(4π/3)^(1/3)) where n>6

c=c•x where f(x)=4/(n/(4π/3)^(1/3)) where 4>n

n=the speed of gravitational wave propagation

 

We're talking about potential interactions that are incredibly fast (involving a velocity that carries attractive or repulsive forces) & small (smaller than the spacetime foam). These events can barely be said to have even occurred in the first place. In this simple loophole we circumvent Bell’s inequality and we allow dark forces & unification to be made classical (non-QCD). The microscopic pilot wave is an aggregate of infinitesimal quantum eraser phenomena; gravity itself is a collection of these pilot waves.

Ex.)

How fast is the speed of light in a dense medium such as the heart of the sun?

C at the center of the sun (which is 160 billion times denser than the surface) is 0.00551512557 m/s (covering the sun's radius in 4,000 years spending the vast majority of that time in the core).

My equation gives the average speed of light throughout the entire sun in m/s:

I found lambda max for the sun online:
http://studylib.net/doc/18286845/hw-solution

Link says 504 nm, or 5.04 x 10^-7 meters

f(n)=(5.04 x 10^-7)((4π/3)(695,700,000*)^3)

*Radius of the sun in meters

f(n)=7.1086177 x 10^20

c(f(n))=c•x where f(x)=6/(n/(4π/3)^(1/3)) where n>6

c(f(n))=299,792,458(6/(7.1086177e+20/(4π/3))^(1/3))

c(f(n))=325 m/s
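If you want to check this arithmetic yourself, here is a quick Python sketch that retraces the solar example above. The generic f(x) lines are ambiguous about parenthesization, so this follows the substitution actually made in the worked line (the cube root is taken of n divided by 4π/3); the function and variable names are mine, not part of the theory.

import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def f_n(lambda_max, radius):
    # f(n) = lambda_max * (4*pi/3) * r^3, as substituted in the solar example
    return lambda_max * (4 * math.pi / 3) * radius ** 3

def c_large_scale(n):
    # c(f(n)) = c * 6 / (n / (4*pi/3))^(1/3), the n > 6 branch as applied above
    return C * 6 / (n / (4 * math.pi / 3)) ** (1 / 3)

n_sun = f_n(5.04e-7, 6.957e8)   # 504 nm and the solar radius in meters
print(n_sun)                    # ~7.11e20, matching the value quoted above
print(c_large_scale(n_sun))     # ~325 m/s, matching the value quoted above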

The speed of light 13.5 billion years ago was around a million times slower due to ions, as evidenced by a cosmic event horizon that was only a few thousand light years across, as opposed to the current one, which is 13 billion light years.

The entire universe was about as dense as the sun, so the speed of light during the CMB and my measurement of the average speed of light from the inner layers of the sun to the outer layers are about the same.

For the average velocity to be in the hundreds of meters per second, with a starting velocity in the hundredths of meters per second, the speed of light would have to increase by 4 orders of magnitude when it escapes the inner layer of a star, and then increase by another 6 orders of magnitude, back to normal speed, as it escapes the outer layer of the star.

Regarding the universe's current density: on the very large scale, the illusion of gravity c(f(n)) is a few percent faster, because the volume is massive yet not very dense at all; lambda max takes a large value on that scale, with all that free redshifted entropy. This is why expansion overcomes light on that scale.

Ex)

λmax of the background radiation is 1.07 mm; a radius of superluminal galactic expansion is something like the distance between the Milky Way & Andromeda, 2.5 million light years.

f(n)=(0.00107)(((4pi/3)(2.3651826181452 x 10^22))^3)

f(n)=1.0405037 x 10^66

f(n)>6,

c(f(n))=(299,792,458)(6/((1.0405037 x 10^66)/(4pi/3))^(1/3))

c(f(n))=2.8614552 x 10^-13 m/s

This will be used as mathematical evidence for dark energy as the result of superluminal gravity waves from beyond the known universe later.
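Here is the same kind of sketch for the galactic-scale example above. Note that in this example the cube is applied to the whole product (4π/3)·r rather than to r alone as in the solar example; the sketch follows the substitution as written here, so it is only a check of this post's arithmetic, not a general formula.

import math

C = 299_792_458.0  # m/s

lambda_max = 0.00107             # ~1.07 mm CMB peak wavelength, in meters
r = 2.3651826181452e22           # ~2.5 million light years, in meters
n = lambda_max * ((4 * math.pi / 3) * r) ** 3   # note: (4*pi/3)*r is cubed here
print(n)                         # ~1.0405e66, matching the value quoted above

c_eff = C * 6 / (n / (4 * math.pi / 3)) ** (1 / 3)
print(c_eff)                     # ~2.8615e-13 m/s, matching the value quoted above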

On the very small scale, the width of a hydrogen atom within the pseudo-energies of the sinusoidal waveform of a photon in the virtual blueshift of Earth's atmosphere, lambda max is equally minuscule, so faster than light. We see this phenomenon in neutrinos, Cherenkov radiation & entangled particles.

Ex)

λmax of chloranil radical anion = 450 nm. Elements such as these would have a radius of about 79 picometers.

f(n)=(4.5 x 10^-7)((4π/3)(7.9 x 10^-11)^3)

f(n)=9.2935662 x 10^-37

Recall;

c(f(n))=c•x, f(x)=4/(n/(4π/3)^(1/3)) where 4>n

c(f(n))=299,792,458(4/(9.2935662e-37/(4π/3)^(1/3)))

c(f(n))=2.0799896 x 10^45 m/s
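And a sketch for the small-scale (chloranil) example above. Taken literally, the n < 4 branch as written divides n by the cube root of 4π/3, which is the substitution that reproduces the quoted number, so that is what this sketch does; again, this is only a check of the arithmetic as it appears in the post.

import math

C = 299_792_458.0  # m/s

lambda_max = 4.5e-7   # 450 nm, in meters
r = 7.9e-11           # ~79 picometers, in meters
n = lambda_max * (4 * math.pi / 3) * r ** 3
print(n)              # ~9.2936e-37, matching the value quoted above

# n < 4 branch as written: n is divided by (4*pi/3)^(1/3) before dividing into 4
c_eff = C * 4 / (n / (4 * math.pi / 3) ** (1 / 3))
print(c_eff)          # ~2.08e45 m/s, matching the value quoted above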

So it would require very faint gravity to overcome the speed of light within that range at that low level of thermodynamic conductivity. This is where we come into pilot g waves (micro expansion), which carry Cherenkov radiation and neutrinos, & which also entangle particles (atomic nuclei) at that level. According to fiber optic measurements, c(f(n)) for these faint pilot waves would have to be 2.0799896 x 10^-49 m/s in order to overcome gravity & entangle particles at that range. So how is QE possible? It's the atomic oscillation frequency, the collection of particles phasing in & out of virtual states to create a hologram that acts like solid matter. In the virtual states, expansion occurs, & everything exists in a virtual state for the longest duration (longer than when it's "there"). In virtual states, you're left with a collection of micro vacuums in which this pilot wave of the components of gravity (gravitons) can surf the expansion of those micro vacuums superluminally, linking everything together in one big wave function (a pilot wave likened to the as-of-yet unproven Higgs field). This will be covered in depth later.

 

The Cosmology
 

Let's talk about the oldest observable light:

http://sci.esa.int/science-e-media/img/45/i_screenimage_18245.jpg

This was a primordial cloud of gas & cosmic dust. It was heavy in most places, tremendously so. Everything was so compact that it was causing interference patterns in photons, enough so that they traveled slower. Hopefully it was the result of ion interference, because if not, that would mean light has mass. ;-)

There's no proof that the universe was ever denser than it was then. There’s no physical proof of zero time, and there’s no physical proof of a big bang. Since the early 1990s it's been well documented that there exists mass beyond the CMB: dark flow. Now there's more evidence than ever: cosmic bruising, the Boötes Void, etc. The source of these mass disturbances in the cosmos may be more of the universe from beyond the cosmic event horizon emitting Unruh radiation in the form of gravitational waves; that part of the universe would now be over 600 billion light years away.

 
Gravity is not a static field; Newtonian expansion shows that frame dragging is a constant. GWs propagate at the speed of light (demonstrated by LIGO in 2017), so GW expansion (given it's the same as the current rate of expansion) involves the addition of luminal velocities for scale relativity: there could be superluminal GWs! Consider for a moment that if adjacent bodies are in a later state of expansion than the fully expanded CMB is now, then just as the current speed of light is faster than it was 13 billion years ago, the speed of GWs propagating from those ultra-low-density, ludicrously wide bodies could be faster than anything you could imagine due to scale relativity; time becomes triply relative, quadruply relative, ad infinitum, to us.

The fastest GWs have traveled the farthest to get here and have therefore lost the most strength. This gravitation doesn't have to be able to overcome mass to cause the expansion of the universe. This is because of the holographic principle, but we'll get to that later.

Extra-cosmic gravitation would be unobservable, because we're closer to the stronger sources, & further from the weaker sources, yet the thing stretching the vacuum of space out is the amount by which the stronger gravity is winning the tug of war against the weaker gravity. Picturing that is like picturing a frame-dragging observer himself being frame-dragged from a 360 degree angle; it’s like three separate Rindler effects occurring simultaneously.

From this picture we can derive equations in order to define the effects that this extra-cosmic gravitation will have on our cosmos:

The stronger GWs win the tug of war over the weaker GWs, so we can attribute 68% of the missing mass to their effects as they travel 27% of the length of total GWs involved in expansion, losing less strength as they get here, arriving at the same time as the GWs to which we attribute 27% of the missing mass, pulling from the opposite direction, having traveled 68% of the length of total GWs involved in expansion.

Recall earlier that the velocity of light dilates by a factor of 299792458/2.8614552e-13 = 1.0476923e+21 over 2.5 million light years. Therefore, the speed of light is only viable over a distance of 2500000(9.461e+15)/1.0476923e+21=22.5758078016 meters in a near perfect vacuum (lambda max of the vacuum).

Length of strong GWs (where v(g)=c) = 22.5758078016 x 299792458 = 6768056912.18 meters

Total Length of GWs = length of the strong GWs/.05 = 135361138244 meters

Length of left weak GWs = length of GWs x .27 = 36547507325.9 meters

length of right weak GWs = length of GWs x .68 = 92045574005.9 meters

Velocity of left weak gravitational waves = length of left weak GWs/length of strong GWs times the speed of light = 1618879273.21 m/s

Velocity of right weak gravitational waves = length of right weak GWs/length of strong GWs times the speed of light = 4077177428.81 m/s

Velocity of right weak GWs/velocity of left weak GWs = rate of expansion in a vacuum over total length of GWs = 2.51851851851 m/s
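For readers following along, here is a short Python sketch of the tug-of-war arithmetic above, step for step; the 0.05, 0.27 and 0.68 fractions are taken directly from this post and are assumptions of the model, not measured values.

C = 299_792_458.0  # m/s

# distance over which c is "viable" in a near-perfect vacuum, per the post
viable = 2_500_000 * 9.461e15 / (C / 2.8614552e-13)   # ~22.5758 m

strong = viable * C          # length of strong GWs, ~6.768e9 m
total = strong / 0.05        # total length of GWs, ~1.3536e11 m
left_weak = total * 0.27     # ~3.655e10 m
right_weak = total * 0.68    # ~9.205e10 m

v_left = left_weak / strong * C     # ~1.619e9 m/s
v_right = right_weak / strong * C   # ~4.077e9 m/s
print(v_right / v_left)             # ~2.5185, the "rate of expansion" quoted above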

Now the speed of light over the total length of GWs is found in the same way we found the speed of light over the length of 2.5 million light years:

λmax of background radiation is 1.07 mm, the radius for total length of GWs = 135361138244/2 meters

f(n)=(0.00107)(((4pi/3)(67680569122))^3)

f(n)=2.4380444e+31

f(n)>6,

c(f(n))=(299,792,458)(6/((2.4380444e+31)/(4pi/3))^(1/3))

c(f(n))=0.09999714934 m/s

Now we can find the velocity increase of c for every 22.5758078016-meter increase in the length of the GW: the rate of expansion over the total length of GWs (2.51851851851) divided by the speed of light over the total length of GWs (0.09999714934) = +25.1859031496 m/s per 22.5758078016 meters.


Let’s see if that checks out: 2500000(9.461e+15) = 2.36525e+22 meters. 2.36525e+22/22.5758078016 = 1.0476923e+21 increments. 299,792,458 + 25.1859031496(1.0476923e+21) = 2.6387077e+22 m/s, within approximation.

c dilates to 2.8614552e-13 m/s over that same distance: 299,792,458/1.0476923e+21 = 2.8614552e-13.
Okay moving on.

In this theory the universe has no outer boundary limit. So eventually matter arrangements will repeat within larger & smaller structures. Black hole evaporation will be used to find higher & lower cosmic scales using the proton’s oscillation frequency of one billion times per second; the size of a proton is 10^-15 m, and the Schwarzschild radius of its central black hole will give you the rate at which the black hole evaporates.

The Schwarzschild radius is 2.484e-54 meters (just type proton into where it says earth). The rate of evaporation is 8.41e-17 seconds (just type proton into where it says earth). That’s just the vanishing rate of the proton; the oscillation frequency is more about how long it would take for another proton to form plus the time it took to evaporate. Protons form at a rate of 1e-9 - 8.41e-17 = 9.9999992e-10 seconds. Now that’s enough information to finally acquire enough evidence to either confirm or deny my hypothesis.


But protons do not have the λmax of a vacuum; that’s the problem. So for a proton we must use the original equation f(n)=(λmax)•((4π/3)r^3); c=c•x where f(x)=4/(n/(4π/3)^(1/3)) where 4>n, to find the contraction of c with the λmax of a proton ≈ 395 nm. However, in the special case of black holes the equation must be modified.

First of all, it’s 4πr^2 because the quasar within the Schwarzschild radius of the proton is a hollow sphere. Secondly, λmax of the proton’s quasar is the proton’s normal λmax but to the negative power of the proton’s length divided by twice the Schwarzschild radius

f(n)=(3.95e-7^-(1e-15/2(2.484e-54)))((4π)(2.484e-54)^2)=7.753772e-107

c(f(n))=4/(7.753772e-107/(4π))^(1/2) = 1.610306e+54 m/s

So a black hole with the mass of the sun (diameter 1391400000 meters) has a Schwarzschild radius of 2953 meters & will evaporate in 6.61e+74 seconds.

f(n)=(5.04e-7^-1(1.3914e+9/5906)) x ((4π x 2953)^3) = 2.3886249e+25 m/s

c(f(n))=6/(4π(2.3886249e+25)^(1/2))=9.7693891e-14 m/s

1.610306e+54/299,792,458/9.7693891e-14=5.4981971e+58

5.4981971e+58/8.41e-17=6.5376898e+74 seconds 
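Here is a sketch of that last numeric chain, for anyone who wants to rerun it. One caveat: the quoted value 7.753772e-107 corresponds to 4πr² alone, so the λmax exponent prefactor written in the modified formula is not applied below; the sketch simply retraces the numbers as they are stated in the post.

import math

C = 299_792_458.0  # m/s

# proton's central black hole, values as stated above
rs_proton = 2.484e-54                              # Schwarzschild radius, m
n_proton = 4 * math.pi * rs_proton ** 2            # ~7.754e-107, as quoted
x_proton = 4 / (n_proton / (4 * math.pi)) ** 0.5   # ~1.610e54

# solar-mass black hole, values as stated above
rs_sun = 2953.0                                    # Schwarzschild radius, m
n_sun = (1 / 5.04e-7) * (1.3914e9 / (2 * rs_sun)) * (4 * math.pi * rs_sun) ** 3
x_sun = 6 / (4 * math.pi * n_sun ** 0.5)           # ~9.769e-14

ratio = x_proton / C / x_sun                       # ~5.498e58
print(ratio / 8.41e-17)                            # ~6.54e74 s, close to the quoted 6.61e74 s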

Ladies & gentlemen we have ourselves a theory.

Further investigations

 

Assuming that the electron/positron is a nanoscopic primordial CMB cloud (& it acts like one), we use its oscillation frequency to find the moment of the big crunch in our universe (which is basically caused by overlapping radiation from dissolving galaxies being sprayed by the matter jets (the magnetic dipole moments) or the outflows of the accretion disk (magnetic monopole moments) of a superverse proton), by using the dilation of c equation to find the adjustment to our relative time-frame for that frequency:

The electron most likely has a radius of 10^-12 m, & λmax of about 4e-7 m (visible spectrum is where electrons like to hide).

f(n)=(4e-7)(4π/3(1e-12)^3)=1.6755161e-42

c(f(n))=4/(1.6755161e-42/(12π^(1/3)))=4.1957466e+43 m/s

The CMB had a radius of 6.9 billion light years, or 6.52809e+28 meters, & λmax of about 1,000 nm.

f(n)=(1e-6)(4π/3(6.52809e+28)^3)=1.1653249e+81

c(f(n))=6/(12π(1.1653249e+81)^(1/3))=1.5124155e-28 m/s

4.1957466e+43/1.5124155e-28=2.7742023e+71 seconds

Or 8.7958221e+60 years. The few SMBHs caught in the big crunch will only be less than half-evaporated, so this can't be right! Grrr
So, we use the time contraction of c equation to find a much larger Planck length to see how many electrons fit into a super electron. This will give us a new size for the CMB, so that this process can be redone for a more accurate date for the big crunch.

Okay, there's 6.52809e+28 meters in the radius of the CMB, using (4π/3(1e-12)^3), you can fit 1.165325e+123 electrons into the electrons of the next cosmic scale. Let's see if my math confirms that number using super lp:

2.7742023e+71/299,792,458/6.58e-15=1.4063439e+77 m/s. Planck length over planck time equals 296846011.132 m/s.

 

1.4063439e+77/296846011.132=4.737621e+68 m/s as your new planck length over planck time. 296846011.132 x 5.39e-44 equals lp, so super lp equals

 

1.4063439e+77 x 5.39e-44 = 7.5801936e+33 meters. 7.5801936e+33/4.737621e+68=1.6e-35, which is the planck length (lp). There's 3.125e+22 planck lengths in the length of an electron.
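A sketch of the super-Planck arithmetic above, retracing the numbers as given; the 6.58e-15 divisor and the rounded values lp = 1.6e-35 m and tp = 5.39e-44 s are used exactly as they appear in this post.

LP = 1.6e-35    # rounded Planck length, m, as used above
TP = 5.39e-44   # rounded Planck time, s, as used above
C = 299_792_458.0

big_crunch_seconds = 2.7742023e71           # from the electron/CMB step above
scaled = big_crunch_seconds / C / 6.58e-15  # ~1.4063e77, as quoted above

print(LP / TP)                  # ~2.9685e8 m/s, Planck length over Planck time
print(scaled / (LP / TP))       # ~4.7376e68, the "new Planck length over Planck time"

super_lp = scaled * TP          # ~7.58e33 m, the super Planck length
print(super_lp)
print(super_lp / (scaled / (LP / TP)))   # recovers ~1.6e-35 m, the ordinary lp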

 

7.5801936e+33 x 3.125e+22 = 2.3688105e+56 meters for the superverse electron. Does not confirm, the CMB should be 2.3688105e+56/2=1.1844052e+56, 1.1844052e+56/6.52809e+28=1.8143212e+27 times larger than what we can see. We can't see so much of the CMB for the same reason we can't see forever into the past, it's from a combination of redshift & the fact that the ion interference makes light fade into oblivion eons before it gets near us. For our next dilation of c equation:

f(n)=(1e-6)(4π/3(1.1844052e+56)^3)=6.959684e+162 cubic meters

c(f(n))=6/(12π(6.959684e+162)^(1/3))=8.3359856e-56 m/s

4.1957466e+43/8.3359856e-56=5.033294e+98 seconds, which is 1.5958446e+88 years. That fits the evaporation rate for most supermassive black holes (<100 million solar masses). But the few that are the largest in the universe, such as this one, may grow to become superverse protons during a second or third cosmic life cycle. In the microverse, proton formation could bind cosmic rays, allowing them to exist in the long treks through the expanding vacuum of space. This also explains dark matter from a microverse's perspective. Exceptionally large SMBHs that were too large to evaporate in the previous cosmic life cycle may be the origin of this primordial SMBH. It could also explain this galaxy, which seems to lack a central black hole as well as dark matter. I'm very aware of the evaporation rate; it was crucial in moving my hypothesis to theory, but in cyclic models, if one survives a big crunch it will have already undergone quite a bit of evaporation. Enough so to bind a galaxy with low mass per unit volume in its dying throes. I believe that there absolutely was a maximum-solar-mass BH at its inactive center when its photograph was taken. But being beneath the minimum for a supermassive black hole, we wouldn't have been attempting to spot anything beneath that minimum, which would require much more sensitive observations.

Now, on the note of micro black holes, the only reason a solar mass black hole could bind a galaxy would be because it was lacking dark matter, those overgrown protons that are in most other galaxies. Plus, second cosmic life cycle die hard SMBHs heat up when they get blasted at the birth of a new universe (in a cyclic cosmology of course). 

These protons are really just giant black holes in the microverse. The electromagnetic polar jets of radiation of the primordial CMB would be the polarity of a giant electron. The neutron is a monster of a neutron star in the microverse. Relatively nearby is the proton; if you're an observer within the microverse, it's a quasar unlike anything you could imagine in power-scale, a gazillion times larger than that behemoth within the core of the IC 1101 galaxy (which is by far the largest SMBH we know about, at 4e+10 to 10e+10 solar masses). The giant proton-quasar feeds the neutron, this Kronos of a pulsar. Well, normally the pulsar feeds the quasar since the BH possesses a greater density of "mass", but in most cases the proton is positively charged as opposed to the anti-proton.

For most of its life, the anti/proton's quasar material is attracted to the neutron/micro-pulsar. Now, however, please note that all neutron-proton nuclei begin their life-cycles with the proton actually being a negatively charged anti-proton in this theory - but their life cycles end with it being a normal positively charged proton feeding the neutron with matter emanating from the single down quark of the proton to the single up quark of the neutron, before the cycle repeats with the reverse of that: with the neutron feeding the proton.

This means that its down quarks are a holographic compilation of magnetic dipole moments, the up quark is a hologram composed of a collection of briefer magnetic monopole moments - & vice versa for protons. Virtual particles aren't really what we think they are. Between negatively charged states, micro-expansion takes over, because positively charged protons are dispersing thermal picoscopic gasses, fleeting from evaporated black holes, & it takes a lot more time for new protons to form than to evaporate as shown during the oscillation frequency. This solves the antimatter problem.

https://i.imgur.com/YZFSQIy.jpg

https://i.imgur.com/ZWp0Ehz.jpg

This is much more versatile than QM; it works in explaining virtually any quantum effect. For instance, let's use the quantum Venn diagram paradox:

https://www.youtube.com/watch?v=zcqZHYo7ONs&t=25s

https://i.imgur.com/VxO1oaS.jpg

The non-virtual photons adopt new polarities as they expand, aka wave, through the vacuum mediums of the quantum sub-foam microverse. More polarizing filters=greater variety of polarities.

Quark-gluon plasma is the absolute densest state matter can take. We see it in the cores of neutron stars, discs of quasars as matter is folded upon itself by compressing spacetime (gravity/mass/dark matter) around macro black holes, & in the cosmic microwave background radiation.

But in this hypothesis it's more like a black star in a fully classical, not just semiclassical, framework of gravity.

Any denser, & matter is just a macro black hole as there's no space between micro black holes. It's composed of micro quasars with micro black holes at their cores, barely held apart by micro expansion. Unlike vacuum radiation & the atomic world, these microverses are non-anthropic (no stellar eras) because less entropy equates to less complexity. Quark-gluon plasma is the only state of matter composed entirely of microverses that are exclusively the same as itself. Atoms & vacuum radiation will have microverses with atoms, quark-gluon plasma & vacuum radiation within them, quark-gluon plasma is only composed of microverses that are entirely filled with quark-gluon plasma.

 

I want to look at how particles of different kinds might be entangled in this theory: forward-moving gravitational waves in front of relativistic particles yank particles with perpendicular trajectories at intersection points; this allows particles to communicate faster than the speed of light. It's like an array of electrons through the 16,000-meter copper wire continuously getting T-boned by the G waves of other electrons, synchronizing their spins.

 

Now this theory isn't in the normal form you'd see, with its λmax·(4π/3)r³, but math is math & there's some debate as to whether the form of math we're accustomed to is even real. In reality math is just the yin-yang pattern of nature, so my form's as good or real or accurate as any.

 

Earlier we determined that 

 

Quote

 

 

The CMB had a radius of 6.9 billion light years, or 6.52809e+28 meters, & λmax of about 1,000 nm.

f(n)=(1e-6)(4π/3(6.52809e+28)^3)=1.1653249e+81

c(f(n))=6/(12π(1.1653249e+81)^(1/3))=1.5124155e-28 m/s

 

However, we also determined that our observations of the CMB gave us only part of the picture.

 

Quote

 

7.5801936e+33 x 3.125e+22 = 2.3688105e+56 meters for the superverse electron. Does not confirm, the CMB should be 2.3688105e+56/2=1.1844052e+56, 1.1844052e+56/6.52809e+28=1.8143212e+27 times larger than what we can see. We can't see so much of the CMB for the same reason we can't see forever into the past, it's from a combination of redshift & the fact that the ion interference makes light fade into oblivion eons before it gets near us. For our next dilation of c equation:

f(n)=(1e-6)(4π/3(1.1844052e+56)^3)=6.959684e+162 cubic meters

c(f(n))=6/(12π(6.959684e+162)^(1/3))=8.3359856e-56 m/s

 

So this superverse electron is 1.8143212e+27 x 13.8 billion light years. It's therefore 2.5037633e+37 light years in diameter, with a radius of 1.1844052e+53 meters.  So our dilation of c equation becomes 

 

f(n)=(1e-6)(4π/3(1.1844052e+53)^3)=6.959684e+153

 

c(f(n))=6/(12π(6.959684e+153)^(1/3))=8.3359856e-53 m/s

 

So, not only is the CMB expanding, not only is it a giant electron, not only is it spun by outside gravitational forces, but it also is going in one direction with a velocity, with the gravitational waves propagating at the speed at which it's moving plus the speed at which GWs propagate at a length of 2.3688104e+53 meters. We can determine from all of this the velocity at which particles become entangled in the superverse, & from that we can determine the velocity at which they become entangled in the subverse (sub-atomic world).

 

The electron travels at 2,200 kilometers per second. Since the speed of light for a superversal electron is going to be 136.269299091 times faster than the speed of that electron, all we need is the relative speed of light for that portion of a superverse, which can be found using the length of GWs for the superverse electron (2.3688104e+53 meters), which we find by multiplying the speed of light by the length of c's GWs, which we actually determined earlier:

 

Quote

 

 

Recall earlier that the velocity of light dilates by a factor of 299792458/2.8614552e-13 = 1.0476923e+21 over 2.5 million light years. Therefore, the speed of light is only viable over a distance of 2500000(9.461e+15)/1.0476923e+21=22.5758078016 meters in a near perfect vacuum (lambda max of the vacuum).

Length of strong GWs (where v(g)=c) = 22.5758078016 x 299792458 = 6768056912.18 meters

 

 

299792458(2.3688104e+53/6768056912.18)=1.0492694e+52 m/s. Now wait, that's not actually the speed of light in the superverse, but it is the speed of gravity waves for the superverse electron, which will be added to Super C/136.269299091 in order to find the rate at which electrons entangle other particles in the superverse. Earlier we found super tp & super lp, which can be used to find super c:

 

Quote

 

Okay, there's 6.52809e+28 meters in the radius of the CMB, using (4π/3(1e-12)^3), you can fit 1.165325e+123 electrons into the electrons of the next cosmic scale. Let's see if my math confirms that number using super lp:


2.7742023e+71/299,792,458/6.58e-15=1.4063439e+77 m/s. Planck length over planck time equals 296846011.132 m/s.

 

1.4063439e+77/296846011.132=4.737621e+68 m/s as your new planck length over planck time. 296846011.132 x 5.39e-44 equals lp, so super lp equals

 

1.4063439e+77 x 5.39e-44 = 7.5801936e+33 meters. 7.5801936e+33/4.737621e+68=1.6e-35, which is the planck length (lp).

 

super lp: 7.5801936e+33 meters

 

super tp: 7.5801936e+33/296846011.132 = 2.5535777e+25 seconds

 

To find super c we do (2.5535777e+25/5.39e-44)(7.5801936e+33/1.6e-35)/299792458=7.486864e+128 m/s (which can actually be used to find the size of structures in the super super verse because the length of this GW on the super verse scale equals the 6768056912.18 meters in which luminal GWs begin to propagate on our scale). 

 

Okay so the superverse electron travels at 7.486864e+128/136.269299091=5.494168e+126 m/s. On one side, depending on what direction it's going, the GWs of the forward direction entangle particles directly in front at a velocity of 1.0492694e+52 (the velocity of GWs for the superverse electron) + 5.494168e+126 (the speed of the electron). But remember that as you chain-link more particles via entanglement, there's a dilation of entangled velocities, just as the speed of light depends on the length of the GWs.
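Pulling the superverse numbers used above into one place, here is a sketch of how they combine; the 136.269299091 factor is just c divided by the electron's 2,200 km/s, and all other inputs are taken straight from this post.

C = 299_792_458.0

# speed of GWs for the superverse electron: its GW length over the length of
# the "strong" GWs found earlier, times c
gw_super_electron = C * (2.3688104e53 / 6768056912.18)   # ~1.049e52 m/s

# super c from the super Planck units found earlier
super_c = (2.5535777e25 / 5.39e-44) * (7.5801936e33 / 1.6e-35) / C   # ~7.487e128 m/s

factor = C / 2.2e6                       # ~136.27, c over the electron's speed
v_super_electron = super_c / factor      # ~5.494e126 m/s

# forward entanglement velocity, per the post: GW speed plus electron speed
print(gw_super_electron + v_super_electron)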

 

Recall earlier c(f(n)) for an electron was found to be 

 

Quote

 

 

The electron most likely has a radius of 10^-12 m, & λmax of about 4e-7 m (visible spectrum is where electrons like to hide).

f(n)=(4e-7)(4π/3(1e-12)^3)=1.6755161e-42

c(f(n))=4/(1.6755161e-42/(12π^(1/3)))=4.1957466e+43 m/s

 

4.1957466e+43, but remember we'd have to divide this velocity by the length of the electron times the speed of light to account for the contraction of time. 4.1957466e+43/(1e-12 x 299792458)=1.3995504e+47 m/s

 

The larger the distance being covered, the slower QE's velocity will be relative to the speed of light. Let's measure QE for a 16km copper wire;

 

V(sa)=299792458 + ((1.3995504e+47 x .136269299091)/(8.5e+28 x 16000))

 

V(sa)=1.4023517e+13 m/s. Over 46,777 times faster over a 16 kilometer distance according to my approximation, but at least 13,800 times faster according to the measurements.
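A last sketch, for the 16 km estimate above; the 8.5e28 figure is the free-electron density of copper per cubic metre, and the 0.136269299091 multiplier and the 16,000-metre length are used exactly as they appear in the post.

C = 299_792_458.0

v_electron_contracted = 1.3995504e47   # from the time-contraction step above, m/s
n_e = 8.5e28                           # free-electron density of copper, per m^3
length = 16_000.0                      # wire length, m

v_sa = C + (v_electron_contracted * 0.136269299091) / (n_e * length)
print(v_sa)        # ~1.402e13 m/s
print(v_sa / C)    # ~46,778 times c, the approximation quoted above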

 

 

Spoiler

empty space ought not be really empty. We have two good reasons to think so: first, electromagnetic signals behave undoubtedly as waves; since they propagate even through intergalactic space, there must be some thing there (everywhere), in which they do wave. Second, quantum theory predicts that vacuum has physical effects, such as the Casimir effect, which is now experimentally confirmed [1].

"Einstein had difficulties with the relativistic invariance of quantum mechanics (“does
the spooky information transmitted by these particles go faster than light?”). These,
however, are now seen as technical difficulties that have been resolved. It may be consid-
ered part of Copenhagen’s Doctrine, that the transmission of information over a distance
can only take place, if we can identify operators A at space-time point x1 and operators
B at space-time point x2 that do not commute: [A, B] 6= 0 . We now understand that, in
elementary particle theory, all space-like separated observables mutually commute, which
precludes any signalling faster than light. It is a built-in feature of the Standard Model,
to which it actually owes much of its success.
So, with the technical difficulties out of the way, we are left with the more essential
Einsteinian objections against the Copenhagen doctrine for quantum mechanics: it is a
probabilistic theory that does not tell us what actually is going on. It is sometimes even
suggested that we have to put our “classical” sense of logic on hold. Others deny that:
“Keep remembering what you should never ask, while reshaping your sense of logic, and
everything will be fine.” According to the present author, the Einstein-Bohr debate is not
over. A theory must be found that does not force us to redefine any aspect of classical,
logical reasoning.
What Einstein and Bohr did seem to agree about is the importance of the role of an
observer. Indeed, this was the important lesson learned in the 20th century: if something
cannot be observed, it may not be a well-defined concept – it may even not exist at all. We
have to limit ourselves to observable features of a theory. It is an important ingredient
of our present work that we propose to part from this doctrine, at least to some extent:
Things that are not directly observable may still exist and as such play a decisive role
in the observable properties of an object. They may also help us to construct realistic
models of the world.
Indeed, there are big problems with the dictum that everything we talk about must be
observable. While observing microscopic objects, an observer may disturb them, even in
a classical theory; moreover, in gravity theories, observers may carry gravitational fields
that disturb the system they are looking at, so we cannot afford to make an observer
infinitely heavy (carrying large bags full of “data”, whose sheer weight gravitationally
disturbs the environment), but also not infinitely light (light particles do not transmit
large amounts of data at all), while, if the mass of an observer would be “somewhere in between”, ."


More evidence:

The situation is somewhat different when we consider gravity and promote the Lorentz violating tensors to dynamical objects. For example in an aether theory, where Lorentz violation is described by a timelike four vector, the four vector can twist in such a way that local superluminal propagation can lead to energy-momentum flowing around closed paths [206]. However, even classical general relativity admits solutions with closed time like curves, so it is not clear that the situation is any worse with Lorentz violation. Furthermore, note that in models where Lorentz violation is given by coupling matter fields to a non-zero, timelike gradient of a scalar field, the scalar field also acts as a time function on the spacetime. In such a case, the spacetime must be stably causal (c.f. [272]) and there are no closed timelike curves. This property also holds in Lorentz violating models with vectors if the vector in a particular solution can be written as a non-vanishing gradient of a scalar. Finally, we mention that in fact many approaches to quantum gravity actually predict a failure of causality based on a background metric [121] as in quantum gravity the notion of a spacetime event is not necessarily well-defined [239]. A concrete realization of this possibility is provided in Bose-Einstein condensate analogs of black holes [40]. Here the low energy phonon excitations obey Lorentz invariance and microcausality [270]. However, as one approaches a certain length scale (the healing length of the condensate) the background metric description breaks down and the low energy notion of microcausality no longer holds.

----

In the Bohmian view, nonlocality is even more conspicuous. The trajectory of any one particle depends on what all the other particles described by the same wave function are doing. And, critically, the wave function has no geographic limits; it might, in principle, span the entire universe. Which means that the universe is weirdly interdependent, even across vast stretches of space.

----

The hole is quantum-mechanically unstable: It has no bound states. Wormhole wave functions must eventually leak to large radii. This suggests that stability considerations along these lines may place strong constraints on the nature and even the existence of spacetime foam.

----

In invariant set theory, the form of the Bell Inequality whose violation would be inconsistent with realism and local causality is undefined, and the form of the inequality that it violated experimentally is not even gp-approximately close to the form needed to rule out local realism (54) [21]. A key element in demonstrating this result derives from the fact that experimenters cannot in principle shield their apparatuses from the uncontrollable ubiquitous gravitational waves that fill space-time.

----

A finite non-classical framework for physical theory is described which challenges the conclusion that the Bell Inequality has been shown to have been violated experimentally, even approximately. This framework postulates the universe as a deterministic locally causal system evolving on a measure-zero fractal-like geometry IU in cosmological state space. Consistent with the assumed primacy of IU , and p-adic number theory, a non-Euclidean (and hence non-classical) metric gp is defined on cosmological state space, where p is a large but finite Pythagorean prime. Using numbertheoretic properties of spherical triangles, the inequalities violated experimentally are shown to be gp-distant from the CHSH inequality, whose violation would rule out local realism. This result fails in the singular limit p = ∞, at which gp is Euclidean. Broader implications are discussed.

----

This optical pumping scenario is implicitly based on the erroneous quantum mechanical “myth” that quantum “jumps” are instantaneous. In reality transitions between atomic levels take very, very long times, about 10 million times longer than the oscillating period of the electromagnetic radiation that drives the excitation.

 

 

A photon isn’t even a point particle in this theory; it’s a bunch of tiny galaxy clusters tugging on each other. A neutrino is like a decaying neutron star that is a few trillion light years in diameter.


1 hour ago, inSe said:

vector calculus does not apply here. This is classical physics

Huh?

1 hour ago, inSe said:

A photon isn’t even a point particle in this theory; it’s a bunch of tiny galaxy clusters tugging on each other. A neutrino is like a decaying neutron star that is a few trillion light years in diameter.

"Oh my god, it's full of stars"


2 hours ago, Strange said:

Huh?

 

Nope, you only need to use the equation for finding the volume density of a sphere. Then invert it for the black holes. That's all you need to define these dimensions mathematically. Plus, without QCD or point-like particles, how can there be color charge? The dimensional analysis gets completely remade by the time you need to use vectors; that's past the defining state, though.

2 hours ago, Strange said:

"Oh my god, it's full of stars"

Yes.

3 hours ago, inSe said:

Cantor & Zeno's infinitesimals are like mnemonic devices that I’ve used to see a larger mathematical picture here. A basis for concepts like scale relativity as a process in which mechanical structures like black holes can be found at every point in space.

Any objections? 

Edited by inSe

1 hour ago, inSe said:

Any objections? 

None at all. In fact your OP is the longest stream of bullshit I’ve seen on this forum for a long time. Please continue while you still can because when (please note theres no if) this thread ends up in the trash where it belongs you won’t be able to post in it anymore...as it will get locked. Go on, have a blast and ridicule yourself some more with arbitrarily arranging words of which meaning you have no clue. 

Edited by koti

1 hour ago, koti said:

None at all. In fact your OP is the longest stream of bullshit I’ve seen on this forum for a long time. Please continue while you still can because when (please note theres no if) this thread ends up in the trash where it belongs you won’t be able to post in it anymore...as it will get locked. Go on, have a blast and ridicule yourself some more with arbitrarily arranging words of which meaning you have no clue. 

No I backed it up with my λmax based equation, which matches a great deal of real measurements such as black hole evaporation and the measured velocity of "spook action" over 16km. If you believe my ideas are false you'll have to explain how my "bullshit" equation checked out with actual measurements of the physical universe on its own. It happened more than once, it was no accident or fluke that the velocities & rates of time were so close to what's measured. If I hadn't used the rounded lp & tp it would have matched exactly, & you can test that yourself. 

Edited by inSe

13 hours ago, inSe said:

No I backed it up with my λmax based equation, which matches a great deal of real measurements such as black hole evaporation and the measured velocity of "spook action" over 16km. If you believe my ideas are false you'll have to explain how my "bullshit" equation checked out with actual measurements of the physical universe on its own. It happened more than once, it was no accident or fluke that the velocities & rates of time were so close to what's measured. If I hadn't used the rounded lp & tp it would have matched exactly, & you can test that yourself. 

What you have done is shown how little you understand why vector Calculus is used in physics. It also shows how little you understand the higher dimensions and how they apply under calculus.

In calculus a dimension is simply any independent variable that can change value in an equation without affecting any other variable within that equation. In string theory application it is often different potential regions within the overall global volume, i.e. 3 dimensions for volume, then the various field interactions from the fundamental forces.

I looked over your post and can see no equation or methodology that reflects Newton Cartan gravity. You have none of the Newton Cartan gravity formulas that I can tell nor have you demonstrated a proper understanding behind the theory itself.

Cartan gravity requires a Yang Mills symmetry group [latex] SO(1.4)[/latex]

Can you demonstrate the affine connection [latex]\Gamma^\rho_{\mu\nu}=\frac{1}{2}g^{\rho\sigma}(\partial_\mu g_{\sigma\nu}+\partial_\nu g_{\mu\sigma}-\partial_{\sigma}g_{\mu\nu})[/latex]

unless you can produce the affine connections that will later involve the geodesic equations your theory is meaningless

the gauge connection for Cartan gravity being

[latex]A_{\mu^A_B}(x)[/latex]

Edited by Mordred

2 hours ago, Mordred said:

What you have done is shown how little you understand why vector Calculus is used in physics. It also shows how little you understand the higher dimensions and how they apply under calculus.

In calculus a dimension is simply any independent variable that can change value in an equation without affecting any other variable within that equation. In string theory application it is often different potential regions within the overall global volume, i.e. 3 dimensions for volume, then the various field interactions from the fundamental forces.

This is specifically why I said vector calculus does not apply here. The only fundamental interaction occurs when a positive density volume contacts with a negative density volume. They negate each other. I defined this process as time contraction/dilation dependent on the level of λmax. The reduction of the third dimension is Dynamical, which is time.

 

3 hours ago, Mordred said:

 

I looked over your post and can see no equation or methodology that reflects Newton Cartan gravity. You have none of the Newton Cartan gravity formulas that I can tell nor have you demonstrated a proper understanding behind the theory itself.

Cartan gravity requires a Yang Mills symmetry group SO(1.4)

Can you demonstrate the affine connection Γ^ρ_μν = (1/2) g^ρσ (∂_μ g_σν + ∂_ν g_μσ - ∂_σ g_μν)

unless you can produce the affine connections that will later involve the geodesic equations your theory is meaningless

the gauge connection for Cartan gravity being

A_μ^A_B(x)

I haven't gotten to that part yet. I had to lay the sediment by removing point like particles & confirming that my alternative meets actual measurements 


4 hours ago, inSe said:

This is specifically why I said vector calculus does not apply here. The only fundamental interaction occurs when a positive density volume contacts with a negative density volume. They negate each other. I defined this process as time contraction/dilation dependent on the level of λmax. The reduction of the third dimension is Dynamical, which is time.

 

This makes absolutely no sense...

Why would vector calculus not be applicable for starters ?

 The purpose of a geodesic equation is to model vectors ( freefall path under relativity) ie via action as one methodology upon particle free fall as per Newton for non relativistic or Einstein Cartan for relativistic. This is the distinction between the two theories.  The very purpose of a geodesic 

 

 So when describing particle or even a waveform under motion why would you not apply vector calculus ?

 

Edited by Mordred

24 minutes ago, Mordred said:

This makes absolutely no sense...

Why would vector calculus not be applicable for starters ?

 The purpose of a geodesic equation is to model vectors ( freefall path under relativity) ie via action as one methodology upon particle free fall as per Newton for non relativistic or Einstein Cartan for relativistic. This is the distinction between the two theories.  The very purpose of a geodesic 

 

 So when describing particle or even a waveform under motion why would you not apply vector calculus ?

 

It hasn't been necessary yet; so far we have one interaction on which every variable depends, and you only need λmax to define it.


You haven't really shown anything other than a smattering of wild conjectures...in your OP. You have presented so many different models and interactions in your OP that it is literally a random collection of ideas. Break each interaction within those you're modelling into separate entities under the same field metric. Under fields, scalar quantities/temperature are readily modelled. Any quantity that includes a direction of change is naturally a vector.

 Now try to organize each and every coordinate as to what occurs with the chosen interaction.

 Sounds like calculus to me....try to organize variation of change of any measured value (regardless of what it represents) on a coordinate basis without vector calculus.

Good luck on that

Now here is the trick: Newton Cartan is a torsion-related model; good luck modelling torsion or any rotational-interaction-based theory without the use of vectors and calculus. A physics model is pointless unless it describes some form of interaction; a model such as the ideas you have above has many, all not well described under a metric.

 

 

 

Edited by Mordred

18 hours ago, Mordred said:

You haven't really shown anything other than a smattering of wild conjectures...in your OP. You have presented so many different models and interactions in your OP that it is literally a random collection of ideas. Break each interaction within those you're modelling into separate entities under the same field metric. Under fields, scalar quantities/temperature are readily modelled. Any quantity that includes a direction of change is naturally a vector.

 Now try to organize each and every coordinate as to what occurs with the chosen interaction.

 Sounds like calculus to me....try to organize variation of change of any measured value (regardless of what it represents) on a coordinate basis without vector calculus.

Good luck on that

Now here is the trick: Newton Cartan is a torsion-related model; good luck modelling torsion or any rotational-interaction-based theory without the use of vectors and calculus. A physics model is pointless unless it describes some form of interaction; a model such as the ideas you have above has many, all not well described under a metric.

 

 

 

They weren't random or wild. They were creative explanations for deleterious di-brane based solutions to the fundamental interactions along with spook action. 

This might be a variation of NC g but it doesn't involve a smooth manifold, so AGT is going to be tricky for this variation of NC g. 

On 4/7/2018 at 10:04 AM, Mordred said:


A_μ^A_B(x)

That's based on equilateral triangles, which are smooth. How would that even work for Koch's triangle?

I'm not saying it wouldn't involve vector calculus btw. Vector calculus isn't necessary in understanding straight trajectories in spook action or units of time in BH evaporation, or in the concept of six Euclidean dimensions as three positive euclidean dimensions plus three negative euclidean dimensions (a deleterious di-brane) until actual resulting structures form from the self-automated deletion of the smooth branes, leading to one non-smooth structure that looks like the universe:

https://imgur.com/a/XdWp3

Edited by inSe

https://arxiv.org/pdf/1010.0775.pdf

https://arxiv.org/pdf/1612.05341.pdf

That's how it works for a Koch snowflake.

But the dislocations still were not necessary for when I calculated spook action (close enough to match its measured velocity, suggesting particle velocity plus g-wave velocity is what T-bones particles with perpendicular trajectories, synchronizing the spins of said particles at superluminal rates given v(gw)=c), because I wasn't calculating this velocity for any particle trajectory in particular.

Edited by inSe

That isn't based on Koch's snowflake; it's a tensor. Unfortunately the latex doesn't display the superscript and subscript without stacking them on top of each other. However, it is an antisymmetric tensor representing spatial rotation or torsion of the Yang Mills gauge connection. A and B are the SO(1.4) gauge indices.

[latex]A^A_B(x)=A_\mu A^A_B dx^\mu[/latex] Under the Einstein summation rules, for an antisymmetric tensor you will have a superscript index followed by a subscript index, indicating an antisymmetric tensor. Unfortunately the latex typically stacks the two instead of the superscript followed by the subscript. The formulas I posted are in the coordinate basis of the four momentum/velocity for SO(1.4); the tensor signature is diag(-1,1,1,1,1) (diag=orthogonal), so in the equation in this post A=(0,1,2,3,4), however [latex]\mu=(0,1,2,3)[/latex]. This will correspond to the anti-de Sitter spacetime as per certain variations of AdS/CFT. Remember in Newton Cartan you have embedded fields with different symmetries. In this case we have electromagnetism and gravity. This complicates the Levi-Civita densities to [latex]\epsilon^{\mu\nu\rho\sigma}[/latex] for the

SO(1.4): the rolling indices are ABC=0 to 4, for SO(3) i,j,k=(0 to 3), SO(3) i,j,k=(1,2,3); the spacetime connection 4d [latex]\mu,\nu,\rho[/latex]=(0 to 3); the Cartan rolling is the tensor [latex]A^{AB}[/latex] with contact vector [latex]V^A[/latex] for the 4d or [latex]V^i[/latex] for the 2d. Torsion is [latex]\tau^A[/latex] for 4d, [latex]\tau^i[/latex] for 2d. Does that help or overly confuse lol.

PS the Poincare group is SO(1,3)

 

Edited by Mordred

6 hours ago, Mordred said:

That isn't based on Koch's snowflake; it's a tensor. Unfortunately the latex doesn't display the superscript and subscript without stacking them on top of each other. However, it is an antisymmetric tensor representing spatial rotation or torsion of the Yang Mills gauge connection. A and B are the SO(1.4) gauge indices.

A^A_B(x)=A_\mu A^A_B dx^\mu Under the Einstein summation rules, for an antisymmetric tensor you will have a superscript index followed by a subscript index, indicating an antisymmetric tensor. Unfortunately the latex typically stacks the two instead of the superscript followed by the subscript. The formulas I posted are in the coordinate basis of the four momentum/velocity for SO(1.4); the tensor signature is diag(-1,1,1,1,1) (diag=orthogonal), so in the equation in this post A=(0,1,2,3,4), however \mu=(0,1,2,3). This will correspond to the anti-de Sitter spacetime as per certain variations of AdS/CFT. Remember in Newton Cartan you have embedded fields with different symmetries. In this case we have electromagnetism and gravity. This complicates the Levi-Civita densities to \epsilon^{\mu\nu\rho\sigma} for the

SO(1.4): the rolling indices are ABC=0 to 4, for SO(3) i,j,k=(0 to 3), SO(3) i,j,k=(1,2,3); the spacetime connection 4d \mu,\nu,\rho=(0 to 3); the Cartan rolling is the tensor A^{AB} with contact vector V^A for the 4d or V^i for the 2d. Torsion is \tau^A for 4d, \tau^i for 2d. Does that help or overly confuse lol.

PS the Poincare group is SO(1,3)

 

I knew the left side of the affinity equation would be negative looking at it, and that didn't make sense to me. Now it does; it also makes sense now, looking at what you just wrote (2i=A), why the affine gauge is 4d.

But I'm not familiar with standard form yet. What I don't understand is what to plug in, or where. Esp for the so(3)

Edited by inSe

What I do know is n in gamma is going to be a tiny fraction considering this is particle physics, which is how I know the left side of the affinity is the asymmetric. It seems like the affine is the subscript of the gauge? Then with a smooth manifold I have to get a rough koch/hilbert curve manifold before I can even start the geodesic


We cross posted while I was getting a link for affine connections see above.

Here is the thing: learning tensors requires a large background in preliminaries, including differential geometry. These are not the same mathematical objects one is used to as per algebra. Secondly, when one is working in Newton Cartan one also must apply the Cartan connections between manifolds. It's not something that is easily or readily explained on a forum; however, one uses normalized units

[latex]c=g=\hbar=1 [/latex] 

Cartan geometry follows different lemmas than Riemannian geometry, so one must be familiar with the differences. Well, here is a 168-page article on Newton Cartan. You can see from this that the problem is far more complex than merely plugging in values in the above equation.

https://www.google.ca/url?sa=t&source=web&rct=j&url=https://www.rug.nl/research/portal/files/34926446/Complete_thesis.pdf&ved=2ahUKEwj9vZykxrDaAhWpsFQKHe3EA4oQFjAHegQIBhAB&usg=AOvVaw2GAhJAD69j8f-S0aGy2o1c

 

Now ask yourself the following question. Do the equations you posted even begin to touch upon Newton Cartan theory in accordance with the lemmas and axioms under Cartan theory?

Edited by Mordred

I knew it wasn't a matter of plugging in values looking at it.

Most people go through pre-calc without having a subconscious urge to pry 100% into the proofs for a total understanding of Euclidean geometry. It was me & another student; neither of us even bothered finishing the course, because neither of us could settle for the incompleteness of the curriculum. It's a form of OCD.

Was I wrong to not want to move on to calculus before fully understanding geometry? Obviously not, I wouldn't have the demonstrably true proofs of what actually entangles or creates mass or materializes virtual particles in my op if I didn't think that way.

You said I had "equations not well defined by a metric"; those equations coincide with what's expected from BH proton evaporation rates & the measured velocity of spook action avoiding Bell's inequality.

Edited by inSe

Which course and how far did you get into it ? This will help me provide a direction and give me an idea on your math skills. 

For example, if you have never worked with tensors before, that alone requires a huge study to use properly. Let alone understanding how Newton Cartan applies the Einstein field equations under the Newton limit with gauge group symmetries.

Edited by Mordred

39 minutes ago, Mordred said:

 

 

Now ask yourself the following question. Do the equations you posted even begin to touch upon Newton Cartan theory in accordance with the lemmas and axioms under Cartan theory?

None of them cover any specific interaction under the dislocated particle trajectories, but they do define rates of generalized interaction do they not?

2 minutes ago, Mordred said:

Which course and how far did you get into it ? This will help me provide a direction and give me an idea on your math skills. 

For example, if you have never worked with tensors before, that alone requires a huge study to use properly. Let alone understanding how Newton Cartan applies the Einstein field equations under the Newton limit with gauge group symmetries.

I took it twice; the second time I got hit with everything completely different. Basically everything there is concerning algebra & trigonometry, basically impossible to complete in one course, so what's given is subject to the professor's discretion. I taught myself more than the textbook; the textbook selects from 18th-century proofs, which was what I got into. But I don't remember much.

 

Newton Cartan builds a manifold, a change over time. What my equations showed was a lot more rudimentary.


Unfortunately, in order to compete with Newton Cartan theory as it is today, one must show and prove one's theory or methodology is equally complete and accurate. Needless to say, what you have is far too rudimentary to provide a reasonable alternative to Newton Cartan theory. Needless to say, a handful of equations doesn't compare to what is involved in Newton Cartan theory. There is a reason for this: a physicist will want to be able to determine what will occur at every infinitesimal of every coordinate. This is the completeness of a theory's predictability and a measure of its completeness. Needless to say, you have years of preliminary work in front of you to approach this degree of predictability.


32 minutes ago, Mordred said:

Unfortunately, in order to compete with Newton Cartan theory as it is today, one must show and prove one's theory or methodology is equally complete and accurate. Needless to say, what you have is far too rudimentary to provide a reasonable alternative to Newton Cartan theory. Needless to say, a handful of equations doesn't compare to what is involved in Newton Cartan theory. There is a reason for this: a physicist will want to be able to determine what will occur at every infinitesimal of every coordinate. This is the completeness of a theory's predictability and a measure of its completeness. Needless to say, you have years of preliminary work in front of you to approach this degree of predictability.

I don't really think it will take that long. Especially not if I work with professors, maybe they'd even know some average people who work in the applied sciences of communication theory. Applied, meaning they can create signalling devices with any form of communication based on varied particle physics, such as the classically unified field oscillations suggested in this theory

Edited by inSe

Well, as a physicist I can tell you that you need a lot of preliminary work to make what you have above presentable and complete enough to even have a hope of getting a Ph.D. on board. The question comes down to how far you are willing to work on your model to develop it.


1 minute ago, Mordred said:

Well, as a physicist I can tell you that you need a lot of preliminary work to make what you have above presentable and complete enough to even have a hope of getting a Ph.D. on board. The question comes down to how far you are willing to work on your model to develop it.

But you see, with the right connections (even people who are currently only aspiring to be in physics or the applied sciences), a rudimentary understanding of Unified Field Oscillations goes a long way.

Especially these days


This topic is now closed to further replies.
