
Simplifying SR and GR with Relational Geometry — Algebraic Derivations Without Tensors. Testing and discussion.


3 hours ago, Anton Rize said:

Could you elaborate please? What was the reasoning that led you to this conclusion? And prior knowledge of what?
You can use this derivation chart to point out the logical block that "require prior knowledge" https://willrg.com/LOGOS_MAP/

I will try to answer the question once I understand what you are doing. Philosophically, there are things I agree with you on; however, I am wondering how you are getting the answers, why it is possible that you are getting them, and why it works like that... because if it is true that it works, then the implications are profound...

13 hours ago, MJ kihara said:

You use signal delay to determine distance..how do you determine where the signal is coming from and how do you determine the delay?

I would like you to answer the question above because it would clarify to us your thinking.

There are four fundamental quantities in your formulation/method; 1-Energy...2-Time...3-mass...4-length.

Can you briefly explain to us your understanding of those quantities and your intuition about them? From the perspective of your formulation.

  • Author
2 hours ago, MJ kihara said:

how you are getting the answers

Thank you. I genuinely appreciate this question. You are the first person in this discussion to ask about the philosophical foundation, and to be honest, it is the most important part of the entire framework.

If you look only at the equations, it is perfectly natural to suspect "reverse engineering" or curve-fitting based on prior knowledge of SR and GR. But the reality is exactly the opposite, and it is the most fascinating aspect of this research.

The answers you see are not assembled by picking and choosing variables to match textbook results. The entire framework is the inevitable consequence of a single choice made at the very beginning, governed by two strict rules:


1. Epistemic Hygiene:

"This line of reasoning derives physics by removing hidden assumptions, rather than introducing new postulates... No assumptions are introduced and no constructs are retained unless they are geometrically or energetically necessary."

2. Relational Origin:

"All physical quantities must be defined by their relations. Any introduction of absolute properties risks reintroducing metaphysical artefacts."



And now we proceed from strict epistemic minimalism, disallowing all background structures, even hidden or asymptotic ones.

Historical Pattern: breakthroughs delete, not add

* Copernicus eliminated the Earth/cosmos separation.

* Newton eliminated the terrestrial/celestial law separation.

* Einstein eliminated the space/time separation.

* Maxwell eliminated the electricity/magnetism separation.

Each step widened the relational circle and reduced the number of unexplained absolutes. The spacetime--energy split is the only survivor of this pruning sequence.

The contemporary split: an unpaid ontological bill

All present-day theories (SR, GR, QFT, ΛCDM, the Standard Model) are built with a bi-variable syntax:

[math]\underbrace{\text{fixed manifold + metric}}_{\text{structure}} + \underbrace{\text{fields + constants}}_{\text{dynamics}}[/math]

No observation demands this duplication; it is retained purely because the resulting Lagrangians are empirically adequate inside the split. The split is therefore not an empirical discovery but an unpaid ontological debt.

Consequence

Until an experiment varies the amount of space while holding everything else fixed, the spacetime-energy separation remains an unevidenced metaphysical postulate: the last geocentric epicycle in physics.


Here is how the derivation actually works (How I get the answers):

Step 1: Removing the Container

Standard physics assumes a background "container" (spacetime) that gets filled with "things" (energy/matter). By applying Epistemic Hygiene and Relational Origin, we must delete this unobservable background. We are left with only one invariant reality: Energy. But in a purely relational framework, Energy is not a scalar "thing" inside an object; it is strictly the relational measure of difference between states. Thus, we arrive at the core ontological equivalence: [math]\text{SPACETIME} \equiv \text{ENERGY}[/math].

Step 2: Getting on the "Rails"

If there is no background 3D grid, how do we encode this conserved relational energy? The mathematics forces the answer. The absolute minimal, background-free geometric structures capable of hosting a conserved relational resource are the closed topologies of [math]S^1[/math] (for directional kinematics) and [math]S^2[/math] (for omnidirectional gravitation). These are not objects in space; they are the protocols of interaction.

Step 3: Geometric Crystallization

Once the system is on these topological "rails," the physics derives itself. Because [math]S^1[/math] and [math]S^2[/math] are closed geometries, any physical state is simply an orthogonal projection, distributed between an External Amplitude (like kinematic shift, [math]\beta[/math]) and an Internal Phase (internal order/proper time, [math]\beta_Y[/math]).

Because the carrier is closed, the geometry strictly demands:

[math]\beta^2 + \beta_Y^2 = 1[/math]

This is where all the "answers" come from. I did not work backwards from Einstein's time dilation formula. Instead, the geometry dictates that if an observer's state increases its external relational amplitude ([math]\beta[/math]), it must geometrically rotate, necessarily decreasing its internal phase ([math]\beta_Y[/math]). That phase reduction is the physical phenomenon of time dilation.
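As a numerical sanity check (my own sketch, not from the WILL papers): with γ identified as 1/β_Y, the closure relation above reproduces the standard special-relativistic time-dilation factor:

```python
import math

def beta_Y(v, c=299_792_458.0):
    """Internal phase component from the closure relation beta^2 + beta_Y^2 = 1."""
    beta = v / c
    return math.sqrt(1.0 - beta**2)

c = 299_792_458.0
v = 0.5 * c                      # a kinematic shift of beta = 0.5

bY = beta_Y(v)
assert abs((v / c)**2 + bY**2 - 1.0) < 1e-12   # closure holds by construction

gamma = 1.0 / bY                               # identification gamma = 1/beta_Y
gamma_sr = 1.0 / math.sqrt(1.0 - (v / c)**2)   # standard SR Lorentz factor
assert abs(gamma - gamma_sr) < 1e-12
```

This only shows algebraic equivalence of the two expressions, not an independent derivation.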


There is no room for reverse engineering because there are zero free parameters. Once you remove the background space and enforce relational closure, the geometry simply doesn't leave any other algebraic path. The philosophy enforces the geometry, and the geometry generates the physics.

I hope this clarifies the methodology! I would be happy to answer any deep questions. I also encourage you to test it yourself: start from the same methodological principles and try to derive anything other than S^1 and S^2. I tried multiple times; it is impossible without inconsistencies. All the LLMs on which I tested this logical chain also converge on S^1 and S^2 independently.

If you are a reader:
You can find all derivations in detail in this document: https://willrg.com/documents/WILL_RG_I.pdf

If you are a viewer:
You can glance through the presentation slides: https://github.com/AntonRize/WILL/blob/218963db78ec0f0c2333704b5f1c34dffa4acf98/images/Relational_Geometry_I_21.02.26.pdf
Or you can watch a simple 7-minute video, but it's very basic: https://notebooklm.google.com/notebook/d2c97547-d93b-4f20-ad39-2cf307b8d91d?artifactId=67df8cab-b837-4ffd-8cdc-518eec5024eb

If you are a listener:
You can enjoy a nice Deep Dive podcast on the WILL_RG_I paper: https://notebooklm.google.com/notebook/d2c97547-d93b-4f20-ad39-2cf307b8d91d?artifactId=dbad15b6-8585-44e4-9306-ae25211177ee



So in conclusion:

I'm not inventing the answers; I'm simply revealing the inevitable consequences of the core methodological principles. The incredible part is that they happen to perfectly match the fundamental laws of physics.

4 hours ago, MJ kihara said:

because if it's true it's working, then the implications are profound...

Yeah, I know what you mean... It is a bit unsettling... so I prefer not to think about it. In the end, I'm driven not by the desire for recognition but by a personal need to resolve fundamental questions about reality in the simplest possible terms.

On 2/18/2026 at 7:45 AM, Anton Rize said:

Recall that energy is defined as the relational measure of difference between possible states. It is not an intrinsic property but a relational potential for change. It is never observed directly, only through transformations.

Energy is the ability to do work... by doing work, transformations are observed... therefore, your definition of energy is the same as the "ability to do work" version.

On 2/18/2026 at 7:45 AM, Anton Rize said:

* Observer [math]A[/math] is the center of their own relational framework. Observer [math]B[/math] is a point on [math]A[/math]'s [math]S^1[/math] (for kinematic relations) and [math]S^2[/math] (for gravitational relations).

* Simultaneously, observer [math]B[/math] is the center of their own framework. Observer [math]A[/math] is a point on [math]B[/math]'s [math]S^1[/math] and [math]S^2[/math].

What is the relational framework?... Observer A has a coordinate system whose axes are not X, Y but S^1 and S^2.

Every observer has his own reference frame with axes S^1 and S^2.

On 2/18/2026 at 7:45 AM, Anton Rize said:

The parameters [math]\beta[/math] and [math]\kappa[/math] are the coordinates within these relational dimensions

Where do these parameters fit? Are they between the observers, or within the observer's circle?

On 2/18/2026 at 7:45 AM, Anton Rize said:

the conservation laws (e.g., [math]\beta^2 + \beta_Y^2 = 1; \quad \kappa_X^2 + \kappa^2 = 1[/math])

What do the subscripts X and Y stand for?

On 2/18/2026 at 7:45 AM, Anton Rize said:

Observer [math]B[/math] is a point on [math]A[/math]'s [math]S^1[/math] (for kinematic relations) and [math]S^2[/math] (for gravitational relations).

S^1 for kinematic relations and S^2 for gravitational relations... The Hamiltonian equals kinetic energy plus potential energy... what you are doing is splitting the Hamiltonian and distributing its components onto axes.

On 2/18/2026 at 7:45 AM, Anton Rize said:

So the total relational shift [math]Q=\sqrt{\beta^2+\kappa^2}[/math] stays invariant between frames.

Q is equivalent to the spacetime interval, which, if I am not wrong, is the same for all observers and therefore invariant.

That's partially why I was asking about prior knowledge.

Judging from your posts, it's obvious to me you're not recognizing the true power and versatility of geometry. Any time I see a new theory that a member is developing, I always point out the need for a geometry.

The second most common recommendation I make is comparison against existing models, theories, or methodologies. Comparison isn't strictly about accuracy; it also includes flexibility of application and ease of use (your previously mentioned Occam's razor). Now, in order to help others develop these skills, I often have to take the opposite stance in discussions. So for this post I am going to use that technique by detailing the power of geometry and the usefulness of mappings. I will endeavor to keep this as simple as possible. As I mentioned before, the main goal isn't just making accurate predictions; that is fundamentally just a confirmation that the mathematics you're employing has some measure of validity. I will be playing devil's advocate in future posts, just so you are forewarned and aware of the reason. Hopefully nothing I state gets misconstrued on any personal level (all too common in discussions I've had with other members, banned or otherwise).

In my professional real-world experience, the single most important tool in all the work I've ever done is graphs. If you can't interpret a graph of results from measurements, you will never get work as a physicist, plain and simple; no exceptions in my experience. It doesn't make any difference which field of physics you're applying. A graph, however, is not restricted to (x, y, z, etc.); any parameter can be used as a replacement. Manifolds in particular exploit the fact that x, y, and z are convenient labels, nothing more (a coordinate basis). One can arbitrarily graph the number of apples compared to the number of oranges grown on a time-dependent graph where coordinates serve zero practical purpose. Yes, I have noted that some of your articles included a few graph comparisons. We're comparing flexibility of methodology, specifically spacetime vs. strictly energy.

For the mathematics below I will restrict myself to a simple Newtonian 3D geometry to demonstrate the flexibility. To start I will use basic Euclidean geometry:

\(ds^2=dx^2+dy^2+dz^2\)

Nothing fancy about that; as you stated, it's simply describing our container. Yet that container has incredible use as an aid to visualize, and in many ways simplify, an incredible range of mathematical relations. However, you don't need to stick to those geometric relations. I could very well be comparing apples, oranges, and grapes. LOL, we all know of graphs that have nothing to do with geometry... or at least should. That being said, let's play with the above geometry. We all know you can assign any scalar value to any coordinate; that's trivial.

One overlooked flexibility is the "visual aid". Whether or not a graph is coordinate-based is irrelevant. The true power comes in when you look for specific relations of distribution. From those distributions one can find patterns, and one can map those patterns onto the same graph; after all, that is the whole point of a graph in the first place. A good mapping of any graph, coordinate-based or not, is how one develops mathematical relations of change and rate of change. This is obviously where differential geometry comes into play; it doesn't matter whether your differential geometry uses calculus of variations, stochastic methods (probability), or differentials. These mathematics are obviously not restricted to physics. They apply to any engineering trade as well, and, if you think about it, to any programmer.

Now let's take that geometry above along with a scalar distribution of values. They could very well be just those energy quantities you have in your model development.

Let's say you see a pattern that all quantities are increasing in distance, or in any other value not involving distance, from each other. It could very well be that they all have an identical increase in value over some time (a rate of change). As they are all identical, they are all symmetric in their rate of change, so I can readily describe this with one parameter: a scale factor or, more accurately, a constant of proportionality, with common symbol "a". I also want a time dependency for the rate:

\(ds^2=a(\tau)^2(dx^2+dy^2+dz^2)\)

If, however, I notice a pattern of rotation about any principal axis (a reference), one can simply add a new term of proportionality to represent that rotation, say for example in the above:

\(ds^2=a(\tau)^2(dx^2+\omega(dy^2)+dz^2)\)

or alternately apply a vector field using \(z=f(x,y)\).
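A minimal numeric sketch of the scale-factor idea (the toy a(τ) is my own choice for illustration, not anything from the thread):

```python
import math

def ds2(dx, dy, dz, tau, a=lambda t: math.exp(0.1 * t)):
    """Line element ds^2 = a(tau)^2 (dx^2 + dy^2 + dz^2) with a toy scale factor a(tau)."""
    return a(tau)**2 * (dx**2 + dy**2 + dz**2)

# The same coordinate displacement spans a larger proper interval at later tau
early = ds2(1.0, 0.0, 0.0, tau=0.0)    # a(0) = 1, so ds^2 = 1
late = ds2(1.0, 0.0, 0.0, tau=10.0)    # a(10) = e, so ds^2 = e^2
assert early == 1.0
assert late > early
```

A rotation weight like ω on a chosen axis would enter the same way, as one more multiplicative factor on that axis's term.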

If I wish to embed some state with determined boundaries, such as a hyperbolic paraboloid:

\(\frac{x^2}{a^2}-\frac{y^2}{b^2}=\frac{z}{c}\)
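A minimal sketch of evaluating such an embedded surface (the shape parameters a, b, c are arbitrary choices for illustration):

```python
def hyperbolic_paraboloid_z(x, y, a=1.0, b=1.0, c=1.0):
    """Height z = c*(x^2/a^2 - y^2/b^2) of the saddle surface x^2/a^2 - y^2/b^2 = z/c."""
    return c * (x**2 / a**2 - y**2 / b**2)

# The surface curves up along x and down along y (a saddle)
assert hyperbolic_paraboloid_z(2.0, 0.0) == 4.0
assert hyperbolic_paraboloid_z(0.0, 2.0) == -4.0
assert hyperbolic_paraboloid_z(1.0, 1.0) == 0.0
```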

I have already established a means of point-by-point translations to embed any number of geometric objects at any specific location in the original mapping. We should all recognize that this makes symmetry relations far more flexible, both for spatial and rotational translations of those embedded systems or states, and for symmetry rotations and translations of the original mapping.

The above describes the flexibility of mappings via coordinates or any other parameter assigned to a manifold. With spacetime this also makes parallel-transport operations much easier to describe mathematically. A perfect example is developing affine connections for path integrals such as the geodesic equations.

I simply do not see that flexibility in any of your work. I would honestly hate to try any curve-fitting operation, or to find other mathematical relations describing embedded systems or states at a given location, with your methodology, let alone to perform any symmetry operations.

Yes, I know you have a graphical simulation of an orbiting body. However, in order to program that simulation you would have eventually relied on some form of coordinate system, simply to use the instruction set of the software. For that matter, many people are not aware that an AI uses a weighted sum of patterns in answers to a question and employs tensors of the weighted averages to determine the most common answer to the question asked.

Edited by Mordred

On 2/18/2026 at 7:45 AM, Anton Rize said:

protocols.

Observer A has a frame of reference with axes S^1 and S^2, while B has his own frame of reference with axes S^1 and S^2... your relational frame seems to be a universal frame of reference with axes beta and kappa.

On 2/18/2026 at 7:45 AM, Anton Rize said:

* Observer [math]A[/math] is the center of their own relational framework. Observer [math]B[/math] is a point on [math]A[/math]'s [math]S^1[/math] (for kinematic relations) and [math]S^2[/math] (for gravitational relations).

On 2/18/2026 at 7:45 AM, Anton Rize said:

Therefore I can map your state relative to mine as a point on the [math](\beta, \kappa)[/math] plane. And because the same rules apply to you, you can map me as a point on the [math](\beta, \kappa)[/math] plane of your relational frame.

The above two statements are not clear to me...

2 hours ago, MJ kihara said:

Energy is the ability to do work...by doing work, transformations are observed... therefore,your defination of energy is the same version as ability to do work.

using energy to describe changes of angles does not conform to the above definition for energy not without causation due to some application of force.

Edited by Mordred

  • Author
19 hours ago, MJ kihara said:

I would like you to answer the question above because it would clarify to us your thinking.

There are four fundamental quantities in your formulation/method; 1-Energy...2-Time...3-mass...4-length.

Can you briefly explain to us your understanding of those quantities and your intuition about them? From the perspective of your formulation.

Thank you for the excellent question. You are absolutely right to ask for clarification, as my previous late-night phrasing ("Space is a consequence of Time") was too philosophical. Let me give you the strict, operational physics perspective.


1. Distance and Signal Delay: Where does it come from?

Operationally, I don't start with a pre-existing 3D Cartesian box and place objects in it. In Relational Orbital Mechanics (R.O.M.), the only variable we have access to is the local clock of the receiver, which shows us not a coordinate axis, but the rate of energy transformations (Time). Using time as a spatialized coordinate axis introduces ontological baggage that goes against our methodological principles.

When we receive a photon stream, we do not inherently know its 3D origin vector. The "delay" is the fundamental physical reality - it is the causal lag between emission and reception. Distance is operationally defined through this lag: [math]L \equiv c \Delta t[/math].
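Operationally this is just light-travel-time ranging; a trivial sketch (the delay value is my own illustrative choice):

```python
c = 299_792_458.0  # m/s

def distance_from_delay(delta_t):
    """Operational distance L = c * delta_t, from the causal lag between emission and reception."""
    return c * delta_t

# A one-way delay of ~1.28 s corresponds to roughly the Earth-Moon distance
L = distance_from_delay(1.28)
assert 3.8e8 < L < 3.9e8   # ~3.84e8 m
```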

The 3D spatial geometry (the "where", like orbital inclination) is a mathematical consequence that my algorithm (I am still waiting for you guys to send me an anonymized dataset for the blind test) reconstructs post-facto by analyzing the invariant kinematic shifts ([math]\beta[/math]) embedded in the received signal (something that GR simply cannot do).


2. The Four Fundamental Quantities (E, M, T, L) and the Vacuum

In GR, spacetime is often treated as a "fabric" that can exist even when completely empty of energy ([math]T_{\mu\nu}=0[/math] vacuum solutions). In WILL RG, there is no such thing as an "empty void" due to the SPACETIME [math]\equiv[/math] ENERGY equivalence derived earlier.

What we call mass, energy, space, and time are strictly correlated projections of a single underlying relational structure: SPACE-TIME-ENERGY, which for convenience (and a bit of irony) I call WILL [math]\equiv[/math] SPACE-TIME-ENERGY. If we express Energy, Time, Mass, and Length dynamically using the framework's kinematic ([math]\beta[/math]) and potential ([math]\kappa[/math]) dimensionless projections, all dimensionful constants strictly cancel out, yielding the invariant identity:

[math]\frac{E}{M} = \frac{L}{T} = c^2[/math]
or in more "poetic" form
[math]W_{ILL}=\frac{ET^2}{ML^2} = 1[/math]
WILL [math]\equiv[/math] SPACE-TIME-ENERGY [math]\equiv[/math] 1
WILL is not the unit of something - but the Unity of Everything.

Every change in the relational state rescales Energy-Mass and Time-Length coherently. They cannot fluctuate independently.

https://willrg.com/documents/WILL_RG_I.pdf#sec:willinvariant

To demonstrate this operational lock, consider the Earth-GPS satellite system. Using its specific dimensionless projections ([math]\beta[/math] and [math]\kappa[/math]) and scaling, the framework calculates the absolute dimensional values:

[math]M_{GPS} \approx 5.972 \times 10^{24} \, \mathrm{kg}[/math]

[math]E_{GPS} \approx 5.367 \times 10^{41} \, \mathrm{kg \cdot m^2/s^2}[/math]

[math]T_{GPS} \approx 0.00785 \, \mathrm{s^2}[/math] (temporal scale projection)

[math]L_{GPS} \approx 7.060 \times 10^{14} \, \mathrm{m^2}[/math] (spatial scale projection)

Testing the identity gives exactly [math]\frac{E_{GPS}}{M_{GPS}} = \frac{L_{GPS}}{T_{GPS}} = c^2[/math], proving these are not independent parameters but interlocked geometric projections.
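The arithmetic of that identity can be checked directly from the values quoted above (my sketch; the numbers carry the rounding of the post):

```python
# Values as quoted in the post (SI units, rounded to the quoted precision)
c = 299_792_458.0    # m/s
M = 5.972e24         # kg
E = 5.367e41         # kg*m^2/s^2
T = 0.00785          # s^2  (temporal scale projection)
L = 7.060e14         # m^2  (spatial scale projection)

ratio_EM = E / M     # ~8.99e16 m^2/s^2
ratio_LT = L / T     # ~8.99e16 m^2/s^2

# Both ratios agree with c^2 to within the rounding of the quoted values
assert abs(ratio_EM / c**2 - 1.0) < 1e-2
assert abs(ratio_LT / c**2 - 1.0) < 1e-2
```

Note this only confirms the quoted numbers are mutually consistent with E/M = L/T = c², not how they were obtained.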

Because distance requires a causal signal (Energy), you cannot have a spacetime geometry without an underlying energetic structure. What GR calls the "empty vacuum" is modeled in WILL RG as a macroscopic standing wave - a Fundamental Tone ([math]f_0 = H_0/2\pi[/math]) generated by the tension of the causal horizon.

It is crucial to note that this [math]H_0[/math] is strictly derived from first principles using the CMB temperature and the fine-structure constant [math]\alpha[/math], not fitted to cosmological datasets: https://willrg.com/documents/WILL_RG_II.pdf#sec:deriving-H0

The necessity of this Fundamental Tone arises strictly from the system's topological closure. In a closed relational carrier, any relational perturbation cannot propagate indefinitely without re-encountering its own wavefront. Therefore, only resonant modes (where the phase shift completes a full rotation, [math]\Delta_\phi = 2\pi n[/math]) can accumulate sufficient structural energy ([math]Q_{total}[/math]) to form a persistent physical manifold. Dissonant modes self-cancel. The baseline resonance of this closed topology physically constitutes the Fundamental Tone of the observable Universe.
https://willrg.com/documents/WILL_RG_II.pdf#sec:tone


3. Testable Predictions: Why SPACETIME [math]\equiv[/math] ENERGY matters

This ontological stance is not just philosophy; it produces testable celestial mechanics without Dark Matter.

Because every local orbiting body shares the same closed geometry with the Global Horizon, the local orbital frequency constructively interferes with the Fundamental Tone of the "vacuum".

The total observed kinetic energy state of a star incorporates a geometric mean interference term:

[math]v_{obs}^2 = v_{N}^2 + \sqrt{v_{N}^2 \cdot (\Omega a_{Mach} r)}[/math]

Where [math]a_{Mach} = f_0 c = cH_0/2\pi \approx 1.05 \times 10^{-10} \, \mathrm{m/s^2}[/math].

https://willrg.com/documents/WILL_RG_II.pdf#sec:galactic-dynamics
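As a quick numeric cross-check of the quoted scale (my sketch; H0 = 68 km/s/Mpc is an assumed illustrative value, since the framework derives its own H0):

```python
import math

c = 299_792_458.0            # m/s
MPC = 3.0857e22              # meters per megaparsec
H0 = 68.0 * 1000.0 / MPC     # assumed H0 = 68 km/s/Mpc, converted to 1/s

a_mach = c * H0 / (2.0 * math.pi)   # ~1.05e-10 m/s^2, matching the quoted value
assert 1.0e-10 < a_mach < 1.1e-10
```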

To determine the coupling weight [math]\Omega[/math], we look at the Total Relational Shift ([math]Q^2[/math]). In a closed system, the energy is distributed between the potential ([math]S^2[/math]) and kinematic ([math]S^1[/math]) carriers:

[math]Q^2 = \kappa^2 + \beta^2 = 3\beta^2 = \frac{3}{2}\kappa^2[/math]

This enforces a strict structural bifurcation:

1. Galaxies are continuous potential fields (fluids/structure) on the 2D carrier. Their coupling weight is

[math]\Omega_{pot} = \frac{\kappa^2}{Q^2} = \frac{2}{3}[/math].

2. Wide Binaries are discrete point-mass orbits on the 1D kinematic carrier. Their coupling weight is

[math]\Omega_{kin} = \frac{\beta^2}{Q^2} = \frac{1}{3}[/math].

https://willrg.com/documents/WILL_RG_II.pdf#def:rel_weight

This mathematically derives two distinct anomalous acceleration scales, predicting exactly why MOND's single parameterization fails:

* Galactic Scale: [math]a_\kappa = \Omega_{pot} \cdot a_{Mach} = \frac{cH_0}{3\pi} \approx 0.70 \times 10^{-10} \, \mathrm{m/s^2}[/math]. This naturally matches the flat rotation curves in the SPARC dataset (RMSE [math]\approx[/math] 0.065 dex) without Dark Matter.
https://willrg.com/documents/WILL_RG_II.pdf#sec:models_comparison


sparc_rar_comparison.png

https://willrg.com/documents/WILL_RG_II.pdf#fig:rar

* Binary Scale: [math]a_\beta = \Omega_{kin} \cdot a_{Mach} = \frac{cH_0}{6\pi} \approx 0.35 \times 10^{-10} \, \mathrm{m/s^2}[/math]. This yields a gravity boost factor of [math]\approx 1.47[/math], perfectly matching the recent Gaia DR3 wide binary anomaly (Chae 2023, empirical boost [math]\approx 1.45 - 1.55[/math]).
Wide_binary_Chae_2023.png
https://willrg.com/documents/WILL_RG_II.pdf#sec:wide-binary
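The two quoted acceleration scales follow directly from weighting a_Mach by the coupling fractions above (a sketch; H0 = 68 km/s/Mpc is again an assumed illustrative value):

```python
import math

c = 299_792_458.0
MPC = 3.0857e22
H0 = 68.0 * 1000.0 / MPC            # assumed illustrative value, in 1/s

a_mach = c * H0 / (2.0 * math.pi)   # fundamental scale c*H0/(2*pi)

# Coupling weights from Q^2 = kappa^2 + beta^2 with kappa^2 = 2*beta^2
omega_pot = 2.0 / 3.0               # galaxies, S^2 carrier
omega_kin = 1.0 / 3.0               # wide binaries, S^1 carrier

a_kappa = omega_pot * a_mach        # = c*H0/(3*pi), ~0.70e-10 m/s^2
a_beta = omega_kin * a_mach         # = c*H0/(6*pi), ~0.35e-10 m/s^2

assert abs(a_kappa - c * H0 / (3.0 * math.pi)) < 1e-20
assert abs(a_beta - c * H0 / (6.0 * math.pi)) < 1e-20
```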



In conclusion:
Space is not a void; it is a resonant structural capacity.

Cosmological predictions chain: https://willrg.com/documents/WILL_RG_II.pdf#tab:cosmology_comparison

Edited by Anton Rize

4 hours ago, Mordred said:

using energy to describe changes of angles does not conform to the above definition for energy not without causation due to some application of force.

To decode what he is doing... we have to look at his own language... to him, it appears, everything to do with energy boils down to transformations; further definitions are artefacts. He is working with minimals.

4 hours ago, Mordred said:

For that matter many ppl are not aware that AI uses a weighted sum of patterns in answers to a question and employs tensors of the weighted averages to determine the most common answer to a question asked.

The way information is processed in an LLM's inner/deep layers is sometimes not well understood. There is a possibility it forms structures/models that are very simple after sieving a lot of information; the residues form formulations like the one used by the author... I am sensing the danger that, at a future date when everything is connected to AI, the pressure on these LLMs to give correct answers will lead to corruption of data received through spectroscopy and other methods connected to computers... anyway, just a worry.

16 hours ago, Anton Rize said:

There is no room for reverse engineering

What's your physics background? You haven't indicated it in your profile... From the way you are handling some terminology, you seem to have a professional background in physics; if not, you have had intensive training in physics... which one is it? Not to be personal, but such information can help us better understand your perspective.

  • Author

10 hours ago, MJ kihara said:

Energy is the ability to do work

Are you happy with this definition? It seems like a pedagogical placeholder, not an ontological definition. It is strictly circular: Work is mathematically defined as a mechanical transfer of Energy, while Energy is defined as the capacity to do Work. It defines the entity purely through its own transfer mechanism.

Emmy Noether's theorem: it elegantly links energy conservation to time-translation symmetry. However, ontologically, it relies on an assumed epistemic loop. It postulates "Time" as a pre-existing, independent background container (a fundamental symmetry) in order to derive "Energy" as a conserved quantity.

If we strictly adhere to relationalism, time cannot be an independent external background.

This is why I had to derive a rigorous, non-circular definition. And it's not a postulate or an axiom. It holds only until it doesn't.

Energy is the relational measure of difference between possible states. It is not an intrinsic property of an object, nor a magical fluid that "does work". It is a comparative structure between an observer and an observed state.

3 hours ago, MJ kihara said:

To decode what he is doing ... We have to look at his own language...to him it appears everything to do with/that is energy boils down to transformations further definitions are artefacts,he is working with minimals.

One of the problems of relying too much on minimalization is that you tend to overlook critical details. For example (and I honestly hope the OP has considered the following and at some point made the necessary corrections), there is the vastness of our universe with regard to its expansion history.

Take what is his principal formula:

\[\beta=\frac{v}{c}\]

It works great for near-field measurements; however, once you hit the Hubble horizon, recession velocity becomes greater than c. There is a means to make corrections for this, but SR cannot be used to describe the distance relations of our universe, not in its entirety. GR also has to account for the expansion history. Expansion itself affects any observational methodology.

It affects redshift, luminosity distance, angular diameter distance, Tully Fisher relations and proper time calculations. As some key examples

Angular diameter distance, for example, has a rather unexpected side effect caused by expansion, regarding the time of emission versus the time the signal is received. The apparent diameter of the object being measured increases the further away you go; at redshift 4.9 this leads to a decrease in the distance calculations past 4.9, unless the necessary adjustments are made.

Here's a quick list of adjustments needed for expansion.

The Hubble parameter can be written as

\[H=\frac{d}{dt}\ln\left(\frac{a(t)}{a_0}\right)=\frac{d}{dt}\ln\left(\frac{1}{1+z}\right)=\frac{-1}{1+z}\frac{dz}{dt}\]

Lookback time is given as

\[t=\int_0^{a(t)}\frac{da'}{\dot{a}'}\]

\[\frac{dt}{dz}=H_0^{-1}\,\frac{-1}{1+z}\,\frac{1}{[\Omega^0_{rad}(1+z)^4+\Omega^0_m(1+z)^3+\Omega^0_k(1+z)^2+\Omega^0_\Lambda]^{1/2}}\]

\[t_0-t=H_0^{-1}\int^z_0\frac{dz'}{(1+z')[\Omega^0_{rad}(1+z')^4+\Omega^0_m(1+z')^3+\Omega^0_k(1+z')^2+\Omega^0_\Lambda]^{1/2}}\]

Second-order luminosity distance, full integral:

\[D_L(z)=(1+z)\cdot D_M(z)\]

where \(D_M(z)\) is the transverse comoving distance.

For a universe with arbitrary curvature:

\[d_L(z)=\frac{c}{H_0}\,\frac{(1+z)}{\sqrt{|\Omega_k|}}\,\mathrm{sinn}\!\left(\sqrt{|\Omega_k|}\int^z_0\frac{dz'}{E(z')}\right)\]

where sinn(x) is defined as sin(x) when \(\Omega_k<0\), sinh(x) when \(\Omega_k>0\), and x when \(\Omega_k=0\).

Expansion function (dimensionless Hubble parameter):

\[E(z)=\sqrt{\Omega_r(1+z)^4+\Omega_m(1+z)^3+\Omega_k(1+z)^2+\Omega_\Lambda}\]

At modern times radiation is negligible, and for k=0 this simplifies to

\[D_L(z)=\frac{c(1+z)}{H_0}\int^z_0 \frac{dz'}{\sqrt{\Omega_M(1+z')^3+\Omega_\Lambda}}\]

Angular diameter distance reciprocity relation:

\[D_A(z)=\frac{D_L(z)}{(1+z)^2}\]

Angular diameter distance integral:

\[d_A(z)=\frac{c}{H_0\sqrt{|\Omega_{k,0}|}\,(1+z)} \cdot S_k\!\left[H_0\sqrt{|\Omega_{k,0}|} \int^z_0 \frac{dz'}{H(z')}\right]\]

\[S_k(x)=\begin{cases}\sin(x)&k>0\\x&k=0\\\sinh(x)&k<0\end{cases}\]

Comoving distance:

\[D_C =\frac{c}{H_0} \int^z_0 \frac{dz'}{E(z')}\]

The relation underlying all of the above is

\[E(z)=[\Omega_R(1+z)^4+\Omega_m(1+z)^3+\Omega_k(1+z)^2+\Omega_\Lambda]^{1/2}\]

All the second-order adjustments above use this relation for measurements beyond the Hubble horizon. It factors in the expansion history of each equation of state to determine what measurement adjustments are needed.

The real beauty is that the above equations will work regardless of any spacetime curvature.
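The flat-case luminosity-distance integral above is straightforward to evaluate numerically; here is a sketch with assumed parameters H0 = 70 km/s/Mpc, Ωm = 0.3, ΩΛ = 0.7:

```python
import math

def lum_dist(z, H0=70.0, Om=0.3, OL=0.7, n=10_000):
    """D_L(z) in Mpc for a flat universe: D_L = (c(1+z)/H0) * int_0^z dz'/E(z')."""
    c_km_s = 299_792.458
    E = lambda zp: math.sqrt(Om * (1.0 + zp)**3 + OL)
    dz = z / n
    # trapezoidal integration of 1/E(z') from 0 to z
    integral = sum((1.0 / E(i * dz) + 1.0 / E((i + 1) * dz)) * 0.5 * dz for i in range(n))
    return c_km_s * (1.0 + z) / H0 * integral

DL1 = lum_dist(1.0)            # ~6600 Mpc for these parameters
DA1 = DL1 / (1.0 + 1.0)**2     # reciprocity relation D_A = D_L/(1+z)^2, ~1650 Mpc
assert 6400 < DL1 < 6800
assert 1600 < DA1 < 1700
```

The same integrand rescaled by the E(z) relation above handles the open and closed cases via the sinn/S_k wrapper.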

Now let's try a down-to-Earth example: particle accelerators. If one wishes to develop equations of motion for particles being accelerated, it doesn't make sense to use the same coordinates used in cosmology applications. This is where a new coordinate system was designed using the tools of GR as well as those of geometry, in particular by applying the Frenet-Serret formulas. Without going through all the derivatives, one arrives at the Lagrangian for the equations of motion.

Curvilinear-coordinate beam-dynamics Lagrangian:

\[\mathcal{L}=-mc^2\sqrt{1-\frac{1}{c^2}(\dot{x}^2+\dot{y}^2+h^2\dot{z}^2)}+e(\dot{x}A_x+\dot{y}A_y+h\dot{z}A_z)-e\phi\]

part of the derivatives used in that last expression involves Floquet theory. "In the quantum world, where the linearity of the Schrodinger equation is guaranteed from the start, the Floquet theory applies whenever the Hamiltonian governing the system is time-periodiodic.

Quote from the article below:

https://www.ggi.infn.it/sft/SFT_2019/LectureNotes/Santoro.pdf

The real beauty of the Frenet-Serret equations is that they are well designed for helical motion.
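To illustrate that point, here is a quick numerical sketch of the Frenet-Serret invariants for a helix r(t) = (a cos t, a sin t, bt); the radius and pitch values are arbitrary, and the closed-form results κ = a/(a²+b²), τ = b/(a²+b²) serve as the check:

```python
import numpy as np

a, b = 2.0, 1.0  # helix radius and pitch (arbitrary illustrative values)

def r(t):
    """Helix r(t) = (a*cos t, a*sin t, b*t)."""
    return np.array([a * np.cos(t), a * np.sin(t), b * t])

def deriv(f, t, h=1e-3):
    """Central-difference derivative of a vector-valued function."""
    return (f(t + h) - f(t - h)) / (2 * h)

t0 = 0.7
r1 = deriv(r, t0)                                           # velocity
r2 = deriv(lambda t: deriv(r, t), t0)                       # acceleration
r3 = deriv(lambda t: deriv(lambda s: deriv(r, s), t), t0)   # jerk

cross = np.cross(r1, r2)
curvature = np.linalg.norm(cross) / np.linalg.norm(r1)**3
torsion = np.dot(cross, r3) / np.linalg.norm(cross)**2

# Closed-form helix values: kappa = a/(a^2+b^2) = 0.4, tau = b/(a^2+b^2) = 0.2
print(curvature, torsion)
```

A constant curvature and constant nonzero torsion is exactly the Frenet-Serret signature of a helix.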

The above are all examples of where geometric relations are incredibly useful in problem solving, so ignoring geometry is akin to throwing away one of the more versatile tools in a physicist's tool pouch.

Just now, Anton Rize said:

Are you happy with this definition?

I for one am absolutely happy with that definition; it applies at all levels of physics and has been incredibly well tested throughout the entirety of physics.

lol, to change the definition of energy would literally mean a complete rewrite of all physics as well as the related engineering equations

  • Author
11 hours ago, MJ kihara said:

What is the relational framework?... observer A has a co-ordinate system the axis not being X,Y but S^1 and S^2

Every observer has his own reference frame with axis S^1 and S^2.

It's a frame of reference. I'm just adding "relational" to emphasise that it's not some arbitrary placeholder on preexisting X-space and Y-time coordinate axes.
In a relational framework there can be no preexisting space or time coordinate axes. Again, that would violate the core principles.

Instead we derive the relational carriers as inevitable consequences of the core methodological principles.
S^1 carries the protocol of "change conservation" (energy) in the 1-DOF domain:
[math]\beta^2+\beta_Y^2=1,\qquad \beta=\frac{v}{c},\qquad \beta_Y=\sqrt{1-(v/c)^2}[/math]
[math]E_X^2+E_Y^2=E^2,\qquad E_X=E\beta,\qquad E_Y=E\beta_Y[/math]
[math]\beta=0 \Rightarrow \beta_Y=1[/math] (the invariant rest-frame state relative to yourself), hence [math]E_Y\equiv E_0[/math]:
[math]E\beta_Y = E_0 \;\Longrightarrow\; E = \frac{E_0}{\beta_Y},\qquad \gamma = \frac{1}{\beta_Y}[/math]
[math]E^{2} = \Bigl(\frac{\beta}{\beta_Y}E_0\Bigr)^{2} + E_0^{2} = \bigl(\cot(\theta_{1})\,E_{0}\bigr)^{2} + E_{0}^{2} \equiv (pc)^{2} + (mc^{2})^{2}[/math]

https://willrg.com/documents/WILL_RG_I.pdf#thm:restenergy
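For concreteness, here is a small numerical check of the closure chain above as I read it (the β and E0 values are arbitrary):

```python
import math

E0 = 1.0  # rest energy, arbitrary units
for beta in (0.0, 0.3, 0.6, 0.9):
    beta_y = math.sqrt(1 - beta**2)   # closure: beta^2 + beta_Y^2 = 1
    E = E0 / beta_y                   # E = E0 / beta_Y  (gamma = 1/beta_Y)
    pc = (beta / beta_y) * E0         # momentum projection (beta/beta_Y)*E0
    # Pythagorean closure: E^2 = (pc)^2 + E0^2, i.e. (pc)^2 + (mc^2)^2
    assert abs(E**2 - (pc**2 + E0**2)) < 1e-12
print("energy closure holds for all tested beta")
```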

And the same goes for S^2, but I can't be bothered copy-pasting from one window into another any more when you can just simply click the link: https://willrg.com/documents/WILL_RG_I.pdf#sec:geometric_composition

image.png

image.png
and when we combine the two amplitudes into one relational circle we get the relational frame:

image.png

11 hours ago, MJ kihara said:

Every observer has his own reference frame with axis S^1 and S^2.

Yes. You are the center of your relational carriers.

11 hours ago, MJ kihara said:

Where do this parameter fit? Is it between the observers or is it within the observer's circle?

Both. It's your measuring tool: https://willrg.com/documents/WILL_RG_I.pdf#sec:kappabeta - you can compare something only in relation to yourself.

12 hours ago, MJ kihara said:

Subscript X and Y stand for what?

Orthogonality.

12 hours ago, MJ kihara said:

S^1- kinematic relations and S^2-gravitational relations....Hamiltonian equals kinetic energy plus potential energy...what you are doing is splitting the Hamiltonian and distributing it's component into axis.

Great observation! But that's not what I'm doing. I highly recommend reading this and at least the two following sections: https://willrg.com/documents/WILL_RG_I.pdf#eq:will_minkowski_energy
By the end of it I derive this daring statement: "Mathematical complexity is the symptom of philosophical negligence."
I have to admit I'm proud of this result.

12 hours ago, MJ kihara said:

Q is equivalent to the spacetime interval which is, if I am not wrong, the same for all observers and therefore invariant.

That's partially why I was asking about prior knowledge.

Is it? Can you derive it? I'll test it when I get a chance. If you're right it would be hilarious.

10 hours ago, Mordred said:

Judging from your posts it's obvious to me you're not recognizing the true power and versatility of geometry. Anytime I see a new theory that a member is developing I always point out the need for a geometry

The second most common recommendation I mention is "comparison against existing models, theories, or methodologies." Comparison isn't strictly accuracy but also includes flexibility of application and ease of use (your previously mentioned Occam's razor). Now, in order to help others develop these skills I often have to take the opposite stance in discussions. So for this post I am going to use this technique by detailing the power of geometry and the usefulness of mappings. I will endeavor to keep this as simple as possible. I mentioned before that the main goal isn't just making accurate predictions; that is fundamentally just a confirmation that the mathematics you're employing has some measure of validity. I will be playing the devil's advocate in future posts, just so you are forewarned and are now aware of what the reason is. Hopefully nothing I state gets misconstrued on any personal level (all too common in discussions I've had with other members, banned or otherwise).

In my professional real-world experience the single most important tool in all the work I've ever done is graphs. If you can't interpret a graph of results from measurements you will never get work as a physicist, plain and simple; no exceptions in my experience. It doesn't make any difference what field of physics you're applying. A graph, however, is not restricted to (x, y, z, etc.); any parameter can be used as a replacement. Manifolds in particular employ the fact that x, y, and z are convenient labels, nothing more (a coordinate basis). One can arbitrarily graph the number of apples compared to the number of oranges grown on a time-dependent graph where coordinates serve zero practical application. Yes, I have noted some of your articles included a few graph comparisons. We're comparing flexibility of methodology, specifically spacetime vs strictly energy.

For the mathematics below I will restrict myself to a simple Newtonian 3D geometry to demonstrate the flexibility. For the start I will use basic Euclidean geometry

\[ds^2=dx^2+dy^2+dz^2\]

Nothing fancy about that; as you stated, it's simply describing our container. Yet that container has incredible usage as an aid to visualize, and in many ways simplify, an incredible range of mathematical relations. However you don't need to stick to those geometric relations. I could very well be comparing apples, oranges, and grapes. LOL, we all know of graphs that have nothing to do with geometry... or at least should. That being said, let's play with the above geometry. We all know you can assign any scalar value to any coordinate; that's trivial.

One overlooked flexibility is the "visual aid". Whether or not the graph is coordinate-basis is irrelevant. The true power comes in when you look for specific relations of distribution. From those distributions one can find patterns, and one can map those patterns upon the same graph. After all, that is the whole point of a graph in the first place. The measure of a good mapping of any graph, coordinate-basis or not, is how well one can develop mathematical relations of change and rate of change. This obviously is where differential geometry comes into play. It doesn't matter if your differential geometry uses calculus of variations, stochastic (probability) methods, or differentials. These mathematics are obviously not restricted to just physics. They apply to any engineering trade as well, and if you think about it, to any programmer.

Now let's take that geometry above along with a scalar distribution of values. They could very well be just those energy quantities you have in your model development.

Let's say you see a pattern that all quantities are increasing in distance, or in any other value not involving distance, from each other. It could very well be that they all have an identical increase in value over some time (rate of change). As they are all identical, they are all symmetric in rate of change. So I can readily describe this by one parameter: a scale factor, or more accurately a constant of proportionality, common symbol "a". But I also want a time dependency for rate:

\[ds^2=a(\tau)^2(dx^2+dy^2+dz^2)\]

If however I notice a pattern of rotation about any principal axis (a reference), one can simply add a new term of proportionality to represent that rotation, say for example in the above

\[ds^2=a(\tau)^2(dx^2+\omega\,dy^2+dz^2)\]

or alternately apply a vector field using z=f(x,y).

If I wish to embed some state with determined boundaries, such as a hyperbolic paraboloid

\[\frac{x^2}{a^2}-\frac{y^2}{b^2}=\frac{z}{c}\]

I have already established a means of point-by-point translations to embed any number of geometric objects at any specific location on the original mappings. We should all recognize this makes symmetry relations far more flexible, in so far as spatial and rotational translations of those embedded systems or states, and even symmetry rotations and translations of the original mappings.

The above describes the flexibility of mappings via coordinates or any other parameter-assigned manifold. With spacetime this also makes parallel-transport operations much more flexible to describe mathematically. A perfect example is developing affine connections pertaining to path integrals, such as the geodesic equations.

I simply do not see that flexibility in any of your work. I would honestly hate to try any curve-fitting operation, or finding other mathematical relations describing embedded systems or states at a given location with your methodology, let alone performing any symmetry operations.

Yes, I know you have a graphical simulation of an orbiting body. However, in order to program that simulation you would have eventually relied on some form of coordinate system simply to employ the instruction set of the software you used. For that matter, many people are not aware that AI uses a weighted sum of patterns in answers to a question and employs tensors of the weighted averages to determine the most common answer to a question asked.

Thank you for the excellent illustration of the categorical difference in our methodologies.

You highlight the flexibility of manifolds and coordinate systems (e.g., adding [math]a(\tau)[/math] for expansion or [math]\omega[/math] for rotation). I completely agree: for engineering calculations, plotting data, and software rendering, Cartesian grids and differential geometry are incredibly powerful descriptive tools.

However, physics is not a software simulation, and a mathematical graph is not a physical territory. Conflating a descriptive mathematical calculator (the coordinate grid) with the physical generator (the Universe) is the exact root of Ontological Bloat.

1. Flexibility as Epistemic Debt (Descriptive vs. Generative Physics)

The "flexibility" you praise is precisely what WILL RG deliberately eliminates. In standard Descriptive Physics, if a model doesn't fit the observation, the mathematical flexibility allows you to simply add a new parameter (a scale factor, an epicycle, dark matter) to force the curve to fit.

WILL RG is Generative Physics. The lack of "flexibility" is a strict epistemic constraint, not a limitation. Because the relational carriers [math]S^1[/math] and [math]S^2[/math] are geometrically closed, the system is entirely rigid. It physically prohibits the introduction of arbitrary tuning knobs. If a phenomenon cannot be derived directly from the algebraic closure of the relational projections, it does not exist. Likewise, if the theory does not provide accurate predictions, it is wrong, and there is nothing to tune to fix it. That's what I call science: you're right or you're wrong; there's nothing in between.

2. The Simulation Fallacy

You pointed out that my orbital simulation must rely on a coordinate system and an instruction set. This perfectly illustrates the disconnect.

To render a visual circle on a flat 2D computer monitor for human eyes, the software must indeed use [math]x, y[/math] pixels and coordinate matrices. But the physics engine driving that simulation - determining the exact eccentricity of the orbit - operates strictly on the dimensionless algebraic ratio of the [math]\beta[/math] and [math]\kappa[/math] projections, requiring absolutely no Newtonian vectors. My GitHub is publicly open and you can see the code yourself: https://github.com/AntonRize/WILL

The Universe does not have a monitor to render to, and it does not need a coordinate basis to compute its own relational state. We are not disagreeing on whether graphs are useful for human engineers. We are disagreeing on whether the Universe uses them to function.

10 hours ago, MJ kihara said:

Observer A has frame of reference with axis S^1 and S^2 while B has his own frame of reference with axis S^1 and S^2...your relational frame seems to be a universal frame of reference with axis beta and kappa

Yep, it's strikingly simple.

10 hours ago, MJ kihara said:

The above two statements to me is not clear...

Can you elaborate please? Is it the self-centering that seems unclear? This is very valuable information for me, because deriving the model and effectively delivering it are two different skills. It's hard for me to see it from the outside like you do.

5 hours ago, MJ kihara said:

To decode what he is doing ... we have to look at his own language... to him it appears everything to do with energy boils down to transformations; further definitions are artefacts. He is working with minimals.

Yes! Thank you!

  • Author
5 hours ago, MJ kihara said:

What's your physics background? You haven't indicated it in your profile... the way you are handling some terminologies, you seem to have a professional background in physics... if not, you have had an intensive study of physics... which one is it? Not to be personal, but such information can help us better understand your perspective.

Nothing personal at all, I am fully open about it. I do not have a formal academic degree or a traditional background in physics.

For many years, my professional life was completely unrelated to science. However, I have always had a deep, persistent fascination with ontology and the philosophical foundations of physics. About 3 years ago, I decided to completely restructure my life. I stepped away from my previous career paths to dedicate my time exclusively to independent research and my other passion, music.

You noticed my terminology is somewhat unconventional. That is exactly because I am self-taught and approach these problems strictly from a philosophical and relational perspective first, rather than a standard mathematical one. Because I don't carry the "legacy habits" of standard academic training, I was forced to build this framework from the ground up, demanding strict epistemic hygiene at every step.

To compensate for my lack of formal mathematical training, I rely heavily on modern computational tools (like Desmos and Python) to rigorously test and verify the geometric algebra of my models.

It is an unconventional path - studying, developing, and jamming like a nihilistic monk - but it is the most meaningful period of my life, and it allows me to look at these foundational problems without being constrained by standard paradigms.

Edited by Anton Rize

1 hour ago, Anton Rize said:

Thank you for the excellent illustration of the categorical difference in our methodologies.

You highlight the flexibility of manifolds and coordinate systems (e.g., adding a(τ) for expansion or ω for rotation). I completely agree: for engineering calculations, plotting data, and software rendering, Cartesian grids and differential geometry are incredibly powerful descriptive tools.

However, physics is not a software simulation, and a mathematical graph is not a physical territory. Conflating a descriptive mathematical calculator (the coordinate grid) with the physical generator (the Universe) is the exact root of Ontological Bloat.

If you believe I was only describing the usefulness of geometry for engineering and plotting, then I obviously did not explain my stance well enough. It is also a powerful tool to make predictions, model development, calibration of measurement equipment, etc.

I couldn't even do my current job of calibrating the telescope spectrographic equipment at the University of the Caribou, which is roughly the same size as the one used by Hubble. That obviously involves fully understanding how light behaves and how to employ gratings for frequency separation prior to the collimator.

Lmao, I even use geometry on MRIs that I've been involved in calibrating, where I also need to identify and track diffraction angles.

Nor could I have written my dissertation way back when the only decent dataset I had to work with was COBE, with regards to BAO measurements looking for signatures of quintessence at that time. My dissertation was long ago shown incorrect when the WMAP findings got released (didn't have sufficient e-folds). Not a biggie; part of science.

Tell me, how do you think the vast majority of all physics equations got developed if not through the use of geometric relations?

lol, for that matter your methodology includes geometric relations, one example being orthogonality. The question I really have is why you would feel mappings are ontologically wrong. It's a very versatile tool used in everyday industries, not exclusive to physics. LOL, every graph you have posted here is a form of mapping.

Consider this then: every varying relation can be graphed. Mappings are inevitable as a result. Spacetime is a flexible tool for any mapping translations involving a volume with varying time as part of its mappings. However, if you have one relationship to another that is varying over some other value, that too can be graphed, with or without any coordinate basis, and subsequently you have a form of mapping.

Notice the above applies to all mathematics.

Edited by Mordred

  • Author

I'll comment on this one first before addressing the long one before it.

6 hours ago, Mordred said:

If you believe I was only describing the usefulness of geometry for engineering and plotting, then I obviously did not explain my stance well enough. It is also a powerful tool to make predictions, model development, calibration of measurement equipment, etc.

I couldn't even do my current job of calibrating the telescope spectrographic equipment at the University of the Caribou, which is roughly the same size as the one used by Hubble. That obviously involves fully understanding how light behaves and how to employ gratings for frequency separation prior to the collimator.

Lmao, I even use geometry on MRIs that I've been involved in calibrating, where I also need to identify and track diffraction angles.

Nor could I have written my dissertation way back when the only decent dataset I had to work with was COBE, with regards to BAO measurements looking for signatures of quintessence at that time. My dissertation was long ago shown incorrect when the WMAP findings got released (didn't have sufficient e-folds). Not a biggie; part of science.

Tell me, how do you think the vast majority of all physics equations got developed if not through the use of geometric relations?

lol, for that matter your methodology includes geometric relations, one example being orthogonality. The question I really have is why you would feel mappings are ontologically wrong. It's a very versatile tool used in everyday industries, not exclusive to physics. LOL, every graph you have posted here is a form of mapping.

Consider this then: every varying relation can be graphed. Mappings are inevitable as a result. Spacetime is a flexible tool for any mapping translations involving a volume with varying time as part of its mappings. However, if you have one relationship to another that is varying over some other value, that too can be graphed, with or without any coordinate basis, and subsequently you have a form of mapping.

Notice the above applies to all mathematics.

First of all, I want to express my genuine respect for your background. Calibrating telescope spectrographs and MRI equipment is serious, foundational work. Also, casually mentioning that your dissertation on quintessence was invalidated by WMAP data - and accepting it simply as "part of science" - shows a level of scientific integrity that I deeply admire.

Let me address your question about mappings, and then share something directly related to your work with spectrographs.

1. Why Mapping is Epistemologically Essential, but Ontologically "Wrong"

You used the perfect example: the MRI. An MRI machine, as far as I know, does not measure physical [math]x, y, z[/math] coordinates inside the brain. It measures proton relaxation times (pure energy-state differences) in varying magnetic gradients. The software then uses Fourier transforms to map those energy states onto a 3D Cartesian grid on a monitor so the doctor can understand it.

Is the map useful? Absolutely. It saves lives.

But does the brain use a Cartesian coordinate grid to function? No.

This is the difference between Epistemology (how we describe the world) and Ontology (how the world actually operates). WILL RG is not against geometry - it is Relational Geometry. It simply states that the universe operates directly on the energy states (like the raw MRI data), while the coordinate grid (the 4D map) is just a human computational interface.
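That frequency-to-image mapping can be sketched in a few lines; this is schematic only (real MRI reconstruction involves gradient encoding, coil sensitivities, and heavy filtering), but it shows that the acquisition lives in frequency space while the Cartesian grid is just the display step:

```python
import numpy as np

# Toy "object": a square of signal in a 64x64 field of view
obj = np.zeros((64, 64))
obj[24:40, 24:40] = 1.0

# Acquisition measures k-space (spatial-frequency) samples, not x,y pixels
kspace = np.fft.fft2(obj)

# The scanner software maps the frequency data back onto a Cartesian grid
image = np.fft.ifft2(kspace).real

assert np.allclose(image, obj, atol=1e-10)
print("reconstruction matches the object")
```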

2. The Spectrographic Blind Test (Solving the Degeneracy)

Because you work with frequency separation and spectrographs, you know exactly how frustrating the inclination degeneracy is. In classical Keplerian mechanics, the amplitude of a radial velocity curve is tied to [math]K \propto \beta \sin(i)[/math]. It is mathematically impossible to separate the true orbital velocity [math]\beta[/math] from the inclination [math]i[/math] using strictly 1D spectroscopic data.

However, because WILL R.O.M. is so rigidly constrained (the "lack of flexibility" we discussed), it inherently isolates a second-order systemic invariant [math]Z_{sys}=\frac{1}{\sqrt{1-\kappa^{2}}}\,\frac{1}{\sqrt{1-\beta^{2}}}[/math] (the product of gravitational redshift and transverse Doppler shift) from the redshift/Doppler interaction that is independent of the line of sight.
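To make the sin(i) point concrete, here is a small numeric illustration using the symbols from the posts (the β and κ² values are arbitrary):

```python
import math

def z_grav(kappa_sq):
    """Gravitational factor 1/sqrt(1 - kappa^2), with kappa^2 = Rs/r."""
    return 1.0 / math.sqrt(1.0 - kappa_sq)

def z_kin(beta):
    """Transverse-Doppler factor 1/sqrt(1 - beta^2)."""
    return 1.0 / math.sqrt(1.0 - beta**2)

beta, kappa_sq = 0.1, 0.02        # arbitrary orbital speed and Rs/r
Z_sys = z_grav(kappa_sq) * z_kin(beta)

# The classical RV amplitude K ~ beta*sin(i) varies with inclination i ...
for i_deg in (10.0, 45.0, 80.0):
    K = beta * math.sin(math.radians(i_deg))
    # ... while Z_sys, built only from beta and kappa, never references i
    print(f"i = {i_deg:5.1f} deg   K = {K:.4f}   Z_sys = {Z_sys:.6f}")
```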

I just completed a rigorous, randomized blind test of this (the one that all of you ignored when I asked for datasets).

Script A generated synthetic 1D observational data for highly relativistic orbits using standard GR 1PN approximations.

Script B (the R.O.M. extractor) received ONLY the raw 1D arrays (no mass, no distance, no geometry) and was tasked with rebuilding the 3D orbit using purely relational algebraic closure.

Here are the results from two of the trickiest extreme angles (nearly face-on and nearly edge-on):

=== STRICT BLIND TEST 1 ===

TRUE PARAMETERS (Hidden from extractor):

Period (P): 15.200 yrs

Eccentricity (e): 0.86000

Argument of Periapsis (w): 105.00 deg

Inclination (i): 10.00 deg

Background Drift (vz0): 18.50 km/s

R.O.M. EXTRACTION RESULTS:

Period (P): 15.197 years

Eccentricity (e): 0.86163

Argument of Periapsis (w): 103.91 deg

Extracted Inclination (i): 10.95 deg

Background Drift (v_z0): 21.73 km/s

Precession Rate: 0.1520 deg / orbit

Fit Quality (χ²): 224.24

=== STRICT BLIND TEST 2 ===

TRUE PARAMETERS (Hidden from extractor):

Period (P): 15.200 yrs

Eccentricity (e): 0.90000

Argument of Periapsis (w): 70.00 deg

Inclination (i): 168.00 deg

Background Drift (vz0): -16.50 km/s

R.O.M. EXTRACTION RESULTS:

Period (P): 15.203 years

Eccentricity (e): 0.90011

Argument of Periapsis (w): 67.97 deg

Extracted Inclination (i): 165.61 deg

Background Drift (v_z0): -10.96 km/s

Precession Rate: 0.1799 deg / orbit

Fit Quality (χ²): 232.42

This shouldn't be possible in standard mechanics without astrometry. But the R.O.M. algorithm successfully extracted the 3D spatial geometry (including inclination and precession) strictly from the algebraic relations of the 1D light signal.

Since you deal with real spectrographic data, I wanted to share this with you. I am not asking you to take my word for it. If you have access to any anonymized, raw 1D RV/redshift datasets for highly relativistic binaries, or if you could synthesise them, I would love to run them through the script and see what the geometry reveals. Let the math speak for itself.
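For anyone wanting to reproduce this kind of 1D input, a standard Keplerian radial-velocity generator looks like the sketch below. This is my own minimal version, not the author's Script A, and the parameter values are arbitrary illustrations:

```python
import numpy as np

def kepler_E(M, e, iters=60):
    """Solve Kepler's equation E - e*sin(E) = M by bisection.

    f(E) = E - e*sin(E) - M is monotonically increasing for e < 1, with
    f(0) <= 0 and f(2*pi) >= 0 for M in [0, 2*pi), so bisection converges
    even for highly eccentric orbits.
    """
    lo = np.zeros_like(M)
    hi = np.full_like(M, 2 * np.pi)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        f = mid - e * np.sin(mid) - M
        hi = np.where(f > 0, mid, hi)
        lo = np.where(f > 0, lo, mid)
    return 0.5 * (lo + hi)

def radial_velocity(t, P, e, omega, K, v0):
    """1D RV curve: v_r(t) = K*(cos(nu + omega) + e*cos(omega)) + v0."""
    M = (2 * np.pi * t / P) % (2 * np.pi)                 # mean anomaly
    E = kepler_E(M, e)                                    # eccentric anomaly
    nu = 2 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),   # true anomaly
                        np.sqrt(1 - e) * np.cos(E / 2))
    return K * (np.cos(nu + omega) + e * np.cos(omega)) + v0

# Arbitrary illustrative parameters (not the thread's hidden test values)
t = np.linspace(0.0, 15.2, 2000)                          # years
rv = radial_velocity(t, P=15.2, e=0.86, omega=np.radians(105.0),
                     K=120.0, v0=18.5)                    # km/s
```

This produces exactly the kind of [Time, Velocity] array described above; adding relativistic 1PN corrections and noise would be the next step for a realistic test set.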

Edited by Anton Rize

The software I currently have access to is specifically designed to utilize a specific set of theories pertaining to spectrographic gratings. It is one of those rather specific application theories that is seldom heard of: in this particular case, Kogelnik's coupled-wave theory, combined with the more commonly known Bragg's law as applicable to Grotrian diagrams.

Though the real challenge isn't simply the grating but the noise reduction due to atmospheric conditions, as well as the peculiar motion of the observatory (dipole anisotropies etc.) and filtering intervening plasma.

11 hours ago, Mordred said:

Consider this then: every varying relation can be graphed. Mappings are inevitable as a result. Spacetime is a flexible tool for any mapping translations involving a volume with varying time as part of its mappings. However, if you have one relationship to another that is varying over some other value, that too can be graphed, with or without any coordinate basis, and subsequently you have a form of mapping.

Notice the above applies to all mathematics.

I take it you didn't notice this portion of my last post.

In essence every equation in mathematics is a form of mapping, as they can all be graphed.

So I was curious how your philosophy or ontology would address that statement.

This statement from your last post isn't quite accurate: "It is mathematically impossible to separate the true orbital velocity β from the inclination i using strictly 1D spectroscopic data."

It's true I wouldn't use strictly one-dimensional mathematics (dimensionality being the number of effective degrees of freedom). However, using time-elapsed spectrography it is rather easy to determine the true orbital velocity.

Doppler-shift effects on spectrograph datasets are easily identifiable. The challenge is more causation, specifically separating gravitational and cosmological redshift from Doppler redshift.

However, this is where range calibrations are commonly used; those types of calibrations inherently get incorporated into the filtering software algorithms.

  • Author
10 minutes ago, Mordred said:

The software I currently have access to is specifically designed to utilize a specific set of theories pertaining to spectrographic gratings. It is one of those rather specific application theories that is seldom heard of: in this particular case, Kogelnik's coupled-wave theory, combined with the more commonly known Bragg's law as applicable to Grotrian diagrams.

Though the real challenge isn't simply the grating but the noise reduction due to atmospheric conditions, as well as the peculiar motion of the observatory (dipole anisotropies etc.) and filtering intervening plasma.

I completely understand. The data-reduction pipeline is basically dark magic, and I have massive respect for the people who actually clean the signal from the noise.

So by the time what I'm calling "raw data" reaches people like me in a neat CSV format, 99% of the real physical struggle has already been handled by you and your colleagues.

As far as I understand, solving the degeneracy problem is a big deal, so extraordinary claims demand extraordinary evidence. So far I have the S2 star and synthetic data based on GR 1PN tests confirming the results. But that's not enough. How else could we test it?

I assume 1PN means the first post-Newtonian order; if so, that is not actually a bad route. It is commonly used in cosmographic equations for model independency, particularly when you apply higher-order coefficients. I would recommend that in the testing stages you restrict your dataset comparisons to below z = 1.4, to avoid the corrections needed beyond that (previously mentioned in this thread).

Have you considered applying your methodology to a system where the barycenter is not central to the star? For example, the barycenter between Jupiter and the Sun. That is a dynamic that is safely ignored in your Mercury-Sun system.

Another potential stage being metallicity distributions of light to heavier elements around an orbiting body such as a galaxy, or plasma distributions remaining after Poynting-vector removal via solar winds. The latter relates to what types of planets form and in what orbital range they are more likely (specifically why metal-heavy planets form near a star while gas giants form further away, prior to inward migration trends).

Another set of tests could be a system where frame dragging becomes an issue; for example, what would occur to Mercury's orbit if the Sun were to rotate at 0.5c? (There are datasets of systems with this dynamic.)

GPS data could also prove useful to you, as a test of why each satellite requires its own calibration setup for its particular orbit, etc.
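As a reference point for that last suggestion, the per-orbit clock drift that GPS calibration must absorb can be estimated from the two competing relativistic effects. This is a back-of-envelope sketch using textbook constants, ignoring orbital eccentricity and ground-station rotation:

```python
import math

GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
C = 299792458.0       # speed of light, m/s
R_EARTH = 6.371e6     # mean Earth radius, m
R_GPS = 2.6561e7      # GPS orbital radius (~26,561 km), m

# Gravitational blueshift: the satellite clock runs fast relative to ground
grav = GM / C**2 * (1 / R_EARTH - 1 / R_GPS)

# Kinematic time dilation: orbital motion makes the clock run slow
v = math.sqrt(GM / R_GPS)        # circular orbital speed, ~3.87 km/s
kinematic = v**2 / (2 * C**2)

net = grav - kinematic           # net fractional rate offset
drift_per_day = net * 86400      # seconds gained per day

print(f"net fractional offset: {net:.3e}")
print(f"clock drift: {drift_per_day * 1e6:.1f} microseconds/day")  # ~38
```

A different orbital radius changes both terms, which is one reason each satellite's clock needs its own correction.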

Edited by Mordred

I think what may help you is to provide a more detailed understanding of the field of spectrography used in cosmology applications. Many of the formulas used are not quite the same as the FLRW-metric standard forms, for example.

In particular, the gravitational-lensing formulas can be rather daunting when applied to spectrography without a fundamental understanding of the particular forms of standard equations used in spectrography.

When I get a chance I will post some of the more commonly applied equations and the factors they deal with.

  • Author
1 hour ago, Mordred said:

I assume 1PN means the first post-Newtonian order; if so, that is not actually a bad route. It is commonly used in cosmographic equations for model independency, particularly when you apply higher-order coefficients. I would recommend that in the testing stages you restrict your dataset comparisons to below z = 1.4, to avoid the corrections needed beyond that (previously mentioned in this thread).

Have you considered applying your methodology to a system where the barycenter is not central to the star? For example, the barycenter between Jupiter and the Sun. That is a dynamic that is safely ignored in your Mercury-Sun system.

Another potential stage being metallicity distributions of light to heavier elements around an orbiting body such as a galaxy, or plasma distributions remaining after Poynting-vector removal via solar winds. The latter relates to what types of planets form and in what orbital range they are more likely (specifically why metal-heavy planets form near a star while gas giants form further away, prior to inward migration trends).

Another set of tests could be a system where frame dragging becomes an issue; for example, what would occur to Mercury's orbit if the Sun were to rotate at 0.5c? (There are datasets of systems with this dynamic.)

GPS data could also prove useful to you, as a test of why each satellite requires its own calibration setup for its particular orbit, etc.

You missed the critical condition of the blind test I just posted.

Let's focus on one specific problem: Inclination Degeneracy.

In standard mechanics, a 1D radial velocity curve is degenerate: [math]K \propto \beta \sin(i)[/math]. You cannot separate true velocity from inclination without 2D astrometry.

Here is the exact condition of the R.O.M. blind test:

1. ZERO astrometry.

2. ZERO mass parameters.

3. ZERO distance data.

The script received ONLY a raw 1D array of [Time, Velocity]. From that 1D data alone, it accurately extracted the true 3D inclinations ([math]i = 10.95^\circ[/math] and [math]i = 165.61^\circ[/math]).

In the standard paradigm, extracting 3D inclination from purely 1D spectroscopic data is mathematically impossible.

So how did the script do it?

It works because R.O.M. geometrically isolates a systemic invariant
[math] Z_{sys}\left(o\right)=\frac{1}{\sqrt{1-\frac{R_{s}}{r\left(o\right)}}}\ \frac{1}{\sqrt{1-\beta^{2}\left(o\right)}}=\left(1+z_{b}\left(o\right)\right)\left(1+z_{k}\left(o\right)\right) [/math]
[math] o= [/math] orbital phase in radians
that completely bypasses the [math]\sin(i)[/math] degeneracy.
This isn't about metallicity or N-body dynamics; this is a direct algebraic solution to the spectroscopic degeneracy problem.
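To make the claimed bypass concrete: if the invariant above and the radial coordinate r(o) are both recoverable at each phase (which is what the R.O.M. procedure asserts), the true speed β follows algebraically, with no sin(i) anywhere. Here is a minimal round-trip sketch of that inversion; the values of R_s, r, and β are made up for illustration and are not taken from the blind test:

```python
import math

def beta_from_invariant(z_sys, r, r_s):
    """Invert Z_sys = (1 - R_s/r)^(-1/2) * (1 - beta^2)^(-1/2) for the true speed beta.

    Illustration only: assumes Z_sys(o) and r(o) are already known at each phase,
    which is the step the R.O.M. procedure claims to supply from 1-D data.
    """
    grav = 1.0 - r_s / r             # gravitational factor (1 - R_s/r)
    inv = 1.0 / (z_sys**2 * grav)    # equals 1 - beta^2
    return math.sqrt(max(0.0, 1.0 - inv))

# Round-trip check with made-up numbers (not fitted to any dataset)
r_s, r, beta = 3.0e3, 1.0e8, 0.1     # metres, metres, v/c
z_sys = 1 / math.sqrt(1 - r_s / r) / math.sqrt(1 - beta**2)
print(beta_from_invariant(z_sys, r, r_s))  # recovers 0.1
```

The point of the sketch is only that the invariant, if measurable, yields β directly rather than β sin(i); whether it is actually measurable from 1-D data is the claim under debate.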

Edited by Anton Rize

Ah, that particular degeneracy isn't something I've made much of a study of. Post-Newtonian equations, however, are common to spectroscopy applications as well as GW waves, which was why I asked for clarification.

1PN being specifically the velocity element.

It's more a topic of astrophysics than of cosmology, though the two branches are closely related.

16 hours ago, Anton Rize said:

Also, casually mentioning that your dissertation on quintessence was invalidated by WMAP data, and accepting it simply as "part of science", shows a level of scientific integrity that I deeply admire.

I didn't comment on this earlier. One of the most valuable lessons I ever received in my formal training was that a good theorist will always try to prove their own theories wrong. The only path to developing a robust theory is continuous examination: finding flaws, making the needed improvements to correct them, and looking for further ways to improve the theory.

In the quintessence case, the WMAP evidence as well as other datasets were showing a non-varying cosmological constant as well as incorrect e-folds.

For that particular paper it was better to simply start over. Lol, let's just say I've gotten extremely good at proving any theory I develop wrong. I spend a far greater portion of time doing so than on development.

  • Author
1 hour ago, Mordred said:

good theorist will always try to prove their own theories wrong.

Absolutely! I've been using AI, especially Gemini, to prove me wrong. And oh boy, it did. For the first year it was like being in the ring against Mike Tyson; it was shredding me to pieces. Persistence, the scientific method, and intellectual honesty are the "water that sharpens the stone". Right now there's no AI that can seriously challenge the model anymore. So that's the main reason I'm here talking with you: I'm actively looking for ways to challenge my results. And speaking of results, you said that you've been reading WILL_RG_I. Due to its foundational role, it contains a lot of philosophy to build the ontological ground for the parts to come. Have you had a chance to open WILL_RG_II? That part I'm sure you will find interesting. It's all cosmology and almost no philosophy: basically an unbroken chain of 10+ derivations from first principles with zero fitting parameters.
I would be honored if someone as knowledgeable as you could challenge these results. They have to be challenged. They are preposterously epic to the point of absurdity. Here are the main ones:

| Parameter or Observable | Derived Theoretical Value | Empirical Comparison Value | System or Dataset | Deviation or Accuracy | Physical Formulation | Source |
|---|---|---|---|---|---|---|
| Hubble Constant (H_0) | 68.15 km/s/Mpc | 67.4 ± 0.5 km/s/Mpc | Planck 2018 | +1.0% | Geometric saturation density derived from CMB temperature and α | [1] |
| CMB First Acoustic Peak (ℓ_1) | 220.59 | 220.60 | Planck 2018 | ≈ 0.01% | Resonant harmonics of an S^2 topology loaded by 4.2% baryonic mass | [1] |
| CMB Quadrupole Power (D_(ℓ=2)) | 0.199 × 1.285 (boosted) | ≈ 0.20 | Planck 2018 | Within predicted corridor | Vacuum tension acting as a high-pass filter on a tensionless S^2 membrane | [1] |
| Galactic Rotation Curve Bias | 0.70 × 10^(−28) m/s^2 (a_k) | −2.26 km/s (bias) | SPARC (175 galaxies) | RMSE = 0.066 dex | Boundary Resonant Interference with Universal Fundamental Tone | [1] |
| Solar Orbital Velocity | 226.4 km/s | 229 ± 6 km/s | Gaia DR3 / Milky Way | Excellent agreement | Geometric mean interference between local potential and global horizon | [1] |
| Wide Binary Gravity Boost (γ) | ≈ 1.47 | ≈ 1.45 − 1.55 | Gaia DR3 / Chae 2023 | Within predicted corridor | Kinetic Resonance Scale (S^1 carrier coupling weight 1/3) | [1] |
| Type Ia Supernovae Distance Modulus Offset | ≈ 0.180 mag expected (low redshift) | ≈ −0.151 mag | Pantheon+ | Shape deviation ≤ 0.02 mag | Geometric Energy Budget Partitioning (2:1 ratio of S^2 tension to kinetic mass) | [1] |
| Strong Lensing Einstein Radius | 1.46″ | 1.49 ± 0.02″ | MUSE/JWST (SLACS) | ≈ 2% | Phantom Inertia (Q^2) acting as universal refractive medium | [1] |
| Recombination Epoch | ≈ 364,860 years | ≈ 378,000 years | Standard Cosmological Dating | ≈ 3.5% | Unit Phase Condition (Θ_max = 1 radian) where arc length equals radius of curvature | [1] |
| Electron Mass (m_e) | 9.064 × 10^(−31) kg | 9.109 × 10^(−31) kg | CODATA | ≈ 0.49% | Holographic Projection Principle / Geometric Capacity Resonance | [1] |


If you find any of these results interesting, you can see all the details here: https://willrg.com/documents/WILL_RG_II.pdf

Edited by Anton Rize

With regards to the Hubble constant and the methodology of using the CMB blackbody temperature to determine it:

This isn't a new idea. There are numerous peer-reviewed papers suggesting this possibility, though in some cases the peer-reviewed methodologies use a less involved first-order approximation.

I would recommend you straighten out your neutrino statement prior to your formulas involving the Hubble constant calculations. The cosmic neutrino background has a different blackbody temperature (less than 2.73 Kelvin) but does not contribute to the photon blackbody temperature.

If you think about the properties of neutrinos and the definition of Blackbody temperature this should be obvious as to the reason why this is the case.

Do not confuse blackbody temperature with thermal temperature, which is the average kinetic energy.

Blackbody temperature is not thermal temperature. (I mention this with regard to the distinction between the CMB and the CNB.)

Mathematically, I don't see why your CMB temperature calculation regarding photon density would not work, but you could have simplified the calculation by using the baryon-to-photon density ratio.

It avoids those clunky numbers such as the gravitational constant and Stefan-Boltzmann.
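The baryon-to-photon shortcut works because the photon number density follows directly from the CMB temperature, and the baryon density is then one multiplication away. A standard-physics sketch (η ≈ 6.1 × 10^(−10) is the approximate Planck-era baryon-to-photon ratio; nothing here is specific to WILL RG):

```python
import math

# Physical constants (SI, CODATA values)
K_B = 1.380649e-23      # J/K, Boltzmann constant
HBAR = 1.054571817e-34  # J*s, reduced Planck constant
C = 2.99792458e8        # m/s, speed of light
ZETA3 = 1.2020569       # Riemann zeta(3)

def photon_number_density(temp_k):
    """Blackbody photon number density: n_gamma = (2*zeta(3)/pi^2) * (k_B*T / (hbar*c))^3."""
    return (2 * ZETA3 / math.pi**2) * (K_B * temp_k / (HBAR * C))**3

T_CMB = 2.7255      # K, CMB monopole temperature
ETA_B = 6.1e-10     # baryon-to-photon ratio (approximate)

n_gamma = photon_number_density(T_CMB)   # ~4.1e8 photons per m^3
n_baryon = ETA_B * n_gamma               # ~0.25 baryons per m^3
print(f"n_gamma  ≈ {n_gamma:.3e} m^-3")
print(f"n_baryon ≈ {n_baryon:.3e} m^-3")
```

This reproduces the familiar ~411 photons per cm^3 and roughly one baryon per four cubic metres, with no G or Stefan-Boltzmann constant involved.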

I should add that those other papers carry substantially greater weight of credibility, as they also examine factors such as potential error margins in the measurements and how to potentially correct for them.

Those equations above should also vary accordingly as the Hubble constant varies, from what I can determine.

Edited by Mordred

  • Author

@Mordred , first of all, thank you sincerely for taking the time to look through the calculation. I truly appreciate it. I am especially glad you noted that mathematically, the CMB temperature calculation regarding photon density works.

Allow me to briefly address your specific points, as they highlight the exact methodological boundaries of WILL RG:

2 hours ago, Mordred said:

This isn't a new idea. There are numerous peer reviewed papers suggesting this possibility. Though in some cases the peer reviewed methodologies have a less involved first order approximation.

I have searched for such papers, but mostly found phenomenological heuristics (like variations of Dirac's large numbers hypothesis) that rely on tuning parameters or mathematical coincidences rather than a generative geometric ontology. RG is not a "first order approximation" - it is a strict algebraic consequence of carrier closure. If you have specific papers in mind that derive [math]H_0[/math] from [math]T_{CMB}[/math] and [math]\alpha[/math] without free parameters, I would genuinely love to read them.

2 hours ago, Mordred said:

I would recommend you straighten out your neutrino statement prior to your formulas involving the Hubble constant calculations.

I completely agree with your thermodynamic distinction between the CMB and the CNB. However, my exclusion of neutrinos is not about confusing their thermal histories; it is a strict topological category error within my framework. In RG, the observable horizon [math]H_0[/math] constitutes the limit of electromagnetic causality, which is governed by the electromagnetic coupling [math]\alpha[/math]. Because neutrinos do not couple to the [math]S^1(\alpha)[/math] carrier, including their density in the geometric saturation of that specific carrier would violate the theory's ontology.

2 hours ago, Mordred said:

Avoids those clunky numbers such as the gravitational constant and Stefan-Boltzmann.

Using the baryon-to-photon ratio would certainly simplify the math, but it would violate my core principle of Epistemic Hygiene. My methodology strictly forbids importing phenomenological ratios when fundamental constants are available. The entire purpose of the derivation is to prove that the macroscopic horizon is directly, algebraically locked to the fundamental constants ([math]G, \sigma_{SB}, \alpha[/math]), not to an empirical gas mixture ratio.

2 hours ago, Mordred said:

I should add those other papers have substantially greater weight of credibility as they also examine factors such as potential error margins when measurements are concerned and how to potentially correct for them.

Here is the crucial difference: in this framework, the Hubble constant does not vary. It is not a free parameter. It is rigidly fixed by the geometry. As for error margins, they are trivial in this specific calculation - they simply propagate the incredibly tight CODATA and Planck measurement uncertainties of [math]\alpha[/math] and [math]T_0[/math]. Including standard error propagation would bloat the document without adding any ontological value.

On a personal note, I want to be completely honest with you. I am an autodidact, and I am currently experiencing something highly surreal.

I am acutely aware of the statistical impossibility of what is happening. To have a single, rigid geometric framework with zero free parameters that produces an unbroken chain of accurate predictions - from [math]H_0[/math], to the CMB acoustic peaks, to galactic rotation curves, to the wide binary anomaly, all the way down to the mass of the electron - defies standard probability.

I constantly ask myself: what is less probable? That an amateur somehow stumbled onto the actual generative geometry of the Universe... or that such a massive, interconnected chain of zero-parameter derivations is just a random sequence of mathematical coincidences?

When reading a lot of papers one often misses critical details. A very common convention to simplify the mathematics is the following (natural units, with ℏ = c = k_B = 1):

\[ [\text{distance}] = [\text{time}] = [\text{energy}]^{-1} = [\text{mass}]^{-1} = [\text{temperature}]^{-1}\]

Both the Bose-Einstein and Fermi-Dirac statistics directly apply the Boltzmann constant. The above relations still apply; they are simply normalized. From those two relations, as well as Maxwell-Boltzmann for thermodynamic effects, one can develop a full equation that correlates the contributions of any number of particle species, including those that do not involve fine-structure-constant interactions, via each particle's effective degrees of freedom. The Big Bang nucleosynthesis mathematics employs the above relations to encompass the entirety of the Standard Model. Using the first two statistics one can further calculate the number density of any particle species and apply those relations to QM as well as QFT. In point of detail, QFT has an equivalent form of all of the above. One can literally use the above to throw any particular particle, or even atoms, at a region and make reasonable predictions of what would occur.
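The dimensional identifications above are just repeated applications of ℏ, c, and k_B. As a quick standard-physics illustration (not tied to anything model-specific in this thread), here is what one unit of energy, 1 eV, corresponds to in each of those dimensions:

```python
# Natural-unit conversions: what 1 eV of energy equals in length, time,
# mass, and temperature once hbar = c = k_B = 1 is imposed.
E = 1.602176634e-19     # J, 1 eV
HBAR = 1.054571817e-34  # J*s
C = 2.99792458e8        # m/s
K_B = 1.380649e-23      # J/K

length = HBAR * C / E   # [energy]^-1 -> length:      ~1.97e-7 m
time = HBAR / E         # [energy]^-1 -> time:        ~6.58e-16 s
mass = E / C**2         # [energy]    -> mass:        ~1.78e-36 kg
temperature = E / K_B   # [energy]    -> temperature: ~1.16e4 K

print(f"1 eV^-1 = {length:.3e} m = {time:.3e} s")
print(f"1 eV    = {mass:.3e} kg = {temperature:.3e} K")
```

Restoring the SI factors at the end of a calculation is then purely mechanical, which is why the normalized relations are so common in the BBN and thermodynamics literature.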

The above highlights the flexibility, and also the reason why the complexities develop: it is when one must get into greater detail to extract different distributions and dynamics. LCDM, for example, is a rather HUGE set of theories and methodologies that it has incorporated into its model. There is always a good reason for doing so; when you get into the nitty-gritty, you understand what those reasons are and how they develop via mathematical proofs.

That is when one truly learns how interconnected different theories truly are.

I'm sure you've seen the equations I'm referring to, but just in case:

Bose Einstein Statistics

\[n_i = \frac{g_i}{e^{(\epsilon_i-\mu)/kT} - 1}\]

Fermi-Dirac statistics

\[n_i = \frac{g_i}{e^{(\epsilon_i-\mu) / k T} + 1}\]

Maxwell Boltzmann

\[\frac{N_i}{N} = \frac {g_i} {e^{(\epsilon_i-\mu)/kT}} = \frac{g_i e^{-\epsilon_i/kT}}{Z}\]
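A compact way to see the three statistics side by side is that they differ only in one term of the denominator (−1 for Bose-Einstein, +1 for Fermi-Dirac, absent for Maxwell-Boltzmann). A small standard-physics sketch, with illustrative energies in eV that are not tied to any dataset in this thread:

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def occupancy(epsilon_ev, mu_ev, temp_k, statistics, g=1.0):
    """Mean occupation n_i for a state of energy epsilon_i, matching the three
    formulas above: 'BE' -> denominator -1, 'FD' -> +1, 'MB' -> no extra term."""
    x = (epsilon_ev - mu_ev) / (K_B * temp_k)
    if statistics == 'BE':
        return g / (math.exp(x) - 1.0)
    if statistics == 'FD':
        return g / (math.exp(x) + 1.0)
    if statistics == 'MB':
        return g * math.exp(-x)
    raise ValueError(f"unknown statistics: {statistics}")

# In the dilute limit (epsilon - mu >> kT) all three converge on the classical
# Maxwell-Boltzmann value, which is why MB is a safe shortcut in that regime.
for s in ('BE', 'FD', 'MB'):
    print(s, occupancy(2.0, 0.0, 3000.0, s))
```

Degeneracy factors g_i and chemical potentials μ per species are exactly where the effective-degrees-of-freedom bookkeeping Mordred describes enters.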

The nice thing about these relations is that they also readily work with the Saha equations. Using the above, one can determine an equation for when all particles are in thermal equilibrium and, through derivatives from the above, derive when each species drops out of thermal equilibrium once you also include expansion rates (the Saha equations being applicable to atoms as opposed to particles).

As mentioned previously, I will often throw in counterarguments: range of applicability, in this case.

1 hour ago, Anton Rize said:

I have searched for such papers, but mostly found phenomenological heuristics (like variations of Dirac's large numbers hypothesis) that rely on tuning parameters or mathematical coincidences rather than a generative geometric ontology. RG is not a "first order approximation" - it is a strict algebraic consequence of carrier closure. If you have specific papers in mind that derive H0 from TCMB and α without free parameters, I would genuinely love to read them.

When I get a chance I will dig a decent one up; if I recall, Zibens had a treatment sometime in the late 2000s.

Edited by Mordred
