Questions on Redshift, Distance and Space Expansion



Apparently a Type 1a supernova (an exploding white dwarf), the standard candle, can tell us how far away it is by measuring its luminosity.

 

Is this independent of the size of the star?

 

Is space expansion uniform across the entire universe, or are some volumes expanding faster than others?

 

If we measure the red-shifted wavelength of a photon, and we know the expansion history of the space through which it has travelled, how do we calculate the distance of the source if we don't know the original wavelength of the photon? Or is there a shift in the whole spectrum of radiation that we can use to infer distance, and if so, how do we know what the original spectrum looked like at the time the radiation was emitted from the source?

 

If space expansion causes a lengthening of a photon's wavelength, does that mean space expansion operates in a volume with at least one axis that is shorter than the wavelength of the photon?

 

What is the smallest volume of space that can be subject to expansion?

 

Apparently the density of dark energy needs to remain constant. Which of the following statements is true:

 

As space expands, dark energy must be created to maintain density.

As dark energy is created, space must expand to maintain density.

 

If space and time are related, and dark energy and space are related, how are dark energy and time related?

 

If space is expanding and the rate is variable, could time be similarly contracting or expanding at different rates?

Edited by AbstractDreamer

Apparently a Type 1a supernova (an exploding white dwarf), the standard candle, can tell us how far away it is by measuring its luminosity.

 

Is this independent of the size of the star?

 

Type 1a supernovae are the product of a binary system in which a white dwarf is slowly fed extra mass - it goes supernova at a certain mass (the Chandrasekhar limit, about 1.44 solar masses); it is this fact that the mass is steady across many examples that allows its use as a standard candle. The peak brightness is very similar across many examples - absolute magnitude about -19.3. Missing elemental lines and strong elemental lines in the spectra allow us to confirm that a particular signal is a Type 1a (there are two routes to a star system going Type 1a, but the light emitted is the same) and not a different type that would have a differing brightness.
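Because the peak absolute magnitude is roughly the same for every Type 1a, comparing it with the observed apparent magnitude gives the distance directly. A minimal sketch of the distance-modulus arithmetic, assuming M = -19.3 (the function name is just for illustration):

```python
def luminosity_distance_pc(apparent_mag, absolute_mag=-19.3):
    """Invert the distance modulus m - M = 5*log10(d / 10 pc) for d, in parsecs."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# A Type 1a observed at apparent magnitude 19.3:
d = luminosity_distance_pc(19.3)
print(f"{d:.2e} pc")  # 5.25e+08 pc, i.e. roughly 525 Mpc
```

The point of the standard candle is that `absolute_mag` is (nearly) the same for every event, so the apparent magnitude alone fixes the distance.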

 

Is space expansion uniform across the entire universe, or are some volumes expanding faster than others?

I believe at these scales we assume and believe that the cosmological principle applies - i.e. that the cosmos is isotropic and homogeneous. Bear in mind this is the scale at which the smallest entities you are bothering with are galaxy clusters. If the cosmos were found not to be isotropic and homogeneous at such a scale then serious new thinking would be required. Every so often a team of observers will come up with a variation in some supposed constant depending on which part of the sky they look at - but I do not believe any have been borne out in further investigation. There is "dark flow" - but that is still at the confirmatory stage and does not have a good explanation at present, if indeed it exists.

If we measure the red-shifted wavelength of a photon, and we know the expansion history of the space through which it has travelled, how do we calculate the distance of the source if we don't know the original wavelength of the photon? Or is there a shift in the whole spectrum of radiation, that we can use to infer distance, and if so how do we know what the original spectrum look liked at the time when the radiation was emitted from the source?

 

Everything is shifted - the entire emission spectrum; we know (e.g. from Type 1a supernovae) what lines are missing and what lines are stronger, and this allows us to know both the current wavelength and the original wavelength. For instance, what is now the cold microwave background was originally very hot plasma, which cooled past the temperature at which photons were continually scattered - we know that temperature and the spectrum that would be emitted, and if we stretch it out we can see it "preserved" in the spectrum of the CMB; just as it was predicted many years ago.


If we measure the red-shifted wavelength of a photon, and we know the expansion history of the space through which it has travelled, how do we calculate the distance of the source if we don't know the original wavelength of the photon? Or is there a shift in the whole spectrum of radiation that we can use to infer distance, and if so, how do we know what the original spectrum looked like at the time the radiation was emitted from the source?

 

 

The only thing that matters is the scale factor when it is emitted and the scale factor when it is received. Its history is not relevant.

 

So if you know the received wavelength and the distance to the source (i.e. the amount of redshift), you can work out the original wavelength.

 

On the other hand, if you know the original wavelength (because it is a line in the emission spectrum of hydrogen, for example) then you can work out the redshift and hence distance.
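That second route can be sketched numerically. All the specific values here are assumptions for illustration: the H-alpha rest wavelength (656.28 nm), H0 = 70 km/s/Mpc, and the low-z approximation v ≈ cz (which, as discussed later in the thread, breaks down at high redshift):

```python
C_KM_S = 299792.458   # speed of light, km/s
H0 = 70.0             # Hubble constant, km/s per Mpc (assumed value)

def redshift(lambda_observed_nm, lambda_rest_nm):
    """z from the shift of a known emission line: 1 + z = lambda_obs / lambda_rest."""
    return lambda_observed_nm / lambda_rest_nm - 1.0

# H-alpha (rest wavelength 656.28 nm) observed at 721.91 nm:
z = redshift(721.91, 656.28)        # ~0.1
distance_mpc = C_KM_S * z / H0      # low-z only: d ~ cz / H0
print(round(z, 3), round(distance_mpc))  # 0.1 428
```

The identification of the line is what supplies the "original wavelength"; without recognising a known spectral feature there is nothing to compare against.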

 

 

Apparently the density of dark energy needs to remain constant. Which of the following statements is true:

As space expands, dark energy must be created to maintain density.

As dark energy is created, space must expand to maintain density.

 

As we don't know what dark energy is, I don't think that can be answered. It might be that the two statements are equivalent.


it is this fact that the mass is steady across many examples that allows its use as a standard candle. The peak brightness is very similar across many examples - absolute magnitude about -19.3. Missing elemental lines and strong elemental lines in the spectra allow us to confirm that a particular signal is a Type 1a

Is mass directly proportional to luminosity? What if energy can be dissipated via mechanisms other than intensity of EM radiation, such as charge or angular momentum or other weird stuff? Could this result in a lower luminosity, and subsequently a false distance calculation?

 

...We assume... ...that the cosmos is isotropic and homogeneous. Bear in mind this is the scale at which the smallest entities you are bothering with is galactic clusters. If the cosmos were found not to be isotropic and homogeneous at such a scale then serious new thinking would be required.

 

Is isotropy not a superset of homogeneity? Can anyone give me an example of something that is isotropic but not homogeneous?
If the cosmos is described as isotropic, does that mean the properties and direction of the "4D spacetime" are uniform everywhere in the universe? I think I mean, the spatial axes and time are always in the same "direction" (though time arguably has no direction)? How is this possible within black holes? Or do we just describe black holes as not within our universe? Doesn't that contradiction invalidate the description?
Consider the following:
Imagine a block of glass that contains imperfections due to contamination. Glass is isotropic. The entire block (of glass and contaminants) is not isotropic. The glass within the block (but not including the contaminants) is isotropic.
Imagine a universe of spacetime that contains imperfections due to black holes. Spacetime is isotropic. The entire universe (of spacetime and black holes) is not isotropic. The spacetime within the universe (but not including black holes) is isotropic.
Is it fair to say that spacetime is isotropic, and the universe is NOT isotropic?
Following that potentially false premise, does expansion operate uniformly only within spacetime, and chaotically or not at all within black holes?
Imagine a perfectly spherical volume of space containing multiple black holes of significant volume. Would the black holes not affect expansion such that, over time, the volume is no longer perfectly spherical? Or is there an approximation to isotropy within limits, such as the tiny deviations found in the CMB? Could this variance be caused by black holes?

So if you know the received wavelength and the distance to the source (i.e. the amount of redshift), you can work out the original wavelength.

 

On the other hand, if you know the original wavelength (because it is a line in the emission spectrum of hydrogen, for example) then you can work out the redshift and hence distance.

 

But upon receiving a photon of red-shifted wavelength, there's no way of knowing if it initially had a long wavelength that has since red-shifted a lot because it's far away, or a short wavelength that red-shifted a little because it's near. Unless the original full spectrum is known, and compared. So how do we know the original full spectrum? Do all supernovae have the same spectrum? Or are the spectra the same within the same type?

 

As we don't know what dark energy is, I don't think that can be answered. It might be that the two statements are equivalent.

 

Does this not break causality?

 

Sorry, sneaky baseless theory coming up!:
Could the outside of the universe be blackholeness of infinite density, "sucking" the universe outwards (via gravitation), affecting the universe to expand into its infinite density, at the same time receding due to the constancy of dark-energy density within the universe, pushing the blackholeness backwards, providing the cause to the effect of volume expansion?
Can someone shatter this illusion for me?
Edited by AbstractDreamer

Could the outside of the universe be blackholeness of infinite density, "sucking" the universe outwards (via gravitation), affecting the universe to expand into its infinite density, at the same time receding due to the constancy of dark-energy density within the universe, pushing the blackholeness backwards, providing the cause to the effect of volume expansion?

Can someone shatter this illusion for me?

 

 

Yes, Isaac Newton can: look up the shell theorem.


The shell theorem seems to require perfect spherical symmetry. The existence of more than one black hole could imply that the universe is not perfectly spherical in volume, especially assuming space expansion does not operate within the volume of a black hole. This difference would most likely increase over time, resulting in a greater deviation from spherical perfection and consequently a higher net gravitational force (from the greater infinite density) over time, as well as a shift in the gravitational "centre" of the sphere.


Is mass directly proportional to luminosity? What if energy can be dissipated via mechanisms other than intensity of EM radiation, such as charge or angular momentum or other weird stuff? Could this result in a lower luminosity, and subsequently a false distance calculation?

 

 

Yes, there is a mass-luminosity relation:

https://en.m.wikipedia.org/wiki/Mass%E2%80%93luminosity_relation

 

Learning these will also help:

 

Snell's law, Wien's displacement law, and blackbody temperature.

 

https://en.m.wikipedia.org/wiki/Wien's_displacement_law

 

https://en.m.wikipedia.org/wiki/Black_body#/search

 

http://cosmology101.wikidot.com/redshift-and-expansion

http://cosmology101.wikidot.com/universe-geometry

See the bottom corner of the last link for a second page, which describes the ds^2 worldlines in the FLRW metric.
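The Wien's-law piece of that list links a blackbody's temperature to the peak of its emitted spectrum, which is how the CMB temperature maps onto its microwave peak. A minimal sketch (the constant is the CODATA value; the temperatures are illustrative):

```python
WIEN_B = 2.897771955e-3  # Wien displacement constant, m*K

def peak_wavelength_m(temperature_k):
    """Blackbody peak wavelength via Wien's displacement law: lambda_max = b / T."""
    return WIEN_B / temperature_k

print(peak_wavelength_m(5778) * 1e9)   # Sun's photosphere (~5778 K): ~501 nm
print(peak_wavelength_m(2.725) * 1e3)  # CMB today (2.725 K): ~1.06 mm
```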

 

Yes, temperature-influenced optical effects play a part. We refine data by using spectrography to look for known properties of elements, such as hydrogen etc.

 

Now, homogeneous means no preferred location - for example, a uniform mass distribution.

Isotropic means no preferred direction.

 

Combined, this is a uniform distribution,

such as the average density of our universe. The opposites are inhomogeneous and anisotropic; examples are explosions, stellar bodies, and rotating systems.

Edited by Mordred

 

Is mass directly proportional to luminosity? What if energy can be dissipated via mechanisms other than intensity of EM radiation, such as charge or angular momentum or other weird stuff? Could this result in a lower luminosity, and subsequently a false distance calculation?

 

 

Type 1a supernovae all "go boom" at pretty much the same mass - they reach the Chandrasekhar limit and it is taken from there. The mass-luminosity relationships given by Mordred apply to stars, not novae. With regard to your other questions - angular momentum and charge would be conserved, so I am not immediately sure how you could rob energy, which would also be conserved; but frankly it is something to think about. The supernova explosion is well modelled and seems to match theory very well. No standard candle is going to be perfect - but the number that we have means the chances of a mistake, miscalculation etc. are very small.

Is isotropy not a superset of homogeneity? Can anyone give me an example of something that is isotropic but not homogeneous?

Quite - but we have assumed this from our position on Earth; that leap can only be made if you were to compare multiple viewing positions. So the cosmological principle of homogeneity and isotropy says that any large enough region of space will have the same stuff in it, and whichever direction we look in will give us the same results. Consider the centre of a sphere of water - every direction you look is exactly the same to the limits of your probing (it is a large sphere) - but this is clearly a preferred position; other positions would not necessarily be so limited, and could discern an edge or at least a gradient that was not uniform. The cosmological principle boils down to: we are not in a preferred position.

 

On the contaminations - these principles work at the large scale; glass with contaminations can be isotropic, for instance coloured glass. If the bits are small enough, and any decent-sized sample will have the same number, then we are still within the cosmological principle.

Imagine a perfectly spherical volume of space containing multiple black holes of significant volume. Would the black holes not affect expansion such that, over time, the volume is no longer perfectly spherical? Or is there an approximation to isotropy within limits, such as the tiny deviations found in the CMB? Could this variance be caused by black holes?

 

If there are a significant number of black holes then that sounds as if they are gravitationally bound to each other - gravity is far stronger locally, and they will remain in orbit around each other. Expansion takes place in the huge gaps - if there is enough stuff in the volume you are considering then either you will not observe any expansion (smaller scale), or you need to look at such a large scale that the gaps are in the megaparsec range and the stuff is once again statistically homogeneous.


Given the spectral series of hydrogen is known, is there also proportionality between Type 1a supernovae and the quantity and rate of hydrogen emission? Or does this only apply to stars just normally burning their fuel, and not to a supernova?


Fermat's principle (which Snell's law is a derivation of) seems to require that a photon knows its final destination, in order to take the path of shortest time. Which is rather odd. Surely a photon travels in a straight line (along spacetime curvature) to where ever it might go, thus taking the shortest time to get there?


If the entire mass of the type 1a supernova is converted into EM radiation, and all type 1a supernovae exhibit the same EM radiation spectrum of wavelengths and frequencies, then luminosity or intensity would be consistent, assuming conservation of energy.


I've seen some references to supernova remnants, such as the Crab Nebula, or strange cores, which leads me to think that the entirety of the mass is NOT converted to EM radiation. So if a Type 1a supernova leaves remnants such as gases or strange cores, then this matter could contain energy in the form of charge, kinetic, thermal, or chemical energy?


Would this account for the inaccuracy as described in wiki? https://en.wikipedia.org/wiki/Cosmic_distance_ladder

For Type 1a supernova light curves (apparently rather accurate for extragalactic distance calculations): "The current uncertainty approaches a mere 5%, corresponding to an uncertainty of just 0.1 magnitudes."


On homogeneity:

So the observable universe is a sphere around us of 45 billion LY radius. If the universe is isotropic and homogeneous, an observer at the edge of that radius would also see a sphere of radius 45 billion LY. Then the things directly between us would be mutually observable, the things "behind" each of us would be mutually exclusively observable from each other, and some other volumes in between might be observable by both due to curvature of the spacetime manifold?


However, if the universe has an age and a beginning, and spacetime started at the beginning of the universe, does that not contradict the homogeneity of time, if not the isotropy of time as well?


I'm starting to get an idea that expansion doesn't increase the volume of space; it only stretches it and wraps it around. A bit like a fractal set on the surface of a torus. Or like zooming in on a microscope. Observable boundaries are relative only; there are no edges, though singularities could be points of intersection (which don't exist on a simple torus). Can someone provide me with some examples of 3D shapes with finite surface area, no boundaries, and point intersections, which can be formed from a finite 2D area?

Edited by AbstractDreamer

 

Given the spectral series of hydrogen is known, is there also proportionality between type 1a supernovae and quantity and rate of hydrogen emission?

 

 

What is the "rate of hydrogen emission" ? Stars "burn" (fuse) hydrogen to helium.

 

 

Fermat's principle (which Snell's law is a derivation of) seems to require that a photon knows its final destination, in order to take the path of shortest time. Which is rather odd. Surely a photon travels in a straight line (along spacetime curvature) to where ever it might go, thus taking the shortest time to get there?

 

This is complicated (and really deserves a thread of its own). You need to understand the basics of QED (quantum electrodynamics). A photon does not travel in a straight line from A to B. We cannot, in fact, say anything about its trajectory. All we know is where it was emitted and where it was detected. Fermat's principle and Snell's law are classical laws and do not apply to individual photons.

 

The probability of a photon ending up in a particular place matches the classical predictions. But to calculate that probability, you have to take into account every possible path the photon could take to get from A to B (this includes flying off to the other side of the galaxy and back, and everything in between).
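That sum-over-paths idea can be illustrated with a toy calculation. Everything here is an assumption for illustration (arbitrary units, an arbitrary wavenumber, and paths restricted to a single movable midpoint): each path contributes a phasor exp(i·k·L), and paths near the straight line have nearly stationary length, so their phasors add coherently, while paths far from it cancel out.

```python
import cmath
import math

K = 50.0  # wavenumber in arbitrary units (assumed for illustration)

def path_length(y_mid):
    # Path A -> (5, y_mid) -> B, with A = (0, 0) and B = (10, 0)
    return 2.0 * math.hypot(5.0, y_mid)

def summed_amplitude(midpoints):
    # Sum a unit phasor exp(i*K*L) over every candidate path
    return sum(cmath.exp(1j * K * path_length(y)) for y in midpoints)

near = [i * 0.01 for i in range(-50, 51)]       # midpoints near the straight line
far = [3.0 + i * 0.01 for i in range(-50, 51)]  # midpoints well off to one side

# Paths near the classical (straight-line) trajectory dominate the sum:
print(abs(summed_amplitude(near)) > abs(summed_amplitude(far)))  # True
```

This is only a cartoon of the stationary-phase argument, not real QED, but it shows why the classical path emerges from summing over all paths.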

 

 

 

If the entire mass of the type 1a supernova is converted into EM radiation

 

It isn't. Only a tiny proportion of the mass is converted into radiation. Most is blown off into space.

 

 

 

So the observable universe is a sphere around us of 45 billion LY radius. If the universe is isotropic and homogeneous, an observer at the edge of that radius would also see a sphere of radius 45 billion LY. Then the things directly between us would be mutually observable, the things "behind" each of us would be mutually exclusively observable from each other, and some other volumes in between might be observable by both due to curvature of the spacetime manifold?

 

That sounds right, apart perhaps from the last bit "due to curvature of the spacetime manifold" (which I don't understand).

 

 

 

However, if the universe has an age and a beginning, and spacetime started at the beginning of the universe, does that not contradict the homogeneity of time, if not the isotropy of time as well?

 

It is not homogeneous in time (that would be a steady-state model); it is homogeneous (and isotropic) spatially.

 

 

 

I'm starting to get an idea that expansion doesn't increase the volume of space, only it stretches it and wraps it around.

 

If the distance between points is stretched, then the volume they are in must also increase, surely?


 

It is not homogeneous in time (that would be a steady-state model); it is homogeneous (and isotropic) spatially.

 

 

If the distance between points is stretched, then the volume they are in must also increase, surely?

 

Apologies for continuing my baseless thoughts:

 

If expansion is defined as (change in volume)/(time), the only way for expansion to be non-zero and for the change in volume to be negligible is through some function of time.

 

With that in mind, is it possible that rather than space expanding, time is contracting?

 

So rather than observing that super-distant objects are moving away from us faster than closer objects, and deducing that the intervening volume is expanding (which is the obvious answer), could you not interpret it as: "The further the distance from an observer, the slower time ticks at that very distant location, simultaneously. And to preserve isotropy of volume, it must be that, relative to that distant point, our time is also ticking slower, simultaneously."

 

This would appear to satisfy both the inhomogeneity of time and the isotropy of volume.

 

So consequently the speed of light is what it is locally for any observer in the universe. But very distantly relative to each and every location, it could be much slower due to time contraction, giving the illusion of volume expansion.

 

What I think I mean is: the speed of light is 300,000 km/s locally relative to a location, at each and every location in the universe. But at the edge of the observable universe relative to each and every location in the universe, the speed of light could approach zero relative to that location.

 

Sorry, I know this is speculation territory here. I will attempt to stay on track, once I have processed more information.

Edited by AbstractDreamer

No, time dilation doesn't apply in the FLRW metric.

 

[latex]ds^2=-c^2dt^2+a(t)^2[dr^2+S_k(r)^2d\Omega^2][/latex]

 

[latex]S_k(r)= \begin{cases} R\sin(r/R) &(k=+1)\\ r &(k=0)\\ R\sinh(r/R) &(k=-1) \end{cases}[/latex]

 

In this line element you have the Newtonian approximation given by the Minkowski/Lorentz equations. However, this is in polar coordinate form.

 

The scale factor a adds a new dynamic to the spatial volume terms.

 

Time dilation requires an inhomogeneous distribution of matter; when you have a homogeneous and isotropic fluid with uniform distribution at a particular moment in time, such as now, time dilation does not occur at any particular time slice.

 

So even though there is a higher-density past, this is simply due to volume change over time, not time dilation.

Edited by Mordred

With that in mind, is it possible that rather than space expanding, time is contracting?

 

 

It is, I believe, possible to choose a coordinate system where space does not expand but, instead, time changes. This is not generally used because it is not a very intuitive model and causes complications such as the speed of light changing over time. It is therefore simpler to stick with the coordinate system where space expands.


 

 

It is, I believe, possible to choose a coordinate system where space does not expand but, instead, time changes. This is not generally used because it is not a very intuitive model and causes complications such as the speed of light changing over time. It is therefore simpler to stick with the coordinate system where space expands.

Those models run into problems beyond the Hubble horizon, afaik.

 

[latex]1+z=\frac{\lambda_{observed}}{\lambda_{rest}}=\frac{R_0}{R(t)}=\frac{1}{a(t)}[/latex]

 

This equation gives the cosmological redshift; however, I cannot stress enough that it is not due to the velocity of receding objects, only to the increase in the scale factor a(t) since time t. The equation above is a basic equation that details local redshift. It gets inaccurate at higher redshifts, once you get beyond the Hubble horizon, where the recessive velocity given by [latex]V_r=H_0D[/latex] becomes greater than c.
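Both points can be made numerically; a sketch assuming H0 = 70 km/s/Mpc (the exact value doesn't matter for the argument):

```python
C = 299792.458   # speed of light, km/s
H0 = 70.0        # Hubble constant, km/s per Mpc (assumed value)

# Redshift depends only on the scale factor at emission (taking a = 1 now):
a_emit = 0.5
z = 1.0 / a_emit - 1.0   # from 1 + z = 1/a(t), so z = 1 here

# Hubble's law v = H0*D gives v > c beyond the Hubble radius c/H0:
hubble_radius_mpc = C / H0   # ~4283 Mpc for these values
v = H0 * 5000.0              # recession "velocity" at 5000 Mpc, in km/s
print(z, v > C)              # 1.0 True
```

The `v > C` result is exactly why naively plugging recessive velocities into special-relativistic time-dilation formulas fails past the Hubble horizon.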

 

If you try to apply the time dilation equations to the recessive velocity values you will hit infinity at the Hubble horizon when v=c.

 

Tamara Davis gives a simplified coverage of this aspect here.

 

https://arxiv.org/pdf/astro-ph/0310808

 

The main point is that the assumption of constant expansion used in Hubble's recessive-velocity formula will give you incorrect results when applied to the time dilation formulas.

 

The correct methodology is to use the scale factor (which evolves over time) and not recessive velocity. Another good article covering this is Hogg's

"Distance measures in cosmology":

 

https://arxiv.org/pdf/astro-ph/9905116v4.pdf

 

Hope that helps.

PS: side note, there are also k-corrections for luminosity distance past z=5.0.

Edited by Mordred

Just to be clear, there's no way I'm right, I know, and I'm not trying to argue against the accuracy of FLRW parameterisations of Einstein's field equations when compared to observations.

 

I just want to probe conceptual alternatives, without making speculations, or at least discount alternatives.

 

However, whilst I am slowly improving with my pure maths, I'm struggling when applying it to physics. And whilst I have no intention of disbelieving equations that are the foundations of theoretical physics, I find it troubling to simply accept everything that is presented before me without working up to that point of conclusion myself from the basics.

 

Thanks for your replies. I need more time to process before responding, hopefully sensibly, without you having to repeat or stress something that I do not fully comprehend.


Those are the reasons I include links, so you can verify my statements and understanding without relying on my word on the subject.

 

It takes some time to get a handle on these aspects, as the textbooks themselves only cover the basics, which is why you rarely see the corrected formulas for high z. That's something that comes up in advanced studies and courses.

 

Here this is from one of my courses.

 

First we define a comoving field. This formula includes (global) curvature, though you can set it to flat spacetime. A static universe is perfectly flat.

 

[latex]ds^2=c^2dt^2-R^2(t)[\frac{dr^2}{1-kr^2}+r^2(d\theta^2+\sin^2\theta d\phi^2)][/latex]

 

we write [latex](x^0,x^1,x^2,x^3)=(ct,r,\theta,\phi)[/latex]

 

we set the above as

[latex]g_{00}=1,g_{11}=-\frac{R^2(t)}{(1-kr^2)},g_{22}=-R^2 (t)r^2, g_{33}=-R^2 (t)r^2sin^2\theta [/latex]

 

the geodesic equation of the above is

 

[latex]\frac{du^\mu}{d\lambda}+\Gamma^\mu_{\alpha\beta}u^\alpha u^\beta=0[/latex]

 

If the particle is massive, [latex]\lambda[/latex] can be taken as the proper time s. If it is a photon, [latex]\lambda[/latex] becomes an affine parameter.

 

So let's look at k=0.

 

we set [latex]d\theta=d\phi=0 [/latex]

 

this leads to

 

[latex]ds^2=c^2dt^2-R^2(t)dr^2=c^2dt^2-dl^2=dt^2(c^2-v^2)[/latex]

 

where dl is the spatial distance and v=dl/dt is the particle velocity in this comoving frame.

 

Assuming it to be a massive particle of mass m, the momentum is [latex]q=mc\frac{dl}{ds}=mv\left(1-\frac{v^2}{c^2}\right)^{-1/2}[/latex]

 

From the above, consider a photon emitted at time [latex]t_1[/latex] with frequency [latex]\nu_1[/latex], which is observed at point P at time [latex]t_0[/latex] with frequency [latex]\nu_0[/latex].

 

with the above equation we get

 

[latex]1+z=\frac {R (t_0)}{R (t_1)}[/latex]

 

Please note we're still in comoving coordinates with a static background metric.

 

[latex]z=\frac {v}{c}[/latex] is only true if v is small compared to c.

 

From this we get the linear portion of Hubble's law:

 

[latex]v=cz=c\frac{(t_0-t_1)\dot{R}(t_1)}{R(t_1)}[/latex]

 

Now, the above correlation only holds true if v is small. When v is high we depart from the linear relation of Hubble's law.

 

We start hitting the concave curved portion.

 

The departure from the linear relation requires a Taylor series expansion of R(t) about the present epoch; for this we will also need [latex]H_0[/latex].

 

Note that the line element in the first equation does not use the cosmological constant, aka dark energy. The above worked prior to the cosmological constant.

 

Now for the departure from the linear portion of Hubble's law.

 

[latex]v=H_0d, v=cz[/latex] when v is small.

 

To this end we expand R (t) about the present epoch t_0.

 

[latex]R(t)=R(t_0)-(t_0-t)\dot{R}(t_0)+\frac{1}{2}(t_0-t)^2\ddot{R}(t_0)-...=R(t_0)[1-(t_0-t)H_0-\frac{1}{2}(t_0-t)^2q_0H_0^2+...][/latex]

 

with [latex]q_0=-\frac{\ddot{R}(t_0)R(t_0)}{\dot{R}^2(t_0)}[/latex]

 

q_0 is the deceleration parameter. Sometimes called the acceleration parameter.

 

Now, in the first circumstance, when v is small, a light ray follows

 

[latex]\int_{t_1}^{t_0}\frac{c\,dt}{R(t)}=\int_0^{r_1}dr=r_1[/latex]

 

with the use of this equation and the previous equation we get

 

 

[latex]r=\int_t^{t_0}\frac{c\,dt'}{R(t')}=cR^{-1}(t_0)[t_0-t+\frac{1}{2}(t_0-t)^2H_0+...][/latex]

 

here r is the coordinate radius of the galaxy under consideration.

 

Solving the above gives..

 

[latex]t_0-t=\frac{l}{c}-\frac{1}{2}H_0l^2/c^2+...[/latex]

 

which leads to the new redshift equation

 

[latex]z=\frac{H_0l}{c}+\frac{1}{2}(1+q_0)H_0^2l^2/c^2+O(H_0^3l^3)[/latex]

Here is the workup starting with the FLRW ds^2 line element.

 

The last equation is the corrected redshift formula when recessive velocity exceeds c.

 

The O above is big-O notation for the order of the neglected terms.

https://en.m.wikipedia.org/wiki/Order_of_approximation
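The final series can be evaluated numerically to see how the quadratic term bends the linear Hubble relation. A sketch with assumed parameter values (H0 = 70 km/s/Mpc; q0 = -0.55 for an accelerating universe):

```python
C = 299792.458   # speed of light, km/s
H0 = 70.0        # Hubble constant, km/s per Mpc (assumed)
Q0 = -0.55       # deceleration parameter (assumed)

def z_second_order(l_mpc):
    """z ~ (H0 l / c) + (1/2)(1 + q0)(H0 l / c)^2, dropping the O(H0^3 l^3) terms."""
    x = H0 * l_mpc / C
    return x + 0.5 * (1.0 + Q0) * x * x

# The quadratic correction is negligible nearby and grows with distance:
for l in (100.0, 1000.0, 3000.0):
    print(l, round(z_second_order(l), 4))
```

At 100 Mpc the quadratic term barely shifts z; by a few thousand Mpc the departure from the linear law is clearly visible, which is the "concave curved portion" mentioned above.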

Edited by Mordred

Evidence that volume expansion is occurring suggests that distance is not "flat" over time. Terrible word to use, but I can't think of another right now. What I mean by not "flat" is that the units of distance are stretched by expansion. Perhaps "uniform units" is a better phrase. The grids on the graph are morphed, not just the function that describes motion.


What evidence is there that c is universally (at any location in the universe), locally, historically (in all periods since inflation), momentarily (now) and in the future (either indefinitely or until the end of time) constant?


What evidence is there that time is "flat" or has "uniform units"? That is, the "gap" of 1 second today is the same as the "gap" of 1 second just before the end of time, or [math]10^{99999}[/math] years in the future, or the same as the "gap" between t=0 and t=1 seconds?


If it's possible that time is not "flat", doesn't that invalidate the use of indefinite integration when the boundless value is infinity or -infinity? So when we integrate some function of velocity to obtain a distance or displacement, that's fine when the limits of time are "local" (I dunno, say a few million years, over which time is probably "flat").


But when we integrate [math]\int^{t_z}_{t_0}[/math] we are assuming that the "flatness" of time extends uniformly all the way back to the very moment time started, including the period around [math]t=10^{-33}[/math] seconds, and including the crazy period just before that, when [math]0<t<10^{-33}[/math] seconds. So for example, when measuring the instantaneous distance to the particle horizon, the lower bound is t=0.


And when we integrate [math] \int^{\infty}_{t_z} [/math], we are assuming that the "flatness" of time will always be the same, right up to the point when time ends, say when the universe has big-crunched, or expanded into nothingness, or met some other fate. So, for example, when we measure the instantaneous distance to the event horizon, the upper bound is [math] t=\infty [/math].
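As a numeric sketch of this kind of horizon integral, assuming a toy matter-dominated scale factor a(t) = (t/t0)^(2/3) (an assumption for illustration, not anything asserted in the thread), the comoving distance c∫dt/a(t) stays finite even with the lower bound at t = 0:

```python
# Particle-horizon integral D = c * integral from 0 to t0 of dt / a(t)
# for an assumed matter-dominated toy model a(t) = (t/t0)**(2/3).
# Units chosen so c = t0 = 1; analytically D = 3*c*t0 for this a(t).

c = 1.0
t0 = 1.0

def a(t):
    """Toy matter-dominated scale factor (assumed for illustration)."""
    return (t / t0) ** (2.0 / 3.0)

# Midpoint rule: evaluating at cell midpoints avoids the (integrable)
# t**(-2/3) singularity of the integrand at t = 0.
n = 200_000
dt = t0 / n
D = c * sum(dt / a((i + 0.5) * dt) for i in range(n))
```

D approaches 3ct0 as n grows, showing that the integrand's divergence at t = 0 is integrable, so extending the lower bound to the start of time gives a finite answer in this toy model.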


However, even granting that integration is an approximation, is it not a dangerous assumption to take time as "flat"? Similarly, is differentiation with respect to time only accurate when time itself consists of uniform units (regardless of scale)?


In the same way as with calculus, are trigonometric functions only valid for flat "axes"? So whilst a static universe is perfectly flat, as soon as we differentiate with respect to time we introduce a potentially non-static function, which may invalidate our premise. Even if time were flat, and even if the geometry of the universe is almost perfectly flat, by the definition of space expansion is it fair to say volume is not flat over time?


I'm not sure how time dilation is involved in my problem, as this is nothing to do with relative velocity or gravitation. It gets very confusing conceptually if time is not "flat" over time. Whilst distance can be an instantaneous measurement independent of time, it's hard to picture the same with time. Is there evidence to suggest that everything in the universe that is co-moving and "co-gravitational-fielding" is also co-aging?


Over the distances and scales of space expansion, how can we ever know an object is instantaneously comoving with us, if what we can measure of the object is millions of years old? Is there not some uncertainty principle at work here? Just as the location and momentum of a particle are uncertain, can we say the same about the distance and age of anything far away, even if we know how the scale factor has changed over time?


Do we know why the scale factor is changing over time?

Edited by AbstractDreamer

Yes, you're correct: the term "flat" in cosmology is different from the flat relation in GR.

 

For expansion the following will help

 

This is for all contributors (photons, matter, radiation etc).

 

So first we replace [latex]\rho(t)[/latex] mass density with energy density in the form [latex]\epsilon(t)/c^2[/latex]

 

The form of the Friedmann equations used here is the Newtonian limit in GR; this is the low-gravity regime, such as stars, galaxies, LSS etc. It is a specific class of solution in GR.

 

This gives the form of

 

[latex](\frac{\dot{a}}{a})^2[/latex][latex]=\frac{8\pi G}{3}\frac{\epsilon(t)}{c^2}[/latex][latex]-\frac{kc^2}{R_0^2}\frac{1}{a^2(t)}[/latex]

 

If [latex]k\le0[/latex] and the energy density is positive, then the R.H.S of the last equation is always positive. This is an expanding universe that will expand forever.

If matter is the dominant form of energy, as opposed to radiation, this implies [latex]\epsilon\propto \frac{1}{a^3(t)}[/latex].

If k=+1 then the R.H.S must eventually reach 0, after which the universe will contract.
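A small sketch of the standard scaling of energy density with the scale factor for each contributor; the w values used are the standard equations of state (matter w = 0, radiation w = 1/3, cosmological constant w = -1), added here for illustration:

```python
# Standard scaling of energy density with the scale factor a:
#   eps(a) = eps0 * a**(-3*(1 + w))  for an equation of state p = w*eps.
# Matter dilutes as a^-3, radiation as a^-4 (extra redshift factor),
# and a cosmological constant stays constant (a^0).

def energy_density(eps0, a, w):
    """Energy density at scale factor a, given its value eps0 at a = 1."""
    return eps0 * a ** (-3.0 * (1.0 + w))

# At half the present scale factor (a = 0.5, an illustrative value):
matter    = energy_density(1.0, 0.5, 0.0)        # w = 0    -> scales as a^-3
radiation = energy_density(1.0, 0.5, 1.0 / 3.0)  # w = 1/3  -> scales as a^-4
lam       = energy_density(1.0, 0.5, -1.0)       # w = -1   -> constant
```

This also bears on the dark-energy question in the opening post: a w = -1 component keeps constant density as the volume grows, which is the sense in which "dark energy is created as space expands".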

 

To get to the density parameter we can substitute [latex]H(t)=\frac{\dot{a}}{a}[/latex], and we can rewrite the above equation in terms of the Hubble parameter. (Note I hate calling it a constant, as it is only constant at a particular moment in time.)

[latex]H^2(t)=\frac{8\pi G}{3}\frac{\epsilon(t)}{c^2}[/latex][latex]-\frac{kc^2}{R_0^2}\frac{1}{a^2(t)}[/latex] if k=0 then

 

[latex]\rho_c(t)=\frac{\epsilon_c(t)}{c^2}=\frac{3H^2(t)}{8\pi G}[/latex] with the following density parameter relation [latex]\Omega=\frac{\epsilon}{\epsilon_c}=\frac{\epsilon}{c^2}*\frac{8\pi G}{3H^2}[/latex]
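For the k = 0 case, the critical density above can be evaluated numerically. The H0 value here is an assumed illustrative input, not one quoted in this thread:

```python
# Critical mass density rho_c = 3*H^2 / (8*pi*G) for the flat (k = 0) case.
# H0 below is an assumed illustrative value.
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
H0_KM_S_MPC = 70.0      # Hubble constant, km/s/Mpc (assumed)
MPC_M = 3.0857e22       # metres per megaparsec

H0_SI = H0_KM_S_MPC * 1000.0 / MPC_M          # convert to s^-1
rho_c = 3.0 * H0_SI ** 2 / (8.0 * math.pi * G)  # kg m^-3, roughly 1e-26

def omega(rho):
    """Density parameter Omega = rho / rho_c."""
    return rho / rho_c
```

The result is of order 10^-26 kg/m^3, a handful of hydrogen atoms per cubic metre, which is the dividing line between the k cases discussed above.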

 

The above details how the acceleration/deceleration works with the Hubble parameter.

 

Do we know why the scale factor is changing over time?

In essence, a combination of gravity and thermodynamics. If a particle contributor's self-energy exceeds its self-gravity, we get expansion, loosely put. There really isn't one cause that we can just point to and say it's due to pressure or temperature etc.

 

Its a combination of gravity, density, pressure and temperature plus particle degrees of freedom. These all contribute in an elaborate juggling act.

 

An oversimplification is potential energy (gravity) vs kinetic energy (particle momentum); however, this isn't the full story, nor is it particularly accurate in all cases. One example: matter collapse into LSS assists expansion, as the global mass density decreases due to the collapse.

 

This is also why one must use the scale factor when calculating the cosmological redshift, as opposed to the object's velocity as per gravitational redshift.

 

The volume change causes the latter method to become inaccurate at higher z values.

 

PS: flat can mean many things. It simply describes the geometric shape of a specified relation, and the relation or calculation used to determine flatness is different in cosmology than in GR.

 

PS: you can tell you're actually studying the material we presented to you in other threads +1

Edited by Mordred

Is that a typo on line 14 ish? You wrote [math] e_c(t)/c^2 [/math]; did you mean [math] \epsilon_c(t)/c^2 [/math]?

 

Is this the critical energy density as a function of time?

 

With a decreasing scale factor wrt time (or decelerating expansion), more and more distant objects would appear, as our particle horizon overtakes the photons coming towards us. But H would need to be really, really small (though still positive), considering that these newly appearing objects must initially be outside our "observable" universe; that is, there's so much distance for expansion to work over, so expansion must be really small in relation to that distance and the time taken for the photon to reach us.

 

With a contracting universe, would the night sky get brighter and brighter, as photons from objects outside our particle horizon catch up with each other, in effect increasing the intensity as observed here on Earth?

Edited by AbstractDreamer

Yes, it is a typo; it should have epsilon for energy density, as it evolves with cosmological time, also referred to as comoving time.

 

"The comoving time coordinate is the elapsed time since the Big Bang according to a clock of a comoving observer and is a measure of cosmological time. The comoving spatial coordinates tell where an event occurs while cosmological time tells when an event occurs. Together, they form a complete coordinate system, giving both the location and time of an event."

 

https://en.m.wikipedia.org/wiki/Comoving_distance

 

The comoving observer is also often referred to as the "fundamental observer".

Edited by Mordred

https://arxiv.org/pdf/gr-qc/0506079v2.pdf

 

Mordred,

I could swear I found that link above from something you posted in this thread, but it's gone now. But that paper really answered a lot of my questions!

 

The section on concluding remarks really makes clear the assumptions that I had problems with.

 

 


"It is worth mentioning the physical assumptions behind the above mathematical formalism. First we have the principle of cosmological relativity according to which the laws of physics are the same at all cosmic times. This is an extension of Einstein’s principle of relativity according to which the laws of physics are the same in all coordinate systems moving with constant velocities."

 

"In cosmology the concept of time (t = x/v) replaces that of velocity (v = x/t) in ordinary special relativity. Second, we have the principle that the Big Bang time τ is always constant with the same numerical value, no matter at what cosmic time it is measured. This is obviously comparable to the assumption of the constancy of the speed of light c in special relativity."

 

"Velocity in expanding Universe is not absolute just as time is not absolute in special relativity. Velocity now depends on at what cosmic time an object (or a person) is located; the more backward in time, the slower velocity progresses, the more distances contract, and the heavier the object becomes. In the limit that the cosmic time of a massive object approaches zero, velocities and distances contract to nothing, and the object’s energy becomes in- finite. "

