
Hubble constant debate


es111

Recommended Posts

The measured value of the Hubble constant depends on the observations: the Planck mission gives a value near 68 km/s/Mpc, while Riess (HST) gives a value of 73 km/s/Mpc. Does the standard model require a constant value for all epochs?

 

Link to comment
Share on other sites

No. The Hubble constant is only constant across all spatial locations at a given time; it changes over time.

 

However, the differences in the values above come from uncertainties in measurement. As we collect more data we reduce the uncertainties.


Thanks to Mordred for his answer. But given such a difference between the two values, and Riess's confidence in his result plus what he replied in the July 2016 Scientific American (p. 5, Riess and Livio reply), something else cannot be excluded (they wrote: potential errors or some new physics). What I do not fully understand is the epoch of the expansion rate: does the Planck mission give the value of the expansion for the present time, or for the time of the decoupling of light and matter?


Thanks, I'll have to study it in detail and see where he is getting the differences.

 

I should have time later this evening. Without having read it yet, the details should be in the critical-density calculations, with the densities of matter, radiation, and the cosmological constant.

 

In essence he may have derived a different curvature value.

Edited by Mordred

I should have opened the article first, lol. The details in this 65-page paper will take me a bit to examine.

 

At first read I thoroughly enjoyed it. I plan on adding it to my collection. Thanks for sharing this.

Edited by Mordred

so this paper concludes [math]H_0 = 73.24 \pm 1.74\,\mathrm{km\,s^{-1}\,Mpc^{-1}}[/math]

has 99.9% more confidence than [math]H_0 = 66.93 \pm 0.62\,\mathrm{km\,s^{-1}\,Mpc^{-1}}[/math]

 

Is that right?

 

That's quite a huge swing. Are they only factoring measurement uncertainties, and not some modelling assumptions, into the confidence?

 

Does a larger [math] H_0 [/math] value set us further to the right on the red line in this graph?

 

[image: expansion history of the universe graph, from the NASA/WMAP page linked below]

 

https://map.gsfc.nasa.gov/universe/bb_concepts_exp.html

Edited by AbstractDreamer

so this paper concludes [math]H_0 = 73.24 \pm 1.74\,\mathrm{km\,s^{-1}\,Mpc^{-1}}[/math]

has 99.9% more confidence than [math]H_0 = 66.93 \pm 0.62\,\mathrm{km\,s^{-1}\,Mpc^{-1}}[/math]

Is that right?
No. How do you get 99.9%? The uncertainties differ by about a factor of three: the second value has an uncertainty of less than 1%, the first about 2.4%. The second number is expressing a higher confidence in its result.

 

If the uncertainties are at 1 sigma, then the intervals overlap at just under 3 sigma from each value, i.e. it's a small overlap, but neither one strictly excludes the other. And that assumes that neither result has a bias that would shift the answer.
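A quick way to see the size of this overlap is to compute the combined significance of the difference between the two results. This is a minimal sketch: the values come from the posts above, and the uncertainties are assumed to be Gaussian and independent.

```python
import math

# H0 values and 1-sigma uncertainties quoted in this thread, in km/s/Mpc
h0_local, sigma_local = 73.24, 1.74   # distance-ladder (Riess et al.)
h0_cmb, sigma_cmb = 66.93, 0.62      # CMB-based (Planck)

# Fractional uncertainties: ~2.4% vs less than 1%
frac_local = sigma_local / h0_local
frac_cmb = sigma_cmb / h0_cmb

# Combined significance of the difference, in sigma
tension = (h0_local - h0_cmb) / math.sqrt(sigma_local**2 + sigma_cmb**2)
print(f"fractional uncertainties: {frac_local:.1%} vs {frac_cmb:.1%}")
print(f"tension = {tension:.1f} sigma")  # roughly 3.4 sigma
```

This reproduces the roughly 3-sigma-level tension discussed above, assuming no systematic bias in either measurement.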


After taking some time to read the Riess article: his paper is essentially trying to factor in the statistical uncertainties at the various distance-ladder stages.

 

In this sense he is factoring in datasets and fine-tuning the various statistical error sources, accounting for them with logarithms (magnitudes) and additional statistical degrees of freedom.

 

So yes, I can see some contention over the end results.


Well, smaller than that really. To give an example:

 

I'll just compare WMAP 2013 to Planck 2013. If you look at the value each dataset gives for the age of the universe, it is easiest to see how minuscule the difference is on a smaller volume, i.e. a parsec.

 

WMAP: H_0 = 69.8 km/s/Mpc, age of universe 13.752 Gyr; at redshift 1089, proper distance 45.731888 Gly.

Planck: H_0 = 67.9 km/s/Mpc, age of universe 13.787 Gyr; at redshift 1089, proper distance 45.331596 Gly.

 

You can see from the above that, on the scale of the observable universe's proper distance, an approximate difference of 2 km/s/Mpc is really a small deviation; the same applies to the age-of-the-universe calculation.
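The two ages above can be roughly reproduced with the closed-form age of a flat matter-plus-Lambda universe. This is only a sketch: the density parameters below are assumed values in the right ballpark for each dataset, not numbers quoted in this thread, and radiation is neglected.

```python
import math

# 1 km/s/Mpc expressed in Gyr^-1 (1 Mpc ~ 3.0857e19 km, 1 Gyr ~ 3.156e16 s)
KM_S_MPC_TO_PER_GYR = 1.0 / 977.79

def age_gyr(h0, omega_m, omega_l):
    """Age of a flat matter + Lambda universe in Gyr (radiation neglected).

    Closed form: t0 = 2 / (3 H0 sqrt(O_L)) * asinh(sqrt(O_L / O_m))
    """
    h0_gyr = h0 * KM_S_MPC_TO_PER_GYR
    return (2.0 / (3.0 * h0_gyr * math.sqrt(omega_l))) \
        * math.asinh(math.sqrt(omega_l / omega_m))

# Assumed density parameters, roughly appropriate for each release
planck_age = age_gyr(67.9, 0.306, 0.694)  # ~13.8 Gyr
wmap_age = age_gyr(69.8, 0.282, 0.718)    # ~13.7 Gyr
print(f"Planck-like: {planck_age:.2f} Gyr")
print(f"WMAP-like:   {wmap_age:.2f} Gyr")
```

Both come out within a few hundredths of a Gyr of the thread's figures, illustrating how small the effect of the ~2 km/s/Mpc difference is on the inferred age.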

 

In the Riess paper he has only examined the local Hubble parameter, so with the method he is using he would need to include further statistical error sources, such as magnitude-luminosity relations beyond z = 5.0, as well as the Hubble horizon and the changeover in expansion rate from matter-dominant to Lambda-dominant at roughly universe age 7 Gyr.

Edited by Mordred

So the difference between the WMAP and Planck descriptions is a universe with:

 

a 1.9 km/s/Mpc faster expansion rate, for a 35-million-year-younger universe, with a 122.789 Mpc wider particle horizon.

 

Can we simply say 18.4 million years per 1 km/s/Mpc, for the period around today, would be within the error range of both results?

 

If so, can we say that over 1 year there might be a [math]5\times 10^{-8}\,\mathrm{km\,s^{-1}\,Mpc^{-1}}[/math] increase in H_0?

 

If so, how much more accurate do our measurements need to be before yearly measurements become statistically significantly different?
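The arithmetic in these questions can be checked in a few lines. Note this treats the WMAP/Planck gap as if it were a time evolution of H_0, which it is not (the two are measurements of the same epoch), so this is purely a numerical exercise with the thread's numbers.

```python
# Differences between the WMAP and Planck figures quoted above
delta_h0 = 69.8 - 67.9                # km/s/Mpc
delta_age_myr = (13.787 - 13.752) * 1e3  # 35 Myr difference in inferred age

# Millions of years of age difference per 1 km/s/Mpc of H0 difference
myr_per_unit = delta_age_myr / delta_h0

# Implied change per single year, in km/s/Mpc, under the same (naive) scaling
per_year = delta_h0 / (delta_age_myr * 1e6)

print(f"{myr_per_unit:.1f} Myr per 1 km/s/Mpc")  # ~18.4
print(f"{per_year:.1e} km/s/Mpc per year")       # ~5.4e-08
```

So the figures of 18.4 Myr per km/s/Mpc and roughly 5 x 10^-8 km/s/Mpc per year do follow from the quoted numbers, with the caveat above.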

Edited by AbstractDreamer

It really depends on what changes between datasets. The Hubble constant isn't very telling, as it's based on its value today. It's not constant over time, but it is constant everywhere at a particular time.

 

There is one formula that is far more telling:

 

 

[latex]H_z=H_o\sqrt{\Omega_m(1+z)^3+\Omega_{rad}(1+z)^4+\Omega_{\Lambda}}[/latex]

 

This will give you the value of the Hubble parameter at a particular redshift.
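As a sketch, the formula above can be evaluated directly. The density parameters below are assumed Planck-like values, not numbers taken from this thread.

```python
import math

def hubble_at_z(z, h0=67.9, omega_m=0.306, omega_rad=9e-5, omega_lambda=0.694):
    """H(z) in km/s/Mpc for a flat FLRW universe.

    H(z) = H0 * sqrt(O_m (1+z)^3 + O_rad (1+z)^4 + O_Lambda)
    """
    return h0 * math.sqrt(omega_m * (1 + z)**3
                          + omega_rad * (1 + z)**4
                          + omega_lambda)

print(f"H(0)    = {hubble_at_z(0):.1f} km/s/Mpc")    # recovers H0
print(f"H(1)    = {hubble_at_z(1):.1f} km/s/Mpc")
print(f"H(1089) = {hubble_at_z(1089):.3e} km/s/Mpc") # around recombination
```

Note how the matter term dominates at moderate redshift and the radiation term only matters at very high z, which is why it is often dropped for late-time calculations.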

 

The thing about using Hubble's constant to calculate the age of the universe, for example, is that you still need to apply the curvature term to get the correct energy density. The evolution of the mass/energy density, however, is not uniform over time.

 

So at some point we draw reasonable approximations. A convenient tool is to map the evolution of the scale factor, which is nonlinear, but not too badly so.

 

How the expansion rate changes depends on which density component dominates, at the ratios given by the equation above.

For example, the radiation-dominant era had a fast expansion rate; in the matter-dominant era the expansion rate was slowing down.

 

For our current Lambda-dominant universe the expansion rate is still slowing, but the rate of decrease is less than in the matter-dominant era. Even though we see accelerating separation distances, the rate per Mpc is gradually decreasing.
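This behaviour can be illustrated numerically: H (the rate per Mpc) keeps falling toward the future, while the recession speed of a fixed comoving distance, which is proportional to a * H, keeps growing. A sketch with assumed Planck-like parameters, radiation neglected:

```python
import math

def hubble(a, h0=67.9, om=0.306, ol=0.694):
    """H in km/s/Mpc as a function of scale factor a = 1/(1+z), flat universe."""
    return h0 * math.sqrt(om / a**3 + ol)

# a < 1 is the past, a = 1 is today, a > 1 is the future.
# H falls toward the de Sitter floor h0*sqrt(ol), while a*H (proportional
# to the recession speed of a fixed comoving separation) keeps increasing.
for a in (0.5, 1.0, 2.0, 4.0):
    print(f"a = {a}: H = {hubble(a):6.1f} km/s/Mpc, a*H = {a * hubble(a):6.1f}")
```

So "accelerating expansion" and "decreasing Hubble rate" are both true at once, which is exactly the point being made above.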

 

What the Riess paper is essentially doing is questioning those reasonable approximations and trying to find a better means to eliminate them.

Edited by Mordred

Cosmology is a huge subject to learn; it involves essentially every aspect of physics. Thermodynamics is a major portion: the FLRW metric essentially allows us to model the thermodynamics under GR as a reasonable approximation.

Edited by Mordred
