Everything posted by Martin

  1. I agree with Lucaspa on MB's speculations here. They don't make sense and aren't really science. MB, you cannot possibly have read what you say you read in any legitimate science source. Not even a respectable science journalist would say what you claim to have read. So it is just plain fantasy, or delusion. One reason is that the Planck temperature unit is around 1.4 × 10^32 kelvin. That is, we are already within one Planck degree of absolute zero and matter has not disappeared. And there is no reason for matter to disappear no matter how close we get: a microkelvin, a nanokelvin. Cold is not going to make matter disappear. To make speculations like yours, I suspect one would need to be unusually credulous, or imaginative to the point of inventing fantasy. Also you claim to have read that when the universe gets to within a Planck degree of zero (which it certainly already is) the universe will "shrink and implode". Where did you read such an assertion? There have been big crunch collapse scenarios discussed, but they are all accompanied by very high temperature. What you are offering us sounds completely "made up". If you want to keep this a legitimate speculation/pseudoscience thread, then you should give your source link. Where on the web does it say these things? If anyone is interested, the NIST (National Institute of Standards and Technology) defines the Planck unit of temperature; also see: http://en.wikipedia.org/wiki/Planck_temperature http://physics.nist.gov/cgi-bin/cuu/Value?plktmp|search_for=universal_in!
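For anyone who wants to check that number, here is a minimal sketch (Python, standard library only; the constants are rounded CODATA-style values) of how the Planck temperature falls out of hbar, c, G and Boltzmann's constant:

import math

hbar = 1.0546e-34   # J s
c    = 2.998e8      # m/s
G    = 6.674e-11    # m^3 / (kg s^2)
kB   = 1.381e-23    # J/K

# Planck temperature: T_P = sqrt(hbar c^5 / G) / k_B
T_P = math.sqrt(hbar * c**5 / G) / kB
print(T_P)          # ~1.42e32 kelvin, the figure quoted above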
  2. In November Hawking accepted a visiting faculty appointment at Canada's Perimeter Institute. http://www.perimeterinstitute.ca/ http://www.perimeterinstitute.ca/News/In_The_Media/Stephen_Hawking_to_Regularly_Visit_Perimeter_Institute_as_Distinguished_Research_Chair/ If he does pull through, then he will apparently be spending part of his time at PI, in Waterloo Ontario, on a regular basis. And presumably also part time back in Cambridge, although he has retired from his professorship there.
  3. Well that is a respectable question, let's see if I can give it an adequate answer. It may take several days and several installments and/or help from others like Swansont etc, but we'll see. definitely! Fred is an idealization (perfectly uniform matter distribution). Let's set non-uniformity effects aside for now.

AT REST means the temperature is the same in all directions, except for the 1/1000 of a percent fluctuations that speckle the map. As we actually measure the CMB there is a huge hotspot due to solar system motion, which has to be taken out of the data. It is called the dipole. Let's stop and get to know it. Solar speed relative to the CMB is 370 km/s and lightspeed is 300,000 km/s, so solar speed as a fraction of lightspeed is 370/300,000 ≈ 0.0012. That is roughly a tenth of a percent. So the sky temp is about a tenth of a percent hotter than average in the constellation Leo (the direction we are going). And if you adjust for that and take it out, then the rest of the fluctuation is a thousandth of a percent. There's also a coldspot of the same magnitude in the opposite direction, for the same Doppler reason. That's why it's called the dipole---there are two poles to the Doppler distortion of the temperature map.

The individual motions of other systems relative to the CMB have been calculated: the Milky Way galaxy as a whole, Andromeda as a whole, some other galaxies and clusters of galaxies, their collective motion relative to the CMB. We see it is typically a few hundred clicks---just like us, a few hundred km/s. These are random individual motions on top of, in addition to, the expansion business. Everybody is assumed to have some random individual motion and to be able to adjust for it.

So when we go on and talk about observers a billion LY apart reading the same universe time by looking at the sky, we are going to ignore the dipole Doppler effect and assume they all know how to compensate for it. Or simply assume they are all sitting still---at rest relative to Background. That amounts to the same thing. And as I may have said earlier, we will also ignore the fact that observers can be at different depths down their local gravity-potential pits---being deep down a well makes a clock run slower. But those differences are small in most cases and can be neglected. The real world is not as uniform as the Freddymodel, but nearly.

Did you see the photograph of Friedman they have in Wikipedia? Does that look like a guy who once held the high-altitude ascent world record? Or more like an extreme egg-headed schoolteacher? My hero!

========================

Now the serious part. We have these observers all over the universe and each one is constantly measuring the average temp of the CMB sky. None of them are moving and they are all at about the same level in gravitational potential. Each measurement is an EVENT. We want to say when two events occur at the same universal time. Being able to say. Having a criterion for two events being synchronous in universal time---that's what assures that universal time (UT) is well-defined. Assume by some miracle or previously arranged conspiracy they all use kelvin! Units are a pain in the neck, so assume they all use kelvin. Now comes the rabbit! The mathematical wand is waved over the mathematical hat. Actually there's nothing to it. It's trivial. You know the event called "recombination" that happened simultaneously all over the universe when it finally got cool enough for the gas to be effectively transparent, so the CMB light got loose. The background dates from that event.
Two people who measure the same CMB redshift are at the same time in the Friedman universe. You asked for small steps. One small step is that everybody uses the same Ned Wright calculator. In that calculator there is nothing special about the earth, or sun, or Milky Way galaxy. You just put in the parameters 0.73, 0.27, 71 that anyone anywhere can measure. You have used that calculator and you know that if you put in the CMB redshift it will tell you the time that has elapsed since recombination. Everybody in the universe can do this. (Except for using different time units, they will be able to agree on a single consistent timescale.)

There is a consistency check which I have done---it's kind of fun---and you can do it if you want. I can talk you through it. It involves using both the Morgan and Wright calculators. The consistency check is like this: what if we had measured the CMB redshift one billion years ago? We would have measured different values instead of 0.73, 0.27, 71, 1100. So we would have put different numbers into the calculator. We would have measured a different CMB redshift (not as big as 1100). I can show you how to figure out those things and do the calculation. What you get is that the calculator realizes what's happening, and it tells you that the time since recombination has been 12.7 billion years instead of 13.7 billion years!

Probably there are a lot of neater, more efficient ways to explain this. I'm taking an operational, concrete approach. Instead of writing equations, I am imagining that we give all the observers a calculator and let them empirically determine four numbers (darkenergy fraction, matter fraction, hubblerate, and background redshift). Oh, the CMB redshift is essentially just 3000 kelvin divided by whatever temperature you measure. For us, since we measure 2.728, the 1+z is about 3000/2.728. The 3000 kelvin is the temperature cool enough for the medium to become effectively transparent. That also is the same for everybody everywhere, except for whatever different units they might use. I have to go. Late for a matinee of Magic Flute! This could be better but hopefully it will be of some use for the time being.
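To make the bookkeeping in this post concrete, here is a minimal sketch (Python; the 3000 kelvin transparency temperature and the 370 km/s dipole speed are the round figures quoted above) of the two little calculations involved:

# Dipole: our motion relative to the CMB shows up as a Doppler hot/cold spot
v_sun = 370.0        # km/s, solar system speed relative to the CMB
c     = 300000.0     # km/s
print(v_sun / c)     # ~0.0012, i.e. the sky is ~0.1 percent hotter toward Leo

# Universal time tag: any observer converts a measured sky temperature
# into the CMB redshift via 1 + z = T_recombination / T_measured
T_rec = 3000.0       # kelvin, rough transparency temperature
T_now = 2.728        # kelvin, what we measure today
print(T_rec / T_now - 1)   # z of about 1100, which a Friedman calculator turns into elapsed time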
  4. OK, first we acknowledge that the Friedman model is an approximation. It assumes matter is uniformly distributed, which is only approximately true. But it gives a very good approximation at large scale. It is the practical working model that people use. Next we have to distinguish between the real observational world and the mathematical model. It is clear what time is in the model because it is there explicitly in all the formulas. The Freddy model comes with coordinates that split space and time. And it tells you what the present ("now") is. And that now is defined universe-wide. You just have to look at the equations, and the formula for the metric (which is time-varying), and you will see. The metric has a spatial part and a time part. You will see that if e.g. you look at Wiki "friedman equations"; if that doesn't work, then double the N and say "friedmann equations". The "universe time" that the model comes with is optional; you are free to screw around and reparametrize and mix the coordinates up---as always in GR. But it comes with a simple neat coordinate system. What is more interesting is to discuss the pragmatic observability of Friedman time, sometimes called universe time. Remember the Fr. universe is an approximation, so we can only hope to observe the approximate time. But there has to be a recipe by which people all over the universe, if they share the same Ned Wright calculator, can observe the same time. Well, the answer is easy actually. All over the universe the people can just hold up a thermometer and take the temperature of the CMB sky, and if you and I happened to measure the same temp, then we were part of the same "now" at that moment. Those two EVENTS were synchro---the events of taking the temp, even if we were billions of lightyears apart. And the Ned Wright calculator can give you the correspondence between CMB temperature and time units (whatever time unit you use, your planet's years, or planck time units, or whatever). It is pragmatically simple, all except for the unit conversions. We would have to have communicated previously and decided on common units, like kelvin, or if you prefer the Planck temperature unit. So universe time has both a math-model aspect and a practical operational aspect. I'll stop so you can ask whatever questions. If any.
  5. Modeling has gone thru many versions. The earliest model was 1922, a simple differential equation model constructed by Alex Friedman, published in the Zeitschrift für Physik of that year. http://edocs.ub.uni-frankfurt.de/volltexte/2008/9863/pdf/E001554876.pdf http://en.wikipedia.org/wiki/Friedmann_equations By plugging in today's measurements, this model lets you calculate the approximate history back nearly to when expansion started. It is still the main model that cosmologists use, and a lot of effort is currently focused on improving Friedman's model so that it will go back further in time. Probably the leading up-to-date models are the computer models being run by Ashtekar's group at Penn State. They can reproduce the good fit of the Friedman model to current observation data. And they also don't break down near the start of expansion. They go back further. A lot of effort is now being devoted at various places around the world to finding ways to test the Ashtekar group's model observationally (to distinguish it from the older Friedman-type cosmology). There is no popularization of the Penn State cosmo model, but if you aren't put off by technical writing you can scan over this list http://www.slac.stanford.edu/spires/find/hep/www?rawcmd=FIND+DK+QUANTUM+COSMOLOGY+AND+DATE+%3E+2006&FORMAT=www&SEQUENCE=citecount%28d%29 This list has a cutoff date of 2006. If you change the cutoff date you can get earlier papers. If you use the same keyword and look for papers from before, say, 1999, you will find stuff by Stephen Hawking, James Hartle, Alex Vilenkin, Andrei Linde, string theorists etc etc. They used to dominate the field. Now a different group of authors has taken precedence. I'm only interested in the recent research, so I use the "date > 2006" cutoff. The list is ranked by citation count (a rough measure of importance in the research community). ========== So the answer to your question is YES, it can very definitely be modeled:D and unless you have a historical interest and want to go back to the 1922 roots, you should check out the post-2006 research literature. =========== The standard Friedman cosmo model is implemented in a hands-on version in the Ned Wright calculator. Google "wright calculator". Play around with it. Put in various redshifts and see what you get. The calculator is programmed to run the Friedman differential equations and calculate the universe for you, in its broad outlines. There is another online calculator that does the same thing, called "cosmos calculator". An SFN member called NowThat recently reported trying it out and getting some numbers. It has some nice features that "wright calculator" doesn't. If you want to try it, google "cosmos calculator".
  6. No. At least not according to Einstein's interpretation of Gen Rel. He explicitly said that points in the continuum have no objective physical reality. I'll get the quote. It dates from late 1915, the time of his paper on the perihelion of Mercury (one of the first GR papers). Some Einstein quotes: “Dadurch verlieren Zeit & Raum den letzten Rest von physikalischer Realität...” (“Thereby time and space lose the last vestige of physical reality.”) (Possible paraphrase: space does not have physical existence, but is more like a bunch of relationships between events.) In case anyone wants an online source, see page 43 of this pdf at a University of Minnesota website http://www.tc.umn.edu/~janss011/pdf%20files/Besso-memo.pdf ==quote from the source material== ...In the introduction of the paper on the perihelion motion presented on 18 November 1915, Einstein wrote about the assumption of general covariance “by which time and space are robbed of the last trace of objective reality” (“durch welche Zeit und Raum der letzten Spur objektiver Realität beraubt werden,” Einstein 1915b, 831). In a letter to Schlick, he again wrote about general covariance that “thereby time and space lose the last vestige of physical reality” (“Dadurch verlieren Zeit & Raum den letzten Rest von physikalischer Realität.” Einstein to Moritz Schlick, 14 December 1915 [CPAE 8, Doc. 165]). ==endquote== Both quotes are from Nov-Dec 1915, one from a paper on perihelion motion and the other from a letter to Moritz Schlick a few weeks later. ================== Events have physical reality. Like when I strike the table with my fist, or press the return key on the keyboard with my little finger. Atoms of one substance collide with another substance, marking an event. Space is more like the geometric relations between events. Spacetime is a mathematical construct representing information, namely geometric/historical relations among events. (The name "relativity" can remind you that it is about relations, if you need reminding.) That's what I hear Einstein saying in these quotes. Space is not a substance. It does not have physical existence. There are material events and they are related by a (certain classical or uncertain quantum) geometry. That geometry is called the gravitational field, and it's not a fixed static affair; it's responsive like other fields. There are unanswered questions here; we humans are still working hard to figure out the underlying reality out of which space, time and matter emerge. You might try reading Chapter 1 of a new book that just came out. It's completely unmathematical, but it's not dumbed-down, so don't expect an easy read. http://arxiv.org/pdf/gr-qc/0604045v2 The piece is called "Unfinished Revolution" and it is the lead chapter in a book called Quantum Gravity: Towards a New Understanding of Space Time and Matter. Amazon lets you look inside the book, to sample it: http://www.amazon.co.uk/Approaches-Quantum-Gravity-Toward-Understanding/dp/0521860458/ref=sr_1_1?ie=UTF8&s=books&qid=1240071183&sr=1-1 The publisher, Cambridge University Press, also lets you browse the book online a bit http://www.cambridge.org/uk/catalogue/catalogue.asp?isbn=9780521860451 I'm not recommending the whole book! I'm suggesting you read Chapter 1, which has the gist of it in brief and indicates where we are on the main issues of the book as a whole.
  7. I believe that is a fair statement. And it is an interesting one, since a lot of nonspecialists have not realized this yet. One way to gauge the mainstream is to look at the main professional society, the GRG (the International Society on General Relativity and Gravitation). They have triennial meetings where they cover gravitational waves, numerical GR, black holes, big bang, tests of GR, etc.---the whole works, everything related to gravitation and cosmology. In 2007 the membership elected Abhay Ashtekar president of GRG, so if you want a thumbnail sketch of the mainstream, he is Mister General Rel Gravitation and Cosmology. Well, in the last 3 years he has been doing mostly research in quantum cosmology, and he is kind of the spokesman for that field---the survey papers, the invited overview talks, etc. And if you want a snapshot of quantum cosmology since 2006, say, then look at the authors, titles and citation counts of the top 20 or 30 papers that have appeared since 2006 (i.e. 2007 or later). Check the abstracts if you want more detailed information: http://www.slac.stanford.edu/spires/find/hep/www?rawcmd=FIND+DK+QUANTUM+COSMOLOGY+AND+DATE+%3E+2006&FORMAT=www&SEQUENCE=citecount%28d%29 This list has over 200 papers, but the most representative would be the first 20 or 30 (the most highly cited). If you look over the list you will see that a lot of the most highly cited papers are by Ashtekar. And a large number of papers are about models of the big bang and of black holes which have no singularity. There is no scientific evidence that a singularity actually exists or existed in nature. In the BB case a singularity would mean that time stops at the BB! No time evolution before that instant. There is no empirical evidence of that. So a considerable amount of contemporary research is aimed at discovering testable models that extend back prior to the BB. That is essentially what Ashtekar (who is Mister Mainstream in this context) is focusing on. So I think you are basically right. Anybody who wants a more detailed impression can look at a few of the top ten articles on the Stanford database listing I just gave. The last GRG meeting had over 400 participants; it was in Sydney, Australia. The upcoming meeting will be next year. Some of the program is already available on the website. http://www.gr19.com/index.php
  8. Great! Glad to see someone getting acquainted with the Morgan calculator. It should agree with the Ned Wright calculator at least to the first 2 or 3 decimal places. You may have noticed that the Hubble rate, which today is 71, is on the order of a million back at z=1000---around when the CMB light started out. And at z=6 the Hubble rate is about tenfold bigger than it is today. In GR you are very free to choose whatever coordinates you like, and there is no natural time coordinate that stands out as an obvious choice. The GR equation has many solutions. One particular solution or set of solutions is the Friedman model. Somewhat like you have a general theory of triangles where certain things are shown to hold, like 180 degrees and the Sine Law and the Area formula (but there is no Pythagoras relation). And then you derive a particular specialized theory of RIGHT triangles from that by narrowing down to where one angle has to be 90 degrees. And you discover some new things are true. You wouldn't want to say Oh no! The theory of right triangles is NOT COMPATIBLE with the general theory of triangles because the specialized theory has Pythagoras and the general theory doesn't! That would be silly. In the same way it is silly to think that the Friedman model is incompatible with GR. The Friedman model has some extra features, like a natural choice of time coordinate, that come from the extra assumptions that went into it. And BTW the choice of Universe Time is optional. It just stands out as a natural choice, but if you want to take the trouble you can change to different coordinates. You can always reparametrize. It just makes extra work though. The Wiki article on Friedman equations says briefly how the two Friedman equations are derived from the main equation of GR. It sounds technical but you might want to check it out anyway. I don't quite understand what you are asking, so feel free to rephrase the question. Spell it out in simple language. I'm saying that one is a simplification derived from the other by some simplifying assumptions---which incidentally buys you a few extra goodies, like a natural choice for the time coordinate. So the two are consistent. ================ If you look at the Wiki article "Friedman equations" you will see that the Hubble rate H(t) is equal to a'(t)/a(t). And you can probably see how the two equations determine the evolution of a(t), the scale factor, because they tell you its rate of change a' and its second derivative a''. They completely determine the history of expansion, the history of a(t). And calculators like Morgan essentially embody the Friedman equations in a form you can play around with. That is, they embody a simple version of the Einstein equation of GR (with uniformity assumptions). And so a calculator like Morgan's can tell you the Hubble rate H(t) for any past time, as indeed it does for any redshift z you put in. Because H is simply a'/a, and the calculator is computing a(t) and a'(t), it is very little extra trouble for it to output H(t).
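Here is a minimal sketch (Python, standard library only; parameters 0.27, 0.73 and 71 as above, radiation ignored, so very early times come out only roughly) of what such a calculator is doing with the Friedman equations:

import math

H0 = 71.0 * 3.156e7 / 3.0857e19   # 71 km/s/Mpc converted to 1/year
Om, OL = 0.27, 0.73               # matter and dark-energy fractions today

def H(a):
    # Friedman equation for a flat universe with matter + cosmological constant
    return H0 * math.sqrt(Om / a**3 + OL)

def age_at_redshift(z, steps=200000):
    # t = integral of da / (a * H(a)) from a = 0 to a = 1/(1+z), midpoint rule
    a_end = 1.0 / (1.0 + z)
    da = a_end / steps
    return sum(da / ((i + 0.5) * da * H((i + 0.5) * da)) for i in range(steps))

print(age_at_redshift(0) / 1e9)   # ~13.7 billion years since the start of expansion
print(H(1.0 / 7.0) / H0)          # ~9.7: at z=6 the Hubble rate is about tenfold today's
print(H(1.0 / 1101.0) / H0)       # ~1.9e4 times today's: over a million km/s/Mpc at z~1100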
  9. They are very compatible! The Friedman model was derived rigorously from GR. He just added two assumptions that cosmologists normally make---uniformity assumptions: homogeneity and isotropy. GR is the general theory and it can have a variety of solutions. Friedman identified a family of solutions especially relevant to cosmology. Within the context of those solutions, based on the simplifying assumptions which Friedman made, you get these nice features like a universe time. ================= I guess you could say the Friedman model is the application of GR to cosmology---to a homogeneous and isotropic universe. Wikipedia spells Friedman with two N letters, Friedmann (like a German speaker would), but he was Russian and spelled it with one Cyrillic N. For Liberal Arts you might look up Alexander Friedman in Wiki, and also the Friedman equations. Don't let it bug you if you don't understand everything. It's good to get a little exposure. In a simplified uniform universe all that can change is the SIZE, or scalefactor. Raw GR is very complicated and you can get all kinds of weird-shaped solutions depending on what you give it. But this is a radical simplification where all you need to worry about is the average density of matter/energy and the scalefactor (and the cosmological constant, or you can lump that in with energy). ===================== Have you tried Morgan's calculator? You should. Google "cosmos calculator". Set it up by putting in .27, .73, and 71 as the present matter and DE densities and the Hubble rate. Try it out: for z = 6 or z = 1000 it will tell you the past Hubble rate and stuff---distance increase rates, distances of course. I'm interested whether that works out for you. If you get anything out of trying it, maybe give me some feedback? thx.
  10. I can only give you my limited take (trying to keep at least one foot in the mainstream, although this is an edgy topic). I don't think anyone says "spacetime is expanding". The visual metaphor that some scientists like to use is that space is expanding. That is not a mathematical statement, so it's fundamentally inaccurate or meaningless, but it is often said as an attempt to give people an intuitive feel for the Hubble Law (a pattern of expansion of largescale distances between points at rest w.r.t. the CMB or other criterion of rest). It also is said to help give intuition for the Friedman model, which all cosmo is based on. Everybody uses that model, and so far the data fits beautifully, and they will continue until they find a discrepancy---then they will say Hurrah! Something new, we get to modify the model. That's how it goes. You refine observation and push the existing equations until something doesn't fit, and then it's exciting because you get to change the equations. Well, the Friedman model has an absolute time. And it has an absolute present moment and an absolute space (the slice of spacetime corresponding to Now). In the Friedman model the rate of expansion is extremely variable. You should play around with the MORGAN calculator, which gives you the Hubble expansion parameter for any past time you want, or any redshift z. If you plug in z=1100, to look at the day when the CMB light got loose from the hot fog, the Morgan calculator will tell you what the Hubble rate was on that day. It was very different. In some situations it helps to use the scalefactor as a clock, but it doesn't go at a steady rate. There are problems with using some measure of expansion as a time parameter. It's done sometimes but it is not a simple straightforward strategy. I believe in Loop Cosmology they sometimes use the scalefactor as one possible substitute for time, but only right around the big bounce, in the fraction of a second before and after, because they are driven to that by necessity. Classical time is not readily available in those circumstances. So they run their computer models on somewhat unconventional types of time. I can't speak confidently about this---too technical. I think not. It's not that simple. If you are doing classical Friedman cosmology you already have absolute time (called "universe time" or "Friedman model time"). But cosmology is a simplification (assuming uniformity, homogeneity and isotropy, is a huge simplification). The full GR theory has no chance of an absolute time. And even with the classical 1922 Friedman model there is a breakdown right at the beginning of expansion. People don't understand what is happening there and there is a lot of research now about that---quantum cosmology. And there is no clear straightforward choice of absolute time right around where the classical theory blows up.
  11. The energy levels in a hydrogen atom are different from those in a helium atom. Those levels are different from the levels in any other element you pick, and from the levels in an ion (say a nitrogen with one extra electron or one too few), or the levels in a molecule. None of these are commensurate. There is no common unit that energy levels are multiples of. Even in the simplest system---a hydrogen atom---the levels are not integer multiples of a common unit. There is no common unit if you go across species, like from copper to iron, as I mentioned. In other words, in any simple ordinary sense energy is not quantized. What quantum mechanics does for you is discretize energy levels in specific situations (in ways which usually appear irregular but can sometimes be described by mathematics which is again specific to the particular atom or situation...) ==================== It sounded like you think that all energy comes in multiples of some fixed unit and that therefore (since particle rest mass is proportional to energy) you think particle rest masses should come in multiples of some fixed unit. You asked for someone to argue against that kind of energy quantization. So I argued, at what I took to be your request. If that is not what you had in mind, just ignore my post and don't bother to answer. ==================== In this kind of thread my personal habit is to reply only to short posts, if at all, short like the one here that I responded to. Not everybody reacts the same, however.
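To illustrate with the simplest case, here is a minimal sketch (Python; 13.6 eV is the standard Bohr/Rydberg figure) showing that even hydrogen's own level spacings share no common unit:

# Bohr formula for hydrogen energy levels: E_n = -13.6 eV / n^2
def E(n):
    return -13.6 / n**2

gaps = [E(n + 1) - E(n) for n in (1, 2, 3)]
print(gaps)   # about [10.2, 1.89, 0.66] eV
# The gaps are not integer multiples of any common unit (10.2 / 1.89 is about 5.4),
# so the levels are discretized, not spaced in multiples of a fixed quantum.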
  12. I think what he meant to say is the universe runs out of neg entropy. That is, it runs out of available energy---energy in some form that life and other processes can use. Would that way of putting it satisfy you? There is no mystery how that happens---we see energy being degraded all the time: e.g. the planet absorbs highgrade 5000 kelvin sunlight and radiates off an equal amount of lowgrade 300 kelvin waste heat. Everything of interest is accomplished by degrading that energy from 5000 down to 300 (approximately or whatever). Merged post follows: Consecutive posts merged White dwarf stars (finished fusing but not massive enough to collapse to a neutron star or hole) gradually cool. Yes, compaction generates heat energy. The core of the dead star does generate heat as long as it is becoming more compact. But eventually it cools down, becomes as compact as it is able to be, and that's it: cold inert material.
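As a rough worked example of that degradation (taking the 5000 kelvin and 300 kelvin figures above at face value), each joule Q the planet processes raises total entropy by about

\Delta S \approx \frac{Q}{300\ \mathrm{K}} - \frac{Q}{5000\ \mathrm{K}} \approx 3.1 \times 10^{-3}\ \mathrm{J/K}

so the energy is conserved, but its usefulness (the negative entropy) is steadily spent.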
  13. Here is a neat little calculator: http://hyperphysics.phy-astr.gsu.edu/hbase/geoopt/prism.html#c2 Put in 60 degrees for sigma, put in 1.3 for the index for ice, put in 1.0 for air (negligible slowy-down-ness), then click on the little delta in the "active formula" and it will tell you delta. It says delta should be 21 degrees, close enough to 22. If you can't make this calculator work, ask for help. I checked that it works.
  14. Those are helpful links. Wikipedia is not too bad on this either: http://en.wikipedia.org/wiki/22°_halo I guess lots of us have played with 60 degree (equilateral triangle) prisms made of glass. The angle such a thing bends a lightray depends on the transparent material (how much it slows down light passing thru it). If you have a 60 degree prism made of clear ICE, then it bends the lightray at least 22 degrees. Water ice is not as slowy-downy as plastic or glass, so a glass prism with the same geometry would bend a lightray more than 22 degrees---at least 37 degrees, I think. Chitranga, there is some neat simple physics involved here that, if you don't know it, you could keep asking questions about. Different people might answer. I didn't tell you how a prism bends a lightray. But if you already understand that, then no need to ask further.
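For anyone who wants to check those numbers, here is a minimal sketch (Python; the indices 1.31 for ice and 1.5 for glass are typical textbook values) of the minimum-deviation formula for a prism, which is what the hyperphysics calculator in the previous post evaluates:

import math

def min_deviation_deg(apex_deg, n):
    # minimum deviation: n = sin((A + delta)/2) / sin(A/2), solved for delta
    A = math.radians(apex_deg)
    return math.degrees(2 * math.asin(n * math.sin(A / 2)) - A)

print(min_deviation_deg(60, 1.31))   # ice: ~21.8 degrees, the 22-degree halo
print(min_deviation_deg(60, 1.5))    # glass: ~37.2 degrees, as estimated above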
  15. I mainly want to concur with what has been said. Gen Rel came out in 1915 and was initially tested by observing light-bending (Eddington 1919) and the planet Mercury (actually a post-diction but still a confirming test). On the basis of that, Friedman developed the expanding universe model and published in 1922. He didn't base this on redshift. If it had any empirical basis it was Eddington 1919 and the perihelion of Mercury, that is, observations within our solar system. It was one possible solution to Einstein's equation. Then later, in 1929, Hubble published the linear correlation of distance and distance-increase-rate. He had discovered that largescale distances increase at approximately the same percentage rate. This fits Friedman's model. Basically, scientific theories are not meant to be believed in; they are meant to be tested, by forcing them to yield testable predictions. Eventually the hope is, if you push it far enough, you can get a theory to predict something that turns out wrong, and that will reveal some new physics and you get a chance to improve the theory. General Relativity (which models dynamic geometry and gravity) has passed many tests with exquisite precision. It is confirmed by a lot of other empirical results besides redshifts! The redshift data keeps coming in and providing additional support, but that is only part of the story. That doesn't mean that astronomers believe GR is true. It is widely recognized to need improvement (that is what all the research in quantum gravity and quantum cosmology is about). But it is the best model we have so far of how geometry evolves and how gravity works. It's also simple, which is a serious consideration. A theory which agreed with GR on all past observations but predicted that tomorrow the moon would crash into the earth with the speed of light---a kind of "MegaChicken Little" theory---would have to be extremely complicated. Likewise one could presumably construct a mathematical model of geometry and gravity, a kind of Megachicken "The Universe is Falling!" theory, which would duplicate the success of the Friedman model up until now and would predict that tomorrow all the observed redshifts would change sign and become blueshifts. That is, aliens able to violate what we thought were the laws of conservation of energy and momentum (and probably every other presumed law) would have made a sudden concerted effort and turned things around! Intervening in an orchestrated way such that we would only get the news of it tomorrow, and not a hint would arrive earlier. This is possible, and were it to happen it would doubtless be welcomed by theoretical physicists and cosmologists because it would represent new physics. All the old laws would have to be revised! So as to be consistent with what had just been observed. In that case there would still be billions of years in which to figure out what was happening. (Unlike the case with the Moon falling on us.) Merged post follows: Consecutive posts merged If the Moon suddenly changed course and headed towards us at the speed of light, we would also have no prior warning. If this has in fact occurred, then those at the point of impact now have around 1.2 seconds of blissful ignorance remaining to them.
  16. It might sometime interest folks to look into the beginning of the expansion cosmo model. Here's a facsimile of page 1 of Friedman's 1922 paper: http://www.springerlink.com/content/l23864w241673530/fulltext.pdf?page=1 Here is a scan of the entire 10-page paper http://edocs.ub.uni-frankfurt.de/volltexte/2008/9863/pdf/E001554876.pdf I see he makes a rough estimate of the age of the expansion---he says 10 billion years---and gets it roughly right! Friedman's work was not based on redshifts. An American astronomer, Vesto Slipher, had previously measured some galaxy redshifts but had not discovered their correlation with distance. Later on, Hubble was able to estimate distances and thus to correlate redshift with distance and find the linear proportion connecting them (Hubble's Law, 1929). It is interesting that the expanding universe idea came before the discovery of Hubble's Law. There is no indication that Friedman knew of Slipher's redshift measurements: he arrived at the expansion model on theoretical grounds, based on General Relativity (1915). An English translation of Friedman's 1922 paper is here, but it is pay-per-view http://www.springerlink.com/content/427ex54m3v50/?sortorder=asc&p_o=10 Willem de Sitter came up with an expanding universe model in 1917. Here is a link to one of two papers he wrote that year: http://www.digitallibrary.nl/proceedings/search/detail.cfm?startrow=1&view=image&pubid=2024&search=&var_pdf=&var_pages= Fortunately the paper is in English. The last page is missing in this free digital library copy. De Sitter seems to have had a more abstract theoretical interest. I don't see him treating his model as a possible fit to reality. It is of interest as a possible solution to the equation of General Relativity, and an alternative to Einstein's static model. Friedman, on the other hand, plugs some real numbers in and makes a bold attempt to get a rough fit. Neither of them seems aware of redshift information.
  17. Rob, I appreciate your asking me, but that question belongs in the Relativity forum http://www.scienceforums.net/forum/forumdisplay.php?f=20 Please open a thread there and ask that question where it can be answered without getting this thread off topic.
  18. Rob, I appreciate your asking me, but that question belongs in the Relativity forum http://www.scienceforums.net/forum/forumdisplay.php?f=20 Please open a thread there and ask that question where it can be answered without getting this thread off topic. mod note: Use this thread (i.e. that's a link)
  19. Hello again McGarr, I was wondering where you went. Glad to see you back. Your thread has a lot of other content besides just time. You could fairly object to hijacking if we got into a discussion focused on just time in your thread. Hi Moo and Swansont. Maybe I will start a fresh thread about time. It shouldn't be primarily based on the FQXi essay---as Swansont points out, we already had a Yuri thread about that. Maybe we can take a different starting point, like chapter 1 of a new book that just came out, which talks about the successive weakening of the idea of time that started in 1905 and has continued in several steps. EDIT: McGarr, I went and started a thread about the progressive weakening of the time concept. I think I understand where you are coming from---or might be coming from---when you say "sleight of hand" and "sophistry". Heh heh. Basically McGarr, I welcome and enjoy your presence because you actually read stuff, struggle through and get as much as you can (and skip the rest, just like I do, I guess). And your blog shows that you are highly articulate, and seriously interested in science from a kind of original point of view. So it doesn't bother me if you decide that Rovelli's attempts to understand time amount to "sophistry". I don't feel I need to argue or persuade in this regard; everybody has to see by their own light. However, in case you or anyone wants to ask or comment, here's the link: http://www.scienceforums.net/forum/showthread.php?p=484180#post484180 Merged post follows: Consecutive posts merged McGarr, getting back towards your main topic (if I understand your drift), you were drawing inspiration from a pop-sci journalism piece in the Telegraph about some 2007 and 2008 speculation by Hawking (with Hertog and Hartle). The corresponding scientific papers are available free on arxiv, in case you want to glance at the source. (As a rule, better not to rely entirely on a journalistic paraphrase.) The general topic is "quantum cosmology", so here is a list of recent keyword "qc" papers ranked by citation count to give a glimpse of the research context. http://www.slac.stanford.edu/spires/find/hep/www?rawcmd=FIND+DK+QUANTUM+COSMOLOGY+AND+DATE+%3E+2006&FORMAT=www&SEQUENCE=citecount%28d%29 The two papers by H, H, & H are numbers 39 and 43 on the list. One is a brief 4-page thing and one is a longer, more detailed 43-page account. The Hawking approach to quantum cosmology attracted interest in the 1980s and 1990s and has now to a large extent been superseded by newer ones as the field evolves. He used to refer to it as a "path integral" or "sum over histories" approach. In that type, the geometry of the universe does not develop according to a definite trajectory but follows a weighted sum of all possible trajectories that get from state then to state now. The current lead versions of quantum geometry/gravity are of the same (path integral/sum over histories) general type but differ significantly in detail. Younger people now. Hawking has retired and his recent papers are not much cited.
  20. Cameron, I'm impressed. Charon has done a brilliant decoding job for you. In old inscriptions they sometimes did not use spaces. Sometimestheyranthe wordstogeth erlikethis. And they might break arbitrarily when they came to the end of the stone and needed to start a new line. The emperor was sometimes made a god after he died, and the wife or empress could likewise be made a goddess or "Diva". So if her name was Faustina, after she died she was referred to as Diva Faustina (the divine Faustina). Your inscription looks very much like DIVA FAUSTINA FIA. Here is the wiki about her: http://en.wikipedia.org/wiki/Faustina_the_Elder It says that her husband, the Emperor, deified her (made her a Diva) after she died. Her husband was Antoninus Pius, the emperor who came after Hadrian.
  21. There's a new book out from Cambridge University Press with the subtitle "Towards a New Understanding of Space Time and Matter", and Chapter 1 is this 8-page non-mathematical overview piece by Rovelli which appeared earlier on the arxiv. http://arxiv.org/abs/gr-qc/0604045 Here's a sample excerpt that deals with time. He is talking about the gradual, progressive weakening of the concept of time: from classical mechanics, to special rel, to general rel, and finally to quantum GR... I've highlighted to make the structure of the progression more obvious. ==quote== Before special relativity, one assumed that there is a universal physical variable t, measured by clocks, such that all physical phenomena can be described in terms of evolution equations in the independent variable t. In special relativity, this notion of time is weakened. Clocks do not measure a universal time variable, but only the proper time elapsed along inertial trajectories. If we fix a Lorentz frame, nevertheless, we can still describe all physical phenomena in terms of evolution equations in the independent variable x^0, even though this description hides the covariance of the system. In general relativity, when we describe the dynamics of the gravitational field (not to be confused with the dynamics of matter in a given gravitational field), there is no external time variable that can play the role of observable independent evolution variable. The field equations are written in terms of an evolution parameter, which is the time coordinate x^0, but this coordinate does not correspond to anything directly observable. The proper time τ along spacetime trajectories cannot be used as an independent variable either, as τ is a complicated non-local function of the gravitational field itself. Therefore, properly speaking, GR does not admit a description as a system evolving in terms of an observable time variable. This does not mean that GR lacks predictivity. Simply put, what GR predicts are relations between (partial) observables, which in general cannot be represented as the evolution of dependent variables on a preferred independent time variable. This weakening of the notion of time in classical GR is rarely emphasized: After all, in classical GR we may disregard the full dynamical structure of the theory and consider only individual solutions of its equations of motion. A single solution of the GR equations of motion determines “a spacetime”, where a notion of proper time is associated to each timelike worldline. But in the quantum context a single solution of the dynamical equation is like a single “trajectory” of a quantum particle: in quantum theory there are no physical individual trajectories: there are only transition probabilities between observable eigenvalues. Therefore in quantum gravity it is likely to be impossible to describe the world in terms of a spacetime, in the same sense in which the motion of a quantum electron cannot be described in terms of a single trajectory. ==endquote== This is kind of serious. For almost 100 years Gen Rel has been the prevailing theory of how gravity works and has provided our basic idea of space, time and geometry. It explains how geometry arises and why Euclidean geometry emerges as a low-density solution. Yet Gen Rel does not admit an idea of time. Once you have a solution, a fixed geometry, you can run observers in that context, and they can carry clocks and have an individual idea of time. But the system as a whole has no overall time according to which it evolves.
So GR has been predominant for nearly 100 years, and maybe we still haven't realized that it doesn't have the kind of time feature we normally expect. Chapter 2 of the new book is by Nobel laureate Gerard 't Hooft. It is on a similar theme: he talks about what he thinks it will take to get at the fundamental nature of space and time. In case anyone's interested, here is the chapter that 't Hooft contributed: http://www.phys.uu.nl/~thooft/gthpub/QuantumGrav_06.pdf There is a kind of consensus among the contributors to the book that space and time are macro-scale perceptions which emerge from something more fundamental, a different model of reality down at the microscopic level. This has been brewing in physics for some years already; many of the chapters in this collection were posted as preprints back in 2006 or 2007. And there are contributors from many separate research lines (string theorists, inflation cosmologists, phenomenologists, non-commutative geometers, people from loop quantum gravity, several other non-string QG fields, particle physics, and so on...)---about 20 authors in all. So here's one part isolated from Chapter 1. It's about the historical weakening of the idea of time (in foundations physics, not in applied physics as far as I know). I put it out as a thread in case anyone wants to discuss it. The book is expensive, aimed at the academic library market. Physics department librarians will order it on faculty request---that kind of thing. In case anyone is curious: http://www.cambridge.org/uk/catalogue/catalogue.asp?isbn=9780521860451
  22. I can't give you much in the way of a satisfactory response. I think this is a recurrent-rip scenario rather than a theory, because it doesn't seem to make any predictions. (Or did I miss something?) Perhaps the best way to refute this scenario would be to measure the dark energy equation of state more and more accurately, so that it seems less and less likely that it is less than -1. In fact this has been happening. As you say, it is very unlikely (already now, using current data) to be less than -1. It seems to me that since we don't yet have a clue as to what dark energy is, or even what the real cause of acceleration is (it might not be a dark energy field at all; it might be something we haven't thought of yet), we can't really follow through with your scenario and check consistency with known particle physics. You would probably like someone to help by considering whether this recurrent rip re-birth scenario is consistent with the standard model of particle physics, but my thought is that we are still most likely 10 or more years away from understanding DE and putting it together with an expanded particle physics. BTW Roger Penrose has a recurrent re-birth universe scenario where something like this happens (but without ripping). I just posted some links to it here http://www.scienceforums.net/forum/showthread.php?p=484140#post484140 You may know Penrose's talk already. Like yours, his scenario has complete particle decay and complete black hole evaporation and huge expansion eventually leading to conditions favorable to a big bang. He rationalizes this. To me it still is not clear how he gets there. Entertaining and informative talk, though. Provocative. You might enjoy it especially because of the points of similarity.
  23. You might enjoy Roger Penrose's talk about what came before the big bang and what to expect to happen far into the future. He gave the talk first in 2005 in Cambridge, and there are online slides and audio. Later he gave it in other places (UC Berkeley, Perimeter Institute...) and an October 2007 version is also on YouTube. I would recommend following the Cambridge (slides and audio only) version first, but you can take your choice. November 2005 Cambridge version: http://www.newton.ac.uk/webseminars/pg+ws/2005/gmr/gmrw04/1107/penrose/ October 2007 George Mason University version. The first segments are his talk and the last ones are him responding to questions from the audience. http://www.youtube.com/watch?v=ghbDGBOYp1g http://www.youtube.com/watch?v=0upkexD9Tf8&feature=related September 2006 Perimeter (Canada) version http://pirsa.org/06090005/ Before the Big Bang: an Outrageous Solution to a Profound Cosmological Puzzle Personally I think Penrose is wrong to trust the 2nd Law to hold thru an LQG bounce---when spacetime geometry and matter collapse and then rebound due to quantum gravity effects. I think that an observer is necessary in order for the Law to hold. During the bounce, space as we normally experience it ceases to exist. Classical space and matter re-emerge shortly after expansion starts. But there is a break during which the physical machinery that one assumes in order to verify the 2nd Law simply does not exist. So I think he is wrong. But that doesn't matter. It is a great talk! Much is presented with exceptional clarity.
  24. Several ways to infer mass; some have nothing to do with luminosity. One of the most basic is to use the Kepler law on a binary pair of stars. I assume you know the measure of separation called the semimajor axis. (separation in AU)^3 / (period in years)^2 = mass in solar masses. There are half a dozen different ways to determine the distance to a star or to a group of stars, and there are several ways to determine the masses of stars. The point is that they are consistent. Astronomers have been meticulously calibrating and checking their measures of basic quantities for something like 100 years. New ways of measuring appear in the literature from time to time.
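A minimal sketch (Python; the Alpha Centauri figures are rounded values used purely as an illustration) of that mass formula:

def binary_total_mass(a_au, period_yr):
    # Kepler's third law in solar units: M1 + M2 = a^3 / P^2
    return a_au**3 / period_yr**2

# e.g. Alpha Centauri AB: semimajor axis ~23.5 AU, period ~80 years
print(binary_total_mass(23.5, 80.0))   # ~2 solar masses for the pair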
  25. Airbrush, these are good questions! I didn't see your post until just now (was busy with other things most of the day) and I see Klaynos has already given a clear explanation. So I'll just add a comment. You do find concentrations of DM in and around clusters of galaxies. Your basic intuition is right. DM comes in large clouds which tend to coincide with concentrations of ordinary matter---but those don't have to be individual galaxies; they could also be clusters of galaxies and suchlike large-scale structures. And there is a chicken-egg problem. Which came first? You might guess that the baryonic concentrations came first and gathered DM in around them. What I've been hearing about, though, are models where the DM (by the action of its own gravity) slowly curdles into large blobs and filamentary structures, and the DM concentrations help the ordinary matter to cluster and clump etc. George Smoot gave a neat slide presentation about this to the TED club, in which he played computer-animation movies of simulations of structure formation in which the DM was actually playing the driving role, serving as a kind of armature or framework to bring the baryonic structures into being. Somebody at Chicago did the computer simulations. BTW expansion actually slows things down, drains kinetic energy (not just photon energy by redshift, but massive particle energy). That sounds bizarre, but Steven Weinberg has a proof in his Cosmology textbook and other experts seem to accept this. It seems to violate conservation. But it makes it possible for DM to gather in clouds. Analogous to friction drag, but not friction. Weinberg is a Nobel laureate---I'm not going to argue with him :-D It is factored into the sims. Somehow by hook or crook, a self-gravitating DM cloud can eject excess energy and thereby slowly collect and contract. Smoot is another Laureate. Here's his TED presentation. Only 19 minutes; check it out. The structure formation sims are of pure DM.
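The standard result behind that claim (worked out in Weinberg's Cosmology; sketched here from memory) is that a free particle's peculiar momentum falls off inversely with the scale factor,

p(t) \propto \frac{1}{a(t)}, \qquad E_{\mathrm{kin}} = \frac{p^2}{2m} \propto \frac{1}{a(t)^2}

so expansion damps the random motions of massive particles---the friction-like effect mentioned above.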