Everything posted by Markus Hanke
-
The source of dark energy is gravitational self-energy!
The self-coupling of the field is encoded in the non-linear structure of the field equations themselves - thus, since the Friedmann equations are obtained from the field equations, the self-coupling of gravity is already accounted for in their structure. You cannot separate this out in any simple way. As mentioned above, the correct Friedmann equation follows from the Einstein equation, given a suitable energy-momentum tensor; hence the correct one is the standard one which you find (with derivation) in any GR textbook.

My advice would be to consider carefully why it is that the self-interaction of the field does not explicitly appear as a source term in the field equations; you’ll find only ordinary energy-momentum there. That’s because the self-interaction energy cannot be localised, or written down in a way that all observers agree upon. Attempting to include it explicitly in a particular solution (as an analytic term) thus cannot work, because this is inconsistent with the tensorial character of the metric. It is instead encoded in the non-linear structure of the equations themselves, so it influences the relationships between the different components of the metric, rather than appearing directly within those components.

Conversely, because the cosmological constant appears as a local term in the Einstein equation, it cannot be interpreted as gravitational self-energy (though it certainly contributes to that energy). It is best to look at it as a kind of background curvature that is there independently of ordinary energy-momentum. For certain spacetimes such as FLRW, it thus ends up having an effect on how the metric components depend on the time coordinate (‘rate of expansion’). This is all rather subtle, and easily misinterpreted. Perhaps do some research on the Landau-Lifshitz pseudotensor - though unfortunately this involves some not so basic maths.
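For reference, the standard Friedmann equations - which follow from the Einstein equation with a perfect-fluid energy-momentum tensor, and can be found with full derivation in any GR textbook - read:

```latex
\left(\frac{\dot a}{a}\right)^{2}=\frac{8\pi G}{3}\rho-\frac{kc^{2}}{a^{2}}+\frac{\Lambda c^{2}}{3}\,,
\qquad
\frac{\ddot a}{a}=-\frac{4\pi G}{3}\left(\rho+\frac{3p}{c^{2}}\right)+\frac{\Lambda c^{2}}{3}
```

Note that no explicit self-interaction term appears anywhere here - the self-coupling is implicit in the non-linear way the scale factor a(t) enters these relations.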
-
The source of dark energy is gravitational self-energy!
I haven’t read Ohanian yet, but of course he’s right - the self-coupling has a gravitational effect, and that effect is qualitatively the same as that of ordinary energy-momentum. This is what I said. It can’t be any different, as otherwise you couldn’t have any stable vacuum solutions.

I still don’t get you, I’m afraid. First of all, gravitational potential energy can only be meaningfully defined for specific spacetimes with very specific symmetries - it is not a generally applicable concept in GR. For example, it can be defined for Schwarzschild, but not for the FLRW metric in cosmology. Secondly, even in cases where gravitational potential energy can be defined, it is not the same as the non-linear self-coupling of the gravitational field in GR. These are very different concepts. Because you are mixing it all together in your posts, it is really difficult to give a meaningful reply over and above what I have said already.

You are right of course in that accelerated expansion requires a repulsive effect of some sort - but that can’t come from self-coupling. The cosmological constant acts as a sort of ‘background curvature’ that modifies the vacuum equation...we don’t yet know what it is, physically, but we do have a pretty good idea about what it can’t be.
-
The source of dark energy is gravitational self-energy!
I’m having trouble figuring out what it actually is you are trying to say, since parts of your posts contradict each other (one example above). I’ll just offer a few words from a GR perspective, since the issue of gravitational self-interaction is subtle and often misunderstood.

The starting point is the law of energy-momentum conservation. This is well understood in Newtonian physics, and easily translatable to SR, so long as spacetime is flat. However, if we try to find such a law for curved spacetime, we run into trouble - replacing ordinary with covariant derivatives in the conservation law leaves us with extra curvature-related terms that do not, in general, vanish. Worse still, these terms aren’t themselves covariant, so they depend on the observer. Not good.

One way to try and recover a meaningful conservation law is by taking into account not just the energy-momentum of matter and radiation, but also the energy inherent in their gravitational interactions, as well as that of gravity’s own self-coupling. However, there’s a problem - gravitational self-energy cannot be localised. The mathematical consequence of this is that there is no covariant (observer-independent) object that captures this quantity. The best we can do is use what’s called a pseudotensor, which isn’t quite the same as a full tensor, and thus not usually a ‘permissible’ object in GR. Even then, there is no unique choice of object - the one most commonly used is called the Landau-Lifshitz pseudotensor.

So what we do is form a certain combination of the pseudotensor (representing the energy in the gravitational field) with the normal energy-momentum tensor (representing energy and momentum in matter and radiation) - and notice to our pleasant surprise that the divergence of the resulting object is covariant, and vanishes. So we aren’t looking at the field itself, but rather at the density of sources in a combination of gravity and matter/radiation. What does this mean?
Suppose you have a small 4-volume that contains matter/radiation, as well as its gravitational field (in the form of spacetime curvature); the above means that the overall source density (divergence) of the combination of energy-momentum in matter/radiation and in the gravitational field within that volume comes out as zero, so that there is no net flow of energy-momentum through the boundary of the volume. Note the key phrases - we are talking about the overall density of sources of a combination of the two contributions, resulting in no net flow through the boundary (Stokes’ theorem). The combination itself is a sum of tensors, one of which is a complicated function of the metric; nowhere are we implying that there are any exotic sources involved - we are just saying that one must account for both matter and gravity in order to write down a conservation law. We also aren’t saying at this point that ‘total energy is zero’ - only that the combination of the two is conserved in a certain precise sense. This is a subtle and somewhat counterintuitive matter, and easily misunderstood.

That’s what is meant when we say that ‘the energy of the gravitational field is negative’. It’s essentially an accounting device that leads to a covariant conservation law in curved spacetime, and not an ontological statement about its nature. It’s a bit like debits and credits in accounting, which make the balance sheet balance; the nature of the money in each individual entry is of course not affected. The same with gravity - we are balancing the books, but the gravitational effect of the field’s non-linear self-coupling remains attractive, as of course it must. You can see this in the fact that we get stable vacuum solutions to the field equations - scenarios where there is only gravitational self-energy, and no ordinary sources (T=0 everywhere).
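Schematically, the conservation law described above can be written as (with t^{μν} the Landau-Lifshitz pseudotensor and g the determinant of the metric):

```latex
\partial_{\mu}\Big[(-g)\big(T^{\mu\nu}+t^{\mu\nu}_{LL}\big)\Big]=0
```

Because this is an ordinary (rather than covariant) divergence of the combination, it can be integrated via Stokes’ theorem to give the no-net-flow statement above - something that is not possible for either contribution on its own.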
Geodesics still converge in these spacetimes, and there’s no inflation or expansion, unless you permit a non-zero cosmological constant (the above reasoning about conservation still holds even then). A trivial example is the ordinary Schwarzschild metric; a less trivial but far more striking example is the gravitational geon. Plus all other vacuum solutions without ordinary sources. These solutions wouldn’t exist if the self-coupling had repulsive gravitational effects.
-
What would happen to space if passage of time was accelerating? Equality principle. Similarity of empty space. A Shrinking matter theory that might actually work.
I’m sorry, but nothing I see here (or on the other thread) is even remotely convincing enough to justify spending any more time on this. You’re both really just guessing - there are a lot of ‘could’, ‘should’, and ‘might’, but no real substance I can see. Feel free to tag me should you ever come up with an actual working model, and I’ll be happy to look at it - for now, though, I’ll leave you to it.
-
What would happen to space if passage of time was accelerating? Equality principle. Similarity of empty space. A Shrinking matter theory that might actually work.
I disagree, they are not at all inverses of one another, because they rely on completely different mechanisms and different physical principles. Expanding space is a consequence of GR, but shrinking matter is not. There is no model within known physics that predicts or facilitates anything even remotely like shrinking matter. On the contrary, there is direct evidence that at least some of the fundamental dimensionless constants have not changed in any way over the past few billion years. Without such changes, relative to its own state in the past, you’ll find it hard to get matter to shrink while maintaining all physics.

Also, saying that these are observationally identical (irrespective of mechanisms) is a claim that requires proof. It is meaningless to keep claiming this verbally - you need to show that it is in fact true.

Right - pretty much every single claim about the inner workings of the shrinking matter concept is far removed from known physics. Even if you could get it to work somehow, it would require you to postulate a large number of new mechanisms and principles. So coming back to the central claim of this concept - can you (or anyone else) actually show us mathematically how this matter shrinkage occurs, exactly? I’m happy to start with the simplest non-relativistic case, ie Schrödinger’s equation and its solutions, the wave function for a hydrogen atom. If we cannot nail this down, all further claims based on it remain entirely moot.

The wave function of the hydrogen atom (to stay with the above example) manifestly does not transform in this way. Neither do any of the quantum fields in the Standard Model. And if they did, it wouldn’t be a shrinkage of the atom. That’s a lot of well established physics to abandon. Far too high a price to pay, so far as I am concerned, especially since standard cosmology requires no such unphysical assumptions. So it’s pretty obvious now that this idea does not work within the framework of known physics.
There is literally not a single example of a real-world field, classical or quantum, that I can think of right now which transforms like this. What does this even mean? Is X’ = X(r/L, t/L)? If so, for a simple inverse-square distribution X = a/r^2 (a = const), you’d get X’/X = L^2, and thus according to the above X -> L^2 X(r/L) = L^4 X(r). How is this meaningful?
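For anyone wanting to check the arithmetic, here is a minimal numerical sketch of the inverse-square example above (the profile X and the constant values are purely illustrative):

```python
# Illustrative check of the scaling discussed above: for X(r) = a/r^2,
# substituting r -> r/L multiplies the value by L^2, so an additional
# explicit factor of L^2 yields L^4 times the original value.
a, L, r = 3.0, 2.0, 5.0

def X(r):
    return a / r**2

assert abs(X(r / L) - L**2 * X(r)) < 1e-12          # X(r/L) = L^2 X(r)
assert abs(L**2 * X(r / L) - L**4 * X(r)) < 1e-12   # L^2 X(r/L) = L^4 X(r)
print("scaling relations confirmed for this toy profile")
```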
-
GR and cosmology (split from …A Shrinking matter theory that might actually work.)
So what mechanism stops space from expanding, and keeps it exactly static (which is not an equilibrium state)?

Like I said in my last post, you first of all need to show that this is in fact possible within the framework of known physics - until then, all further speculations based on this are moot. A more realistic model must:
1. Accord with already known physics
2. Reproduce the same observational predictions as the old one
3. Make new predictions that the old one couldn’t

It’s for you to show that this is in fact true. Again, it’s down to you to show that this is in fact true.
-
GR and cosmology (split from …A Shrinking matter theory that might actually work.)
It’s just that metric expansion is cumulative - the more space you need to traverse, the more expansion you get.

What do you mean by this? ‘Expansion’ is actually a bit of a misnomer (originating in differential geometry) - all it really means is that measurements of distance depend on when they are taken; there’s not really any substance somehow expanding like dough in an oven. See my comment over on the other thread.

We could go back and forth on this until the cows come home, but ultimately the only way to be sure whether this idea actually works or not is to demonstrate it mathematically. Can you scale down the wave function of a real-world atom such as hydrogen without violating or changing any physics, such that the exact observations of cosmology are reproduced? Can you then extend the same procedure to all other elements (unfortunately this could only be demonstrated numerically)? Can you scale down the Standard Model so that all particles and interactions remain what they are? I maintain this isn’t possible, not even remotely, for all and any of the reasons already mentioned (and I say this because I’ve been learning the maths of all this stuff for some time). But if someone can put forward a formalism, I’ll be more than happy to look at it - if only just for intellectual curiosity. But you know me by now, I’m by and large a mainstream guy, so it would require some pretty extraordinary and persuasive mathematical arguments for me to even begin taking this seriously. And that’s not an unreasonable stance either.
-
What would happen to space if passage of time was accelerating? Equality principle. Similarity of empty space. A Shrinking matter theory that might actually work.
You see, my issue is that you have no way to actually know this. What do you base this assumption on? What do you base any of the assumptions mentioned on this thread on? Unfortunately to date no one has been able to actually put forward a working model (ie a mathematical formalism) for shrinking matter, so there isn’t any way to extract predictions of any kind from the concept. Everything that has been proposed here is speculation and guesswork. There’s nothing intrinsically wrong with that (we are in ‘Speculations’ after all), but it does make it difficult to discuss the concept in any meaningful way. One could start on a basic level, and eg look at the wave function for a hydrogen atom, which can be written down analytically (see any QM text of your choice). Can someone demonstrate how to scale this mathematically in a way that supports the assumptions given here, without violating any other physics? This would be a good first step.
-
What would happen to space if passage of time was accelerating? Equality principle. Similarity of empty space. A Shrinking matter theory that might actually work.
So energy is not locally conserved? How does this fit in with Noether’s theorem? And how does simply transforming the Lagrangian like this yield a spatial shrinkage of the system?

Remember also that, in order to get equations of motion for your system, you insert this Lagrangian into the Euler-Lagrange equations (or make the action stationary via variational calculus). But these equations involve derivatives of the Lagrangian with respect to the generalised coordinates and their time derivatives. Hence, if you transform the entire Lagrangian as you suggest, the solutions of the Euler-Lagrange equations will not be the same - which means different physics. So which is it? You keep L the same, but then get no shrinkage; or you transform the Lagrangian, but then get different physics.
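To spell the point out: for a single generalised coordinate q(t), the equations of motion follow from the Euler-Lagrange equation

```latex
\frac{d}{dt}\left(\frac{\partial L}{\partial\dot q}\right)-\frac{\partial L}{\partial q}=0
```

Since this equation is built entirely from derivatives of L itself, any transformation of the Lagrangian that is not a symmetry of the action will in general change its solutions - ie the dynamics.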
-
GR and cosmology (split from …A Shrinking matter theory that might actually work.)
This will eventually be true in the distant future, assuming an accelerating rate of expansion. Right now, even for free space the expansion only becomes apparent on scales of ~Mpc, so it isn’t detectable within galaxies (I assume you mean empty space between stars).

What mechanism keeps metric expansion at exactly zero?

This would imply that redshift is the same for all distant objects, since it would depend only on our local rate of shrinkage. But this is not what we see at all.
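What we do see is a redshift that increases with distance (the Hubble law). A minimal numerical sketch of the low-redshift relationship, using an approximate present-day value for the Hubble constant:

```python
# Low-redshift Hubble law: v = H0 * d, z ~ v/c.
# The H0 value is approximate and for illustration only.
C_KM_S = 299792.458  # speed of light, km/s
H0 = 70.0            # Hubble constant, km/s per Mpc (approximate)

def redshift_low_z(distance_mpc):
    """Approximate redshift of an object at the given distance (Mpc)."""
    return H0 * distance_mpc / C_KM_S

for d in (100, 500, 1000):
    print(f"{d:>5} Mpc -> z ~ {redshift_low_z(d):.4f}")
```

The redshift roughly doubles when the distance doubles - a distance dependence that a purely local rate of shrinkage cannot reproduce.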
-
What would happen to space if passage of time was accelerating? Equality principle. Similarity of empty space. A Shrinking matter theory that might actually work.
“Rate of time” is a meaningless concept. Locally, time always ticks at 1 second per second, and lengths always measure 1 meter per meter.

You can compress the physical length of a platinum bar by packing its atoms more tightly, which in itself is not the same as a rescaling, because the atoms themselves do not change size. There is, however, a limit to this - once the compression force becomes strong enough to begin affecting atomic structure, the platinum bar will eventually cease to be platinum. This is what happens at the formation of a neutron star - ordinary matter becomes degenerate because the relevant limits get exceeded, leaving mostly neutrons (and, at extreme densities, possibly a quark-gluon plasma).

That’s precisely my point - if you increase energy levels (which is what happens when shrinking atomic structure), you end up with new states of matter that are different from the original state. This is because QFT doesn’t scale - it couples explicitly to a well defined energy scale. You can’t shrink eg an ordinary star and expect it to remain an ordinary star. It simply doesn’t happen.

You would obtain the same result - nothing changes for a local observer, so far as length measurements are concerned. Except of course that the bar (and yourself) will be flattened into a thin sheet by the gravity of the neutron star, so you’d know you aren’t in an ordinary environment.

I commend you for also considering possible problems of your idea - we don’t see this often here. Kudos 👍

By what mechanisms do these change? What determines the rates of change? What mechanism ensures that all changes are fine-tuned exactly such that everything remains consistent?

Using natural units is standard in all of modern physics, since it simplifies the calculations. This has no physical significance, it just saves you from writing out all the constants all the time. And yes, the Lagrangian has units of energy.
Well, you will have to show this mathematically, while taking into account all already known physics. At the moment you are proposing a large number of new physical mechanisms, while assuming that these will produce precisely the results you expect. It will be up to you to show this mathematically; there’s too much in your post to actually address it all. So can you provide a mathematical formalism that shows the mechanisms for shrinkage in the framework of the Standard Model (in a way that preserves the known laws of QFT), and show that this reproduces all available cosmological observations (not just redshift)?

Even for simple redshift I don’t really understand your thoughts here - if redshift were down to your local rate of shrinkage, then wherever we look, all distant objects should exhibit nearly the same redshift, or at the very least there should be no correlation with distance. Clearly this is not so, and we know that there is a direct relationship between redshift and the distance of the observed object.

It’s like a rabbit hole - the more you look at this, the more assumptions you need in order to make it seem even remotely plausible. I really fail to see the point in all this, as it offers no advantages whatsoever compared to standard physics. And it’s not like shrinking matter is a new idea - it’s been around for as long as I can remember, and pops up regularly on forums.
-
General Relativity: Four Exterior Metric Solutions...
What is the question, exactly?
-
Why is a fine-tuned universe a problem?
I agree that the question as to whether they could have been different cannot be scientifically tested, at least not based on current knowledge. However, asking if at least some constants were different in the past is something that can be done - for example using natural fission reactors. Of course there’s some conceptual overlap between the above.
-
What would happen to space if passage of time was accelerating? Equality principle. Similarity of empty space. A Shrinking matter theory that might actually work.
No. As I have already pointed out earlier, while some laws of physics may be scalable in that way, most are not scale invariant - most notably, the laws of quantum physics don’t behave well under rescalings. You cannot ‘shrink’ atomic structures and composite particles and expect the physics to remain the same. For one thing, none of the fundamental interactions can be scaled, irrespective of how you fudge the fundamental constants; the whole concept of shrinking matter is pretty much dead right there. Even if that weren’t so, the wave equations that govern atomic structure do not scale either - and neither do their solutions.

And again, even if such rescalings were possible somehow, you’d come up against other issues. For example, if you shrink an atom while keeping its orbitals intact, the positions of its electrons become more and more localised over time - which of course increases the uncertainty in their momenta. Eventually that uncertainty becomes large enough that electrons can jump orbitals (and fall back), leading to molecules becoming unstable, and ordinary matter emitting a continuous ‘glow’. Still further in the future, all atoms would become ionised; and still further, the hadrons within the nucleus would ‘dissolve’ into a quark-gluon plasma. Needless to say, we observe none of those things.

Lastly, we actually have ways to check whether at least some of the fundamental constants might have had different values in the past (~2 billion years) - for example using natural fission reactors, such as at Oklo. The available data indicates that this was not the case.

So no matter how you look at this, it simply doesn’t work. Even if it did, the model would generate many more problems and explanatory gaps than it solves.
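The localisation argument can be made quantitative with the uncertainty principle alone. A rough sketch (the constants are standard CODATA values; the ‘shrink by half’ step is the hypothetical part):

```python
# Heisenberg: dp >= hbar/(2*dx). Confining an electron to a smaller region
# raises the floor on its kinetic energy as 1/dx^2.
hbar = 1.054571817e-34    # reduced Planck constant, J*s
m_e  = 9.1093837015e-31   # electron mass, kg
a0   = 5.29177210903e-11  # Bohr radius, m
eV   = 1.602176634e-19    # J per eV

def min_kinetic_energy_eV(dx):
    dp = hbar / (2 * dx)           # minimum momentum uncertainty
    return dp**2 / (2 * m_e) / eV  # corresponding kinetic-energy scale

E1 = min_kinetic_energy_eV(a0)      # electron confined to ~1 Bohr radius
E2 = min_kinetic_energy_eV(a0 / 2)  # hypothetically 'shrunk' by half
print(f"confined to a0: ~{E1:.1f} eV; confined to a0/2: ~{E2:.1f} eV")
```

Halving the confinement scale quadruples the kinetic-energy floor, pushing it from the ~eV scale of atomic binding towards ionisation energies - which is exactly the kind of instability described above.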
-
GR and cosmology (split from …A Shrinking matter theory that might actually work.)
That is a rescaling.

I don’t understand this question...can you explain?

Well, that’s exactly what actually happens to light from distant sources...it’s in free fall, after all.

Suppose you have a system described by a hypothetical Lagrangian of the form \[L=\frac{a}{r^2} - \frac{b}{r}\] wherein a and b are constants. What happens to the Lagrangian when distances shrink by half, ie you perform a rescaling r’ = r/2? This is simply to demonstrate the principle; obviously real-world Lagrangians don’t look like this.

The wavelengths aren’t proportional to the size of the atom, they are determined by the structure of the quantum mechanical orbitals - which are, again, not scale invariant, since the potential term in the Schrödinger equation isn’t scale invariant (never mind the QFTs underlying all this).

You didn’t address my previous objection - redshift increases as the observed object gets farther away. It depends on distance, not on any local quantity.

Actually, that’s pretty much what quantum field theory is in fact saying, since none of the beta functions of real-world quantum fields vanish. Scale invariance is quite a complicated topic, but a very basic overview can be found here:
https://en.m.wikipedia.org/wiki/Scale_invariance
https://en.m.wikipedia.org/wiki/Beta_function_(physics)
-
What would happen to space if passage of time was accelerating? Equality principle. Similarity of empty space. A Shrinking matter theory that might actually work.
That makes no sense - if there’s no rescaling of size, there is no shrinking matter. You can’t have it both ways. One is physically possible, the other one isn’t.

It’s much more than an assumption - it’s a necessary consequence of the laws of gravity, which are exceedingly well tested.

They are not. To give one example - metric expansion is a function of distance, so the further out you look, the higher the recession velocities. This is true in all directions. How do you replicate this with ‘shrinking matter’, which depends only on the local rate of shrinkage?

This is precisely the issue I’m pointing out to you - they do not cancel out. If you rescale, you end up with a different Lagrangian; this is why the idea doesn’t work. I’m not just claiming this for no reason - it can be shown mathematically that these interactions are not invariant under rescaling. We know this. The only example of a real-world QFT that is actually invariant under rescaling would be QED without the presence of charged particles (ie sources are far away). Even full QED coupled to charged sources isn’t invariant under rescaling.
-
What would happen to space if passage of time was accelerating? Equality principle. Similarity of empty space. A Shrinking matter theory that might actually work.
And this is the problem, because, in natural units, the coupling constants in the weak and strong Lagrangians are dimensionless. So if you rescale lengths, the relative strengths of the various terms within the Lagrangian change, and the whole thing breaks down. The Lagrangian of such a system consists of more terms than just the potential, and it is the relationship between those terms that is the issue. Yes they do - they would need to be invariant under rescaling, which they are not.
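To illustrate the differential scaling of terms with the simplest possible example (the hydrogen-like Coulomb Hamiltonian, rather than a full Standard Model Lagrangian): under a rescaling of lengths r → λr', the kinetic and potential terms pick up different powers of λ,

```latex
H=-\frac{\hbar^{2}}{2m}\nabla^{2}-\frac{e^{2}}{4\pi\varepsilon_{0}r}
\;\longrightarrow\;
-\frac{1}{\lambda^{2}}\frac{\hbar^{2}}{2m}\nabla'^{2}-\frac{1}{\lambda}\frac{e^{2}}{4\pi\varepsilon_{0}r'}
```

Since one term scales as λ^{-2} and the other as λ^{-1}, no choice of λ ≠ 1 preserves their relative strength - which is exactly the breakdown described above.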
-
What would happen to space if passage of time was accelerating? Equality principle. Similarity of empty space. A Shrinking matter theory that might actually work.
Unfortunately neither the weak nor the strong interaction are invariant under rescaling, so no ‘shrinking matter’ model - irrespective of its details - can ever work, on fundamental grounds.
-
Paper: A causal mechanism for gravity
No, because the geodesic depends chiefly on the orientation of the light ray, not its spatial trajectory. You can see this in one of the other examples I gave - the satellite orbiting on the same trajectory in and against the direction of rotation, with different outcomes. So it doesn’t matter what kind of field you define in space, you can’t capture this behaviour. What does it even mean for the gradient to be shaped like a spiral? What kind of scalar field would give rise to such a gradient? But yes, feel free to investigate further. That’s how science is done, after all, and that’s how one learns 👍
-
Paper: A causal mechanism for gravity
Ok, fair enough.

Well, there are very many other solutions to the field equations where that is the case. For example the FLRW metric - the notion of a “time dilation field” doesn’t even make sense here, since this spacetime isn’t asymptotically flat, so no Schwarzschild observer exists at infinity to function as a reference clock. You might find the book “Exact Solutions of Einstein’s Field Equations” by Stephani et al helpful, if you have access to it. It’s a nice survey of known analytic solutions - some very remarkable spacetimes here, which aren’t common knowledge. It’s quite mathematical though.

They most certainly do in spacetime - ie the geodesics differ (there is a difference in frequency shift at least). Whether their purely spatial trajectory differs I’m not 100% certain, but I suspect it does, as the light ray will get “dragged along” by the spinning mass on one side (just as a massive test particle would), so it should experience more deflection when oriented along the direction of rotation. A quick search yields this: https://arxiv.org/pdf/1910.04372.pdf

Underneath equation 16, there are plots for “effective potentials” (a mathematical term within the equation of motion); as you can see, these terms differ in the Kerr case between direct and retrograde geodesics - so there is a difference in deflection angles between these cases. The exact expression is given in equation 60, which unfortunately is very complicated, and can only be treated numerically (it’s an elliptic integral); but you find plots of typical cases a bit further down in figures 8 and 9 (dashed is Kerr-direct, dotted is Kerr-retrograde) - showing that the angle is indeed different depending on whether the deflection is direct or retrograde, as I suspected. The difference is in fact a lot larger than I would have expected.
For comparison it also shows the Schwarzschild, Reissner-Nordström and Kerr-Newman cases (we haven’t spoken about electric charge here, but that adds yet another degree of freedom).
-
Do We Have Free Will?
If you really think you have free will, you obviously haven’t been owned by a cat 🐈
-
Paper: A causal mechanism for gravity
I pointed out only that they lie in the same equatorial plane, and pass the body at the same minimum distance. But I also made it clear that they experience different frequency shifts, so the geodesics through spacetime are not identical. In practice, the total deflection angle would likely also be different, despite the same closest-approach distance (I’d have to check this first).

Which is precisely the first of my conditions - asymptotic flatness.

Ok, thank you for clarifying, I was indeed confused on this. It makes more sense now. Now, if you demand that geodesics be approximately determined by time dilation and its gradient alone - which is the g00 component of the metric - that means the other metric components should be negligible within the geodesic equation (which you retain, as you say). This is precisely the other three conditions in my list, plus an extra assumption of low velocity and weak fields (so that g00 dominates over g11 by a factor of c^2). So we have recovered the necessity of my boundary conditions. Your proposal may work, but only if all these conditions are met. The rotating body eg violates spherical symmetry and is not static, so the geodesic cannot depend on g00 alone. Each of the other examples I gave violates one or more of these conditions.

GR on the other hand makes no assumptions about the metric components - it treats them all equally, and accounts for them all in the geodesic equation. For interior spacetimes (which we haven’t even spoken about yet) these components are all wired up to the various components of the energy-momentum tensor via the field equations, providing a comprehensive account of gravity and its various sources. In such scenarios, tidal effects in both time and space are important, so it goes far beyond time dilation alone. So if you can just acknowledge that your idea is useful in some circumstances, but limited in its domain of applicability, then we can be all good.
After all, you can’t replace a rank-2 tensor with a single scalar field (its 00-component) and expect not to lose any information in the process - that should make intuitive sense, no?
-
Paper: A causal mechanism for gravity
But this isn’t what GR predicts - the geodesic in fact is determined by all components of the metric, plus boundary conditions. I thought you said your idea is meant to replicate the results of GR? Also, what exactly do you mean by “scalar time dilation field”? Time dilation is a relationship between clocks - so assigning a single value to each point in space isn’t enough, you need to also fix your reference clock somehow. Well then there is a contradiction with your model, because in the real world frame dragging does very much affect the geodesics of light (and massive objects) around rotating objects. Real-world geodesics also depend on more than just the radial coordinate, unless all of the conditions I listed apply. As I mentioned earlier, for rotating objects there will be off-diagonal terms in the metric, making geodesics depend on at least two coordinates (radius and colatitude). I do not see how you propose to have this arise from a single scalar field? The gradient only tells you direction and slope of change at each point, it doesn’t add any extra degrees of freedom.
-
Paper: A causal mechanism for gravity
Do you mean a scalar gradient - which is a scalar quantity -, or the gradient of a scalar field (which is a vector)? These are very different. Yes, frame dragging is the GR effect. However, note that this arises from off-diagonal terms in the metric tensor - I do not see how you can self-consistently model this using a ‘scalar gradient’ (clarification required as per above) alone. This is why this effect does not (and cannot) exist in Newtonian gravity.
-
Gravitational Potential Energy in a 2 dimensional Universe
Sure, we can’t deduce the exact law, in particular not the constants. However, I think we can deduce the general form it needs to have - that is just a consequence of the generalised Stokes Theorem.
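To make that concrete: in the Newtonian limit, applying the divergence theorem (a special case of the generalised Stokes theorem) to a point source in n spatial dimensions forces the flux through any enclosing (n-1)-sphere to be constant, so

```latex
F(r)\propto\frac{1}{r^{\,n-1}}
\quad\Longrightarrow\quad
n=2:\;\;F(r)\propto\frac{1}{r}\,,\qquad\Phi(r)\propto\ln r
```

ie in two spatial dimensions the force would fall off as 1/r and the potential would grow logarithmically - the general form is fixed, even though the constants are not.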