Everything posted by joigus

  1. Picture an inflating balloon. Now suppress the space around and inside the balloon, as there is no such thing as "inside" or "outside" the balloon. There would be only whatever stuff makes up the balloon. Now make the balloon itself 3-dimensional, with time providing the "history" aspect of it. Spaces don't have to be embedded in higher-dimensional spaces. IOW, the only existing directions are those tangential to the balloon's rubber, if you will.
  2. How about StPD at the root of many, if not all, of these reports? https://en.wikipedia.org/wiki/Schizotypal_personality_disorder Religious types could, after all, be not much more than socially accepted schizotypals who have somehow found the medium, and the way, to make their illness socially palatable.
  3. Indeed. I --and others, you among them-- have said it before elsewhere on the forums, actually. It's the energy-momentum that sources the gravitational field. I also agree that there is not, of necessity, any causal connection between the Einstein tensor and the energy-momentum tensor.
  4. There is no such thing. Thermodynamics defines temperature based on thermal equilibrium. Statistical mechanics relates it to the average kinetic energy per degree of freedom. For statistical mechanics to make the connection between both concepts through the partition function and the Maxwell distribution, we need approximations that only hold for really big numbers of molecules.
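As a rough sketch of the standard textbook link: the Boltzmann factor \( e^{-E/k_{B}T} \) in the partition function leads, for a classical ideal gas, to the Maxwell distribution, and equipartition then gives \[ \left\langle \frac{1}{2}mv_{x}^{2}\right\rangle =\frac{1}{2}k_{B}T,\qquad\left\langle \frac{1}{2}mv^{2}\right\rangle =\frac{3}{2}k_{B}T \] and it's only in the limit of very large numbers of molecules that this \( T \) can be identified with the temperature that thermodynamics defines through equilibrium.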
  5. The symmetry group of electromagnetism is U(1) (complex numbers of length 1), and electric charge is at the centre of it. From the POV of symmetries, conservation laws, and irreducible representations of groups (particle multiplets), the QFT of electromagnetism and its brethren --weak interaction, strong interaction-- is more user-friendly by orders of magnitude. Things kinda "fall into boxes." GR is not like that. Not by a long shot. The symmetry group of GR is basically just any differentiable transformation of the coordinates. Once there, after one picks a set of coordinates that locally makes a lot of sense (it makes the equations easy to solve, yay!), it can go terribly wrong globally, so that one must introduce singular coordinate maps to fix the blunder. Because the symmetry group of GR is this unholy mess, group theory doesn't help much, if at all. The equations are non-linear, so: are there any solutions that might help clarify divergences, and so on, that we might have missed entirely? Who knows. In my opinion, the very fact that the set of coordinates that locally happens to be the most reasonable one can (and sometimes does) totally obscure the meaning of the coordinates far away from the local choice, and thereby their predictive power out there, makes the status of any parameters the theory suggests (mass in particular) much less helpful than charge is in EM. Mass is to GR nowhere near anything like what charge is to Yang-Mills theory (our paradigm of an honest-to-goodness quantum field theory).
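To make the U(1) statement concrete, a minimal sketch: a global phase rotation of a charged field, \[ \psi\rightarrow e^{iq\theta}\psi,\qquad\psi^{*}\rightarrow e^{-iq\theta}\psi^{*} \] leaves the Lagrangian invariant, and Noether's theorem then hands you a conserved current, \( \partial_{\mu}j^{\mu}=0 \), whose charge \( Q=\int d^{3}x\,j^{0} \) is the electric charge; promoting the constant \( \theta \) to \( \theta(x) \) is what forces the photon field \( A_{\mu} \) into the theory.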
  6. Yes! It's like a tinkertoy assembly for logically compressed inflexions[?]. Whatever I mean by that... For some reason (phonetics, syllables and their frequencies, maybe) it seems to be very friendly to the forming of composite words. The end result doesn't sound awkward.
  7. Is this (admittedly rough) understanding that I've acquired through the years correct? The currency of redox reactions is electrons; the currency of acid-base reactions is protons. Now, in a manner of speaking, both oxidisers and reducers can be understood in terms of "soaking up" and "giving off" electrons, and both bases and acids can be understood in terms of "soaking up" and "giving off" protons. That's the reason why so much of chemistry hinges on these two dual concepts. Other cations, even the smallest ones, like Li+, are "monsters" in comparison to H+ -- by orders of magnitude. So even though the mean free path of a proton is sizeably smaller than that of an electron, it's bound to be gigantic as compared to that of even such a small thing as Li+. That would qualitatively account for an extraordinarily high mobility of protons, and thereby the reactivity of anything that either gives them off or soaks them up. That's the key to the concept of Lewis acids. Is it not? Then, for something to be a base, in its most general sense, it must be able to soak up protons. But for it to display this character, there must be some protons around to soak up. Wouldn't something like this be at the root of NH3 not "behaving as a base" just by itself, or in the presence of chemicals that cannot give off protons? Wouldn't it behave as a base in the absence of water, but in the presence of acids (neutralisation), like \( \mathrm{NH_{3}+HA\longrightarrow NH_{4}^{+}+A^{-}} \), with HA being any acid?
  8. German scientific terms are generally very precise. They feel no embarrassment in making long composite words tagging essential characteristics of the thing. Bremsstrahlung in Spanish is radiación de frenado, which is exactly 'braking radiation', but requires three words. Pronounced as in English, I assume.
  9. For a while I felt nervous about zitterbewegung and bremsstrahlung, but it grows on you.
  10. Spatially flat and space-time flat are often conflated in the literature. I would have to review the Riemann components with mixed time-space pairings of indices (a space cannot warp in just one dimension). I'm not sure, nor do I have the time (nor the energy) now to review these notions. Maybe someone can do it for all of us. Most likely @Markus Hanke. I'm sure de Sitter space-time is often characterised as having constant curvature. We're kind of mixing it all together, as if the scalar curvature were "the thing" that says whether a manifold is flat or not. It's more involved than that. If just one \( R_{ijkl} \) is non-zero, the manifold is just not flat. Calabi-Yau manifolds are another example: they are Ricci-flat (\( R_{\mu\nu}=0 \)), but not flat. Yes. Thank you. Read my comments to @MigL on flat vs spatially flat, Ricci-flat, and so on. They're very much in the direction you're pointing. Right now I'm beat, but I promise to follow up on this. Yes, of course you're right. There's this theorem due to Birkhoff that the external solution is unique as long as it's static and spherically symmetric. Schwarzschild's solution was just an unfortunate example. I know very little about exact solutions in GR. I just figure there must be solutions with not all curvatures zero and with no clearly identifiable matter distribution giving rise to them.
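Just to pin down the hierarchy being invoked here (standard definitions, nothing exotic): \[ \text{flat:}\;\left.R^{\rho}\right._{\sigma\mu\nu}=0\quad\Longrightarrow\quad\text{Ricci-flat:}\;R_{\mu\nu}=0\quad\Longrightarrow\quad R=0 \] and none of the arrows reverses: Schwarzschild and Calabi-Yau manifolds have \( R_{\mu\nu}=0 \) with a non-vanishing Riemann tensor, and there are manifolds with \( R=0 \) that are not even Ricci-flat.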
  11. Infinite at one point. Zero everywhere else. But you're right. It's not a good example. De Sitter is more what I was thinking about.
  12. It's a bit more subtle than this, I think. You can have vacuum solutions with curvature. If you think about it, the Schwarzschild solution is a vacuum solution. De Sitter and anti-De Sitter are too. OTOH, the Einstein field equations are nonlinear, so I wouldn't rule out other exotic vacuum solutions with curvature.
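A quick way to see it: "vacuum" just means the source term vanishes, so the field equations reduce to \[ R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}+\Lambda g_{\mu\nu}=0\quad\Longrightarrow\quad R_{\mu\nu}=\Lambda g_{\mu\nu} \] which constrains the Ricci tensor (to zero, if \( \Lambda=0 \)) but says nothing about the Weyl part of the Riemann tensor. Schwarzschild (\( \Lambda=0 \)) and de Sitter / anti-de Sitter (\( \Lambda\neq0 \)) are exactly of this kind.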
  13. Right. A scalar provides a particular type of covariance. Rank zero. \( L'(x')=L(x) \). That's what one must prove in this case.
  14. LOL, forgot the ×10⁶, didn't I? Coming from me lately, how could it be otherwise? Fortunately "more than 4000" and "more than 6000" could also be more than 4.7×10⁹ y.o. Thanks
  15. Had the Moon disappeared more than 4000 4×10⁹ ya, it would have been much, much worse. Most Earth scientists think it was essential in the appearance of life. Or SpaceX.
  16. Ok, so you're old school. I respect that. But mind you that coordinates could be misleading you in some respects while they're helping you in others. This observation should always be carried along. In flat coordinates, sure. In curvilinear coordinates, it's a bit more involved than that. That's called the Laplace-Beltrami operator, and you have to write some metric tensors in between, and also some epsilons, if I remember correctly. I would need some time to remember all the machinery, if yours is a genuine question. https://en.wikipedia.org/wiki/Laplace–Beltrami_operator Also books like Gockeler-Schucker, etc., on differential-geometry methods for theoretical physics. Are you just asking, or trying to catch me again? Yes. That's what Feynman did all the time. For some reason he didn't like the g's.
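For the record, the standard coordinate expression being alluded to is \[ \Delta f=\frac{1}{\sqrt{\left|g\right|}}\partial_{\mu}\left(\sqrt{\left|g\right|}\,g^{\mu\nu}\partial_{\nu}f\right) \] so it's the square root of the metric determinant and the inverse metric that get written in between; in Cartesian coordinates on flat space it collapses back to the usual \( \partial_{\mu}\partial^{\mu}f \).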
  17. This is getting farther and farther away from discussing anything substantial (let alone anything within the OP context), and more and more about you going out of your way to shift the context so that I could be proven wrong in that context. Anyway, the Kronecker delta with a superindex and a subindex is an isotropic tensor. The Kronecker delta with, e.g., two covariant indices (like \( T_{\alpha\beta}=\delta_{\alpha\beta} \)) tells you much, much less, as it's a frame-dependent equality. \( \delta_{\alpha\beta} \) is telling you that what you have here is just a rule to dot-multiply vectors. How much or how little does that "encode"? The dimension, and the fact that you're dealing with a scalar product? That's about all. I'll leave it to you to decide how much that is telling you. There is a reason why the connection is given in terms of \( g^{-1}\partial g \). Neither covariant indices nor contravariant ones give you the curvature; it's an interplay between the two that does it. There is just one other isotropic tensor in every space (the \( \epsilon \) tensor). It is kind of telling you about orthogonality. The Kronecker delta only looks standard (1's & 0's) with one index up and the other down. The epsilon tensor only looks standard (1's, 0's, and -1's) with all indices down or all up. Otherwise, they show you all kinds of misleading info, as I clearly showed you with my textbook-standard example.

Also, multilinear operators are not just "tensors" independently of a context, as you seem to imply. Multilinear operators are or are not tensors depending on the relevance of a certain group of transformations. There are such things as O(3) tensors (orthogonal tensors), U(n) tensors,... there are pseudo-Euclidean tensors (the only ones we were talking about to begin with), there are tensors under diffeomorphisms (the ones you, for some reason, want to shift the conversation to, although they have little to do with the initial discussion), etc.

Let me point out more mistakes that you're making: Again, you are wrong. There's nothing special about differentials of the coordinates: all such objects form a basis. They are called coordinate bases or holonomic bases --see below. But not all bases are made up of derivatives of coordinates. It's only when they are that they are thus called. This is from Stewart, Advanced General Relativity, Cambridge 1991: So no, not all bases are coordinate bases. And I wrote down a totally legit basis. More on that: https://www.physicsforums.com/threads/non-coordinate-basis-explained.950852/#:~:text=Some examples of non-coordinate basis vectors include polar basis,defined by traditional coordinate axes. Under a different name: https://en.wikipedia.org/wiki/Holonomic_basis

So one thing is a basis, and quite a specific (and a distinct) thing is a coordinate basis. In still other words: a coordinate basis is made up of a set of exact differential forms and their duals. This is in close analogy to what happens in thermodynamics: you can use the Pfaffian forms of heat and work to describe any change in the energy of a system, even though there is no "heat coordinate" or "work coordinate". But a basis they are: \( dU=TdS-PdV \), even though \( TdS \) is not \( d \)(anything) and \( PdV \) is not \( d \)(anything).
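A concrete example of the distinction, in plane polar coordinates: the orthonormal pair \[ \hat{e}_{r}=\partial_{r},\qquad\hat{e}_{\theta}=\frac{1}{r}\partial_{\theta},\qquad\left[\hat{e}_{r},\hat{e}_{\theta}\right]=-\frac{1}{r}\hat{e}_{\theta}\neq0 \] is a perfectly good basis, but the non-vanishing commutator tells you there is no set of coordinates whose coordinate lines it is tangent to; a coordinate (holonomic) basis always satisfies \( \left[\partial_{\mu},\partial_{\nu}\right]=0 \).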
Do you or do you not agree that the variational derivative @Genady was talking about should be written as, \[ \partial^{\mu}\left(\sum_{n}\frac{\partial\mathcal{L}}{\partial\left(\partial^{\mu}\phi_{n}\right)}\partial_{\nu}\phi_{n}-g_{\mu\nu}\mathcal{L}\right)=0 \] instead of, \[ \partial_{\mu}\left(\sum_{n}\frac{\partial\mathcal{L}}{\partial\left(\partial_{\mu}\phi_{n}\right)}\partial_{\nu}\phi_{n}-g_{\mu\nu}\mathcal{L}\right)=0 \] As far as I can tell, that was the question, your detour into differential geometry has been satisfactorily answered, and you seem to have nothing further to say that's remotely on-topic.
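Just to spell out the index bookkeeping behind the question: the object in brackets, \[ T_{\mu\nu}=\sum_{n}\frac{\partial\mathcal{L}}{\partial\left(\partial^{\mu}\phi_{n}\right)}\partial_{\nu}\phi_{n}-g_{\mu\nu}\mathcal{L} \] carries both free indices downstairs, so the divergence contracting with it has to carry an upper \( \mu \), giving \( \partial^{\mu}T_{\mu\nu}=0 \); in the second version the first term in the bracket carries an upper \( \mu \) while \( g_{\mu\nu}\mathcal{L} \) carries a lower one, so the two terms can't even be added consistently.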
  18. Yes, you're right. The unusual writing of the Lagrangian threw me off. Sorry. That is indeed the way to generalise to higher-order derivatives. I've proven it many times, but this time I had just a couple of minutes and I screwed up. There's just a coefficient difference. Later.
  19. Ok. Yes, @RobertSmart is right. There is a little mistake in the constants. Let me display my calculation in detail, because his LaTeX seems to have been messed up by the compiling engine or whatever, and I seem to find a small discrepancy with him. Your Lagrangian, \[ \mathcal{L}=-\frac{1}{2}\phi\Box\phi+\frac{1}{2}m^{2}\phi^{2}-\frac{\lambda}{4!}\phi^{4} \] I prefer to write with an index notation, which is more convenient for variational derivatives: \[ \mathcal{L}=-\frac{1}{2}\phi\left.\phi^{,\mu}\right._{,\mu}+\frac{1}{2}m^{2}\phi^{2}-\frac{\lambda}{4!}\phi^{4} \] As we have no dependence on first-order derivatives, \[ \frac{\partial\mathcal{L}}{\partial\phi_{,\mu}}=0 \] we get as the only Euler-Lagrange equation, \[ \frac{\partial\mathcal{L}}{\partial\phi}=-\frac{1}{2}\left.\phi^{,\mu}\right._{,\mu}+m^{2}\phi-\frac{\lambda}{3!}\phi^{3}=0 \] Or, \[ -\frac{1}{2}\Box\phi+m^{2}\phi-\frac{\lambda}{3!}\phi^{3}=0 \] Or, a bit more streamlined, \[ \Box\phi-2m^{2}\phi+\frac{\lambda}{3}\phi^{3}=0 \] Sorry I didn't get around to it sooner. Paraphrasing Sir Humphrey Appleby: Is that finally final? I hope so. PS: BTW, this is a simplified symmetry-breaking Lagrangian. The real thing in the SM is a complex SU(2)-symmetric multiplet \( \left(\phi_{1},\phi_{2},\phi_{3},\phi_{4}\right) \).
  20. OP also specifically formulated it in terms of god/gods and supernatural beings or agencies:
  21. Your eq. of motion (once corrected) looks fine. Most people prefer to write \( \frac{1}{2}\partial_{\mu}\phi\,\partial^{\mu}\phi \) instead of \( -\frac{1}{2}\phi\Box\phi \), but they differ by just a total divergence, so they are equivalent (they lead to the same equations of motion). By \( \Box \) I mean the d'Alembert operator. I'll check in more detail later; I would have to do it in my head now and I could miss a sign. This is a famous equation.
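Explicitly, the identity in question is \[ -\frac{1}{2}\phi\Box\phi=\frac{1}{2}\partial_{\mu}\phi\,\partial^{\mu}\phi-\frac{1}{2}\partial_{\mu}\left(\phi\,\partial^{\mu}\phi\right) \] and the last term, being a total divergence, integrates to a boundary term in the action and so drops out of the variation.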
  22. Exactly! In fact, that's how you define a symmetry in the action: a total divergence does not change anything but the "surface" (hypersurface) terms at \( t=t_{1} \) and \( t=t_{2} \). You must apply Stokes' theorem first. Perhaps you did, but I didn't have time to check. It is amazing that you're doing this stuff at this point in your life.
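Schematically, with \( K^{\mu} \) standing for whatever the transformation generates and \( \Omega \) the integration region, that's the statement \[ \delta\mathcal{L}=\partial_{\mu}K^{\mu}\quad\Longrightarrow\quad\delta S=\int_{\Omega}d^{4}x\,\partial_{\mu}K^{\mu}=\oint_{\partial\Omega}K^{\mu}d\Sigma_{\mu} \] so by the divergence theorem the change in the action lives only on the boundary hypersurfaces, and the equations of motion in the bulk are untouched.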
  23. No, it was a joke. I thought the emoji had given it away. I'm sorry it didn't, and you took it seriously. Value judgement is fallacious. Like "your analogy is false", "I think you're wrong". That's what I was mocking. Funny (and may I say telling) that you consider it a contest. It's not. Nor should it be. You are just wrong, or at least you sound to me very much like you are, in what seems to be your interpretation of tensors. Take this example: flat \( \mathbb{R}^{2} \) (the plane). All of this should be self-explanatory. \[ ds^{2}=dx^{i}g_{ij}dx^{j}=\left(\begin{array}{cc} dr & d\theta\end{array}\right)\left(\begin{array}{cc} 1 & 0\\ 0 & r^{2} \end{array}\right)\left(\begin{array}{c} dr\\ d\theta \end{array}\right)=dr^{2}+r^{2}d\theta^{2} \] \[ ds^{2}=dx_{i}g^{ij}dx_{j}=\left(\begin{array}{cc} dr & r^{2}d\theta\end{array}\right)\left(\begin{array}{cc} 1 & 0\\ 0 & r^{-2} \end{array}\right)\left(\begin{array}{c} dr\\ r^{2}d\theta \end{array}\right)=dr^{2}+r^{2}d\theta^{2} \] \[ ds^{2}=dx_{i}\left.\delta^{i}\right._{j}dx^{j}=\left(\begin{array}{cc} dr & r^{2}d\theta\end{array}\right)\left(\begin{array}{cc} 1 & 0\\ 0 & 1 \end{array}\right)\left(\begin{array}{c} dr\\ d\theta \end{array}\right)=dr^{2}+r^{2}d\theta^{2} \] In fact, \( g_{ij} \) and \( g^{ij} \) will give you more than you bargained for: they will give you a spurious singularity that isn't there at all. There's nothing wrong at \( r=0 \). The covariant methods tell you that. But the form of the once-covariant, once-contravariant components of the same silly little thing gives you a clue, as \( \left.\delta^{i}\right._{j} \) tells you clearly that nothing funny is going on at that point. The Kronecker delta, in this case, is more honest-to-goodness than the other ones. If you calculate the Riemann, of course, it will tell you that beyond any doubt. It's at that level that you can talk about anholonomy and curvature. I'm sure you know all that from what we've talked about before. These are cautionary tales that are in the literature. Another possibility is that we didn't understand each other's point. One can never be sure. And I apologise to @Genady, for this was really his thread, on variational derivatives, and it wasn't about curvature at all. 🤷‍♂️
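If anyone wants to check the "calculate the Riemann" claim without doing it by hand, here is a minimal sympy sketch (the variable names and layout are mine): it builds the Christoffel symbols for \( g_{ij}=\mathrm{diag}\left(1,r^{2}\right) \) and confirms that every component of the Riemann tensor vanishes, so the trouble at \( r=0 \) is purely a coordinate artifact.

# Minimal sympy check: Christoffel symbols and Riemann tensor for the plane
# in polar coordinates, g_{ij} = diag(1, r^2).
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])   # metric components g_{ij}
ginv = g.inv()                       # inverse metric g^{ij}
n = 2

# Christoffel symbols: Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
Gamma = [[[sp.simplify(sum(sp.Rational(1, 2)*ginv[a, d]
           *(sp.diff(g[d, c], x[b]) + sp.diff(g[d, b], x[c]) - sp.diff(g[b, c], x[d]))
           for d in range(n)))
           for c in range(n)]
          for b in range(n)]
         for a in range(n)]

# Riemann tensor: R^a_{bcd} = d_c Gamma^a_{db} - d_d Gamma^a_{cb}
#                           + Gamma^a_{ce} Gamma^e_{db} - Gamma^a_{de} Gamma^e_{cb}
def riemann(a, b, c, d):
    expr = sp.diff(Gamma[a][d][b], x[c]) - sp.diff(Gamma[a][c][b], x[d])
    expr += sum(Gamma[a][c][e]*Gamma[e][d][b] - Gamma[a][d][e]*Gamma[e][c][b]
                for e in range(n))
    return sp.simplify(expr)

print(Gamma[0][1][1], Gamma[1][0][1])            # the non-trivial Christoffels: -r and 1/r
print(all(riemann(a, b, c, d) == 0
          for a in range(n) for b in range(n)
          for c in range(n) for d in range(n)))  # True: the Riemann tensor vanishes identically

Running it prints the two non-trivial Christoffel symbols, -r and 1/r, and then True for the flatness check.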
  24. No, yours is. Wanna know of a better analogy? Mine. Mine is better. It's been 50 years since I last did this.