Everything posted by joigus

  1. This is getting farther and farther away from discussing anything substantial (let alone anything within the OP context), and more and more about you going out of your way to shift the context so that I can be proven wrong in that context.

Anyway, the Kronecker delta with a superindex and a subindex is an isotropic tensor. The Kronecker delta with, e.g., two covariant indices (like \( T_{\alpha\beta}=\delta_{\alpha\beta} \)) tells you much, much less, as it's a frame-dependent equality. \( \delta_{\alpha\beta} \) is telling you that what you have here is just a rule to dot-multiply vectors. How much or how little does that "encode"? The dimension, and the fact that you're dealing with a scalar product? That's about all. I'll leave it to you to decide how much that is telling you. There is a reason why the connection is given in terms of \( g^{-1}\partial g \). Neither covariant indices nor contravariant ones give you the curvature; it's an interplay between the two that does it.

There is just one other isotropic tensor in every space (the \( \epsilon \) tensor). It is kind of telling you about orthogonality. The Kronecker delta only looks standard (1's & 0's) with one index up and the other down. The epsilon tensor only looks standard (1's, 0's, and -1's) with all indices down or all up. Otherwise, they show you all kinds of misleading info, as I clearly showed you with my textbook-standard example.

Also, multilinear operators are not just "tensors" independently of a context, as you seem to imply. Multilinear operators are or are not tensors depending on the relevance of a certain group of transformations. There are such things as O(3) tensors (orthogonal tensors), U(n) tensors... there are pseudo-Euclidean tensors (the only ones we were talking about to begin with), there are tensors under diffeomorphisms (the ones you, for some reason, want to shift the conversation to, although they have little to do with the initial discussion), etc.

Let me point out more mistakes that you're making: Again, you are wrong. There's nothing special about differentials of the coordinates: all such objects form a basis. They are called coordinate bases or holonomic bases --see below. But not all bases are made up of derivatives of the coordinates; it's only when they are that they are thus called. This is from Stewart, Advanced General Relativity, Cambridge 1991: So no, not all bases are coordinate bases. And I wrote down a totally legit basis (see the sketch appended at the end of this post). More on that: https://www.physicsforums.com/threads/non-coordinate-basis-explained.950852/#:~:text=Some examples of non-coordinate basis vectors include polar basis,defined by traditional coordinate axes. Under a different name: https://en.wikipedia.org/wiki/Holonomic_basis

So a basis is one thing, and a coordinate basis is quite a specific (and distinct) thing. In still other words: a coordinate basis is made up of a set of exact differential forms and their duals. This is in close analogy to what happens in thermodynamics: you can use the Pfaffian forms of heat and work to define any change in the energy of a system, even though there is no "heat coordinate" or "work coordinate". But a basis they are: dU=TdS-PdV, even though TdS is not d(anything) and PdV is not d(anything). 
Do you or do you not agree that the variational derivative @Genady was talking about should be written as, \[ \partial^{\mu}\left(\sum_{n}\frac{\partial\mathcal{L}}{\partial\left(\partial^{\mu}\phi_{n}\right)}\partial_{\nu}\phi_{n}-g_{\mu\nu}\mathcal{L}\right)=0 \] instead of, \[ \partial_{\mu}\left(\sum_{n}\frac{\partial\mathcal{L}}{\partial\left(\partial_{\mu}\phi_{n}\right)}\partial_{\nu}\phi_{n}-g_{\mu\nu}\mathcal{L}\right)=0 \] As far as I can tell, that was the question; your detour into differential geometry has been satisfactorily answered, and you seem to have nothing further to say that's remotely on-topic.
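Coming back to the coordinate-basis point, and since I promised a worked example, here is a minimal sketch (standard material): take the plane in polar coordinates and the orthonormal frame \[ e_{\hat{r}}=\partial_{r},\qquad e_{\hat{\theta}}=\frac{1}{r}\partial_{\theta} \] It is a perfectly legit basis wherever \( r>0 \), but \[ \left[e_{\hat{r}},e_{\hat{\theta}}\right]=-\frac{1}{r}e_{\hat{\theta}}\neq0 \] so there are no coordinates \( \left(u,v\right) \) with \( e_{\hat{r}}=\partial_{u} \) and \( e_{\hat{\theta}}=\partial_{v} \). A basis, yes; a coordinate (holonomic) basis, no.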
  2. Yes, you're right. The unusual writing of the Lagrangian threw me off. Sorry. That is indeed the way to generalise to higher-order derivatives. I've proven it many times, but this time I had just a couple of minutes and I screwed up. There's just a coefficient difference. Later.
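For the record, the generalisation to Lagrangians containing second derivatives that's being referred to is, if I remember correctly, \[ \frac{\partial\mathcal{L}}{\partial\phi}-\partial_{\mu}\frac{\partial\mathcal{L}}{\partial\left(\partial_{\mu}\phi\right)}+\partial_{\mu}\partial_{\nu}\frac{\partial\mathcal{L}}{\partial\left(\partial_{\mu}\partial_{\nu}\phi\right)}=0 \]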
  3. Ok. Yes, @RobertSmart is right. There is a little mistake in the constants. Let me display my calculation in detail, because his LaTeX seems to have been messed up by the compiling engine or whatever, and I seem to find a small discrepancy with him. Your Lagrangian, \[ \mathcal{L}=-\frac{1}{2}\phi\Box\phi+\frac{1}{2}m^{2}\phi^{2}-\frac{\lambda}{4!}\phi^{4} \] I prefer to write with an index notation, which is more convenient for variational derivatives: \[ \mathcal{L}=-\frac{1}{2}\phi\left.\phi^{,\mu}\right._{,\mu}+\frac{1}{2}m^{2}\phi^{2}-\frac{\lambda}{4!}\phi^{4} \] As we have no dependence on first-order derivatives, \[ \frac{\partial\mathcal{L}}{\partial\phi_{,\mu}}=0 \] we get as the only Euler-Lagrange equation, \[ \frac{\partial\mathcal{L}}{\partial\phi}=-\frac{1}{2}\left.\phi^{,\mu}\right._{,\mu}+m^{2}\phi-\frac{\lambda}{3!}\phi^{3}=0 \] Or, \[ -\frac{1}{2}\Box\phi+m^{2}\phi-\frac{\lambda}{3!}\phi^{3}=0 \] Or a bit more streamlined, \[ \Box\phi-2m^{2}\phi+\frac{\lambda}{3}\phi^{3}=0 \] Sorry I didn't get around to it sooner. Paraphrasing Sir Humphrey Appleby: Is that finally final? I hope so. PS: BTW, this is a simplified symmetry-breaking Lagrangian. The real thing in the SM is a complex SU(2)-symmetric multiplet \( \left(\phi_{1},\phi_{2},\phi_{3},\phi_{4}\right) \).
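PS2: The potential part of the variational derivative, at least, can be checked in a couple of lines of sympy (just my own little sanity check, with the coupling written as lambda):

import sympy as sp

phi, m, lam = sp.symbols('phi m lambda')
V = sp.Rational(1, 2)*m**2*phi**2 - lam/sp.factorial(4)*phi**4   # the potential part of the Lagrangian above
print(sp.diff(V, phi))   # m**2*phi - lambda*phi**3/6, i.e. m^2 phi - (lambda/3!) phi^3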
  4. OP also specifically formulated it in terms of god/gods and supernatural beings or agencies:
  5. Your eq. of motion (once corrected) looks fine. Most people prefer to write \( \frac{1}{2}\partial_{\mu}\phi\,\partial^{\mu}\phi \) instead of \( -\frac{1}{2}\phi\Box\phi \), but they differ by just a total divergence, so they are equivalent (they lead to the same equations of motion). By \( \Box \) I mean the D'Alembert operator. I'll check in more detail later; I would have to do it in my head now and I could miss a sign. This is a famous equation.
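If anyone wants to check the total-divergence claim symbolically rather than in their head, here's a minimal sympy sketch of my own (it assumes a mostly-minus Minkowski metric and a generic scalar field phi):

import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = (t, x, y, z)
eta = sp.diag(1, -1, -1, -1)   # Minkowski metric, mostly-minus signature (my assumption)
phi = sp.Function('phi')(*coords)

grad = [sp.diff(phi, c) for c in coords]                              # d_mu phi
box = sum(eta[m, m]*sp.diff(phi, coords[m], 2) for m in range(4))     # box phi (diagonal metric)

L1 = sp.Rational(1, 2)*sum(eta[m, m]*grad[m]**2 for m in range(4))    # (1/2) d_mu phi d^mu phi
L2 = -sp.Rational(1, 2)*phi*box                                       # -(1/2) phi box phi
div = sum(sp.diff(sp.Rational(1, 2)*phi*eta[m, m]*grad[m], coords[m])
          for m in range(4))                                          # d_mu [ (1/2) phi d^mu phi ]

print(sp.simplify(L1 - L2 - div))   # 0: the two Lagrangians differ by a total divergence

It prints 0, which is just the statement that \( \frac{1}{2}\partial_{\mu}\phi\,\partial^{\mu}\phi+\frac{1}{2}\phi\Box\phi=\partial_{\mu}\left(\frac{1}{2}\phi\partial^{\mu}\phi\right) \).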
  6. Exactly! In fact, that's how you define a symmetry in the action, as a total divergence does not change the "surface" (hypersurface) terms at t=t1 and t=t2. You must apply Stokes' theorem first. Perhaps you did, but I didn't have time to check. It is amazing that you're doing this stuff at this point in your life.
  7. No, it was a joke. I thought the emoji had given it away. I'm sorry it didn't, and you took it seriously. Value judgement is fallacious. Like "your analogy is false", "I think you're wrong". That's what I was mocking. Funny (and may I say telling) that you consider it a contest. It's not. Nor should it be.

You are just wrong, or you sound to me very much like you are, in what seems to be your interpretation of tensors. Take this example: flat \( \mathbb{R}^{2} \) (the plane). All of this should be self-explanatory. \[ ds^{2}=dx^{i}g_{ij}dx^{j}=\left(\begin{array}{cc} dr & d\theta\end{array}\right)\left(\begin{array}{cc} 1 & 0\\ 0 & r^{2} \end{array}\right)\left(\begin{array}{c} dr\\ d\theta \end{array}\right)=dr^{2}+r^{2}d\theta^{2} \] \[ ds^{2}=dx_{i}g^{ij}dx_{j}=\left(\begin{array}{cc} dr & r^{2}d\theta\end{array}\right)\left(\begin{array}{cc} 1 & 0\\ 0 & r^{-2} \end{array}\right)\left(\begin{array}{c} dr\\ r^{2}d\theta \end{array}\right)=dr^{2}+r^{2}d\theta^{2} \] \[ ds^{2}=dx_{i}\left.\delta^{i}\right._{j}dx^{j}=\left(\begin{array}{cc} dr & r^{2}d\theta\end{array}\right)\left(\begin{array}{cc} 1 & 0\\ 0 & 1 \end{array}\right)\left(\begin{array}{c} dr\\ d\theta \end{array}\right)=dr^{2}+r^{2}d\theta^{2} \] In fact, \( g_{ij} \) and \( g^{ij} \) will give you more than you bargained for: They will give you a spurious singularity that isn't there at all. There's nothing wrong at \( r=0 \). The covariant methods tell you that. But the form of the once-covariant, once-contravariant components of the same silly little thing gives you a clue, as \( \left.\delta^{i}\right._{j} \) tells you clearly that nothing funny is going on at that point. The Kronecker delta, in this case, is more honest-to-goodness than the other ones. If you calculate the Riemann, of course, it will tell you that beyond any doubt. It's at that level that you can talk about anholonomy and curvature. I'm sure you know all that from what we've talked about before. These are cautionary tales that are in the literature.

Another possibility is that we didn't understand each other's point. One can never be sure. And I apologise to @Genady, for this was really his thread on variational derivatives and wasn't about curvature at all. 🤷‍♂️
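PS: For completeness, here's a quick hand-rolled sympy check of my own (no differential-geometry package, just a sketch) that the Riemann tensor of the polar-coordinate metric above vanishes identically, so nothing funny is going on anywhere on the chart:

import sympy as sp
from itertools import product

r, th = sp.symbols('r theta', positive=True)
x = (r, th)
g = sp.Matrix([[1, 0], [0, r**2]])   # the polar-coordinate metric of the example
ginv = g.inv()
n = 2

# Christoffel symbols: Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
Gamma = [[[sp.simplify(sum(sp.Rational(1, 2)*ginv[a, d]*(sp.diff(g[d, c], x[b])
                                                         + sp.diff(g[d, b], x[c])
                                                         - sp.diff(g[b, c], x[d]))
                           for d in range(n)))
           for c in range(n)] for b in range(n)] for a in range(n)]

# Riemann tensor: R^a_{bcd} = d_c Gamma^a_{db} - d_d Gamma^a_{cb}
#                             + Gamma^a_{ce} Gamma^e_{db} - Gamma^a_{de} Gamma^e_{cb}
def riemann(a, b, c, d):
    term = sp.diff(Gamma[a][d][b], x[c]) - sp.diff(Gamma[a][c][b], x[d])
    term += sum(Gamma[a][c][e]*Gamma[e][d][b] - Gamma[a][d][e]*Gamma[e][c][b]
                for e in range(n))
    return sp.simplify(term)

print(all(riemann(a, b, c, d) == 0
          for a, b, c, d in product(range(n), repeat=4)))   # True: the plane is flat

It prints True; the nonzero Christoffels, \( \left.\Gamma^{r}\right._{\theta\theta}=-r \) and \( \left.\Gamma^{\theta}\right._{r\theta}=1/r \), conspire to cancel in the Riemann tensor.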
  8. No, yours is. Wanna know of a better analogy? Mine. Mine is better. It's been 50 years since I last did this.
  9. That's all I wanted to hear, really. If you want to discuss GR, I suggest you open a new thread doing so. Discussing topological aspects of GR on a thread about index gymnastics in flat QFT would be highly misleading. One can obtain Shakespeare's Sonnets from the alphabet, but I see no Shakespeare in ABCDEFGHIJKLMNOPQRSTUVWXYZ. Do you? It's what you do with the alphabet that matters. I can do nothing like what Shakespeare did. The same happens with the metric. Yours is actually a common misconception. I'm just saying.
  10. What curvature? This is all pseudo-Euclidean metric we're talking about. This is QFT: \( g_{\mu\nu}=\eta_{\mu\nu} \). I think you mean in GR. But even in GR, the metric tensor does not encode curvature. The Riemann tensor does. And the metric tensor is covariantly constant. Because it is. That should give us a clue. It doesn't really encode much, does it? This is a common misconception, that the metric tensor components "encode" something. In terms of tetrads it's very clear that it's nothing but the identity operator. What's perplexing is that covariantly shifting with the Christoffels obtained from it around an infinitesimally small closed loop gives you something else, but that's a different story.
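In formulas, the covariant constancy I mean is just \[ \nabla_{\lambda}g_{\mu\nu}=\partial_{\lambda}g_{\mu\nu}-\left.\Gamma^{\rho}\right._{\lambda\mu}g_{\rho\nu}-\left.\Gamma^{\rho}\right._{\lambda\nu}g_{\mu\rho}=0 \] which holds identically for the Levi-Civita connection, whatever the curvature turns out to be.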
  11. No problem. By the way, I should have written, \[ \partial^{\mu}\left(\sum_{n}\frac{\partial\mathcal{L}}{\partial\left(\partial^{\mu}\phi_{n}\right)}\partial_{\nu}\phi_{n}-g_{\mu\nu}\mathcal{L}\right)=0 \] That is correct.
  12. Nowhere. You made no mistake. None whatsoever. You are just realising that Schwartz made a mistake, not you. So it was a typo. He probably meant to write something like, \[ \partial^{\mu}\left(\sum_{n}\frac{\partial\mathcal{L}}{\partial\left(\partial_{\mu}\phi_{n}\right)}\partial_{\nu}\phi_{n}-g_{\mu\nu}\mathcal{L}\right)=0 \] which is correct, and consistent with your derivation. That's the problem with books that don't follow the covariant/contravariant convention. Index gymnastics does that for you automatically. Sorry, I thought I'd told you:
  13. As matrices, they are. But as tensors, they aren't. They are one and the same basis-independent object, encoding the same physical information. This is exactly the same as a vector in one basis looking, e.g., like the matrix (0 1 0 0) but looking like (0 -1 0 0) in another basis. You are confusing the tensor with its components. IOW: All metrics, no matter the dimension and signature, look exactly like the identity matrix (the Kronecker delta) when the scalar product is expressed under the convention that the first factor is written in covariant components and the second one in contravariant ones. A tensor is a physical object. A matrix is just a collection of numbers used to represent that object. Let me put it this way: \[ U_{\mu}V^{\mu}=U_{\mu}\left.\delta^{\mu}\right._{\nu}V^{\nu}=U^{\mu}\left.\delta_{\mu}\right.^{\nu}V_{\nu}=U^{\mu}g_{\mu\nu}V^{\nu}=U_{\mu}g^{\mu\nu}V_{\nu} \]
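If a trivial numeric illustration helps, here's a throwaway numpy check of that chain of equalities (the component values are made up, and I'm assuming a mostly-minus signature):

import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric g_{mu nu}, mostly-minus (my assumption)
eta_inv = np.linalg.inv(eta)             # g^{mu nu}
delta = np.eye(4)                        # delta^mu_nu

U_up = np.array([1.0, 2.0, 0.5, -3.0])   # made-up contravariant components
V_up = np.array([0.3, -1.0, 4.0, 2.0])
U_down = eta @ U_up                      # lower the index: U_mu = g_{mu nu} U^nu
V_down = eta @ V_up

print(U_down @ V_up)              # U_mu V^mu
print(U_down @ delta @ V_up)      # U_mu delta^mu_nu V^nu
print(U_up @ eta @ V_up)          # U^mu g_{mu nu} V^nu
print(U_down @ eta_inv @ V_down)  # U_mu g^{mu nu} V_nu

All four lines print the same number, which is the whole point.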
  14. Oh. Got you. Yes, you're right. It should be what you say. Classic books in QFT tend to be rather fast-and-loose with the indices. \( \partial_{\nu}\mathcal{L}=\partial_{\mu}\left(g_{\mu\nu}\mathcal{L}\right) \) is not a tensor equation. \( \partial_{\nu}\mathcal{L}=\partial_{\mu}\left(\left.\delta^{\mu}\right._{\nu}\mathcal{L}\right) \) is. Although I should say there is no fundamental difference between \( g \) and \( \delta \) really. \( \delta \) is just \( g \) (viewed as just another garden-variety tensor) with an index raised (by using itself). Bogoliubov is similarly cavalier with the indices if I remember correctly. Is it from the 50's?
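In formulas, all I mean is \[ \left.\delta^{\mu}\right._{\nu}=g^{\mu\rho}g_{\rho\nu} \] i.e. the metric raising one of its own indices.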
  15. No, no typo. It is actually a theorem (or lemma, etc) of tensor calculus that the gradient wrt contravariant coordinates is itself covariant. It is just a fortunate notational coincidence that the "sub" position in the derivative symbol seems to suggest that. Proof:
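A minimal sketch of the standard chain-rule argument: under a change of coordinates \( x\rightarrow x' \), \[ \frac{\partial\phi}{\partial x'^{\mu}}=\frac{\partial x^{\nu}}{\partial x'^{\mu}}\frac{\partial\phi}{\partial x^{\nu}} \] so the components \( \partial\phi/\partial x^{\mu} \) transform with \( \partial x^{\nu}/\partial x'^{\mu} \), which is precisely the defining transformation law of covariant components.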
  16. It would. !!! The 1/2 factor doesn't change Lorentz invariance of the metric measure, but it's quite essential to the formalism that comes later. One would think we're done with negative energies/frequencies, and such. But no. They keep biting our buttocks later with the Fourier transform. That's where the Stueckelberg-Feynman prescription for antiparticles comes in.
  17. You want energies to be positive. As \( k^{0} \) (the zeroth component of the 4-momentum) is the energy component, all states must be decreed to have zero amplitude for negative \( k^{0} \). That's achieved by the step function trick. You missed a well-known trick for delta "functions"... The delta function satisfies, \[ \delta\left(f\left(x\right)\right)=\sum_{x_{k}\in\textrm{zeroes of }f}\frac{\delta\left(x-x_{k}\right)}{\left|f'\left(x_{k}\right)\right|} \] for any continuous variable \( x \) and "any" well-behaved function \( f \) of that variable. Taking \( k^{0} \) as the variable and, \[ f\left(k^{0}\right)=\left(k^{0}\right)^{2}-\left(\omega_{\mathbf{k}}\right)^{2} \] as the function, you get, \[ \delta\left(\left(k^{0}\right)^{2}-\left(\omega_{\mathbf{k}}\right)^{2}\right)=\frac{1}{2\omega_{\mathbf{k}}}\left[\delta\left(k^{0}-\omega_{\mathbf{k}}\right)+\delta\left(k^{0}+\omega_{\mathbf{k}}\right)\right] \] And that's why you need the step function: to kill the unphysical \( k^0 \)'s. Negative energies do appear again in the expansion of the space of states, but they're dealt with in a different manner. This is just to define the measure for the integrals. All kinds of bad things would happen if we let those frequencies stay. I'm sure there are better explanations out there. But the delta identity is crucial to see the point.
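Putting the step function and the delta together, if I'm not misremembering, the upshot is the invariant measure \[ \int d^{4}k\,\theta\left(k^{0}\right)\delta\left(k^{2}-m^{2}\right)F\left(k\right)=\int\frac{d^{3}\mathbf{k}}{2\omega_{\mathbf{k}}}F\left(\omega_{\mathbf{k}},\mathbf{k}\right) \] with \( \omega_{\mathbf{k}}=\sqrt{\mathbf{k}^{2}+m^{2}} \). That's where the \( 1/2\omega_{\mathbf{k}} \) factor comes from.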
  18. Absolutely. It's an erratum. The potential is quadratic, so it's the force that's linear. Close to equilibrium the Taylor expansion of the potential must start at quadratic order, as at the equilibrium position the gradient (the force) must be zero. So the next-to-zeroth-order term for the force is proportional to \( V''(x_{0}) \): \( V(x)=V(x_{0})+\frac{1}{2}V''(x_{0})(x-x_{0})^{2}+\ldots \)
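A two-line sympy illustration, with a made-up potential that has a stable equilibrium at the origin (any such potential would do):

import sympy as sp

x = sp.symbols('x')
V = -sp.cos(x) + x**2/4            # made-up potential with a stable equilibrium at x0 = 0
print(sp.diff(V, x).subs(x, 0))    # 0: no linear (force) term at equilibrium
print(sp.series(V, x, 0, 4))       # -1 + 3*x**2/4 + O(x**4): the leading correction is quadratic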
  19. Reminds me of this priceless piece of comedy: Sigh
  20. Yes, very much so. I'm re-reading what I said as well as your comment that motivated it. I was kinda losing track of what I was trying to say, and thinking 'why the hell did I mention QFT?' And (after re-reading) I see it's because you said, The reason I mentioned QFT (or the SM as a particular case) is because I wanted to point out that sometimes, even though force and mass are not pillars of the theory, you still have to do a lot of work with this mass, so it's very far from disappearing from most considerations. But it's not like the theory is telling you what this parameter actually is or does.
  21. SM is but one particular QFT. Are you sure you're not splitting hairs here? SM cannot be "in addition to" QFT, in the same way that the statistical mechanics of an Ising magnet is not "in addition to" statistical mechanics: it's given by a particular choice of Hamiltonian within the general procedures of quantum statistical mechanics. SM is QFT under a particular choice of Lagrangian, including gauge groups, global symmetry groups, and Higgs multiplets. Unless I overlooked an essential point you made, which is certainly possible, especially of late.
  22. Ultimately it's a major SM issue, I think. But there are very general arguments in QFT in which Yang-Mills pretty much appears as the only interesting generalisation of gauge invariance. So what I mean, I suppose, is that from QFT to SM there's "just" (ahem) a choice of symmetry groups, generations, and mixing parameters. As a very wise expert in QFT would put it, nothing fundamentally different from the general principles of QFT, conveniently generalised.
  23. That's certainly what happens in GR. In QFT I think the process is much more painful. The theory is not force-based either, but we must start with mass being a parameter that discriminates between different types of fields (massless vs massive). But the physical mass (inertia) becomes more of a dynamical attribute that depends on the state and has to be calculated perturbatively. And there is no explanation for the spectrum of masses.