Everything posted by joigus

  1. No, it was a joke. I thought the emoji had given it away. I'm sorry it didn't, and you took it seriously. Value judgement is fallacious. Like "your analogy is false", "I think you're wrong". That's what I was mocking. Funny (and, may I say, telling) that you consider it a contest. It's not. Nor should it be. You are just wrong, or you sound to me very much like you are in what seems to be your interpretation of tensors. Take this example: flat \( \mathbb{R}^{2} \) (the plane). All of this should be self-explanatory. \[ ds^{2}=dx^{i}g_{ij}dx^{j}=\left(\begin{array}{cc} dr & d\theta\end{array}\right)\left(\begin{array}{cc} 1 & 0\\ 0 & r^{2} \end{array}\right)\left(\begin{array}{c} dr\\ d\theta \end{array}\right)=dr^{2}+r^{2}d\theta^{2} \] \[ ds^{2}=dx_{i}g^{ij}dx_{j}=\left(\begin{array}{cc} dr & r^{2}d\theta\end{array}\right)\left(\begin{array}{cc} 1 & 0\\ 0 & r^{-2} \end{array}\right)\left(\begin{array}{c} dr\\ r^{2}d\theta \end{array}\right)=dr^{2}+r^{2}d\theta^{2} \] \[ ds^{2}=dx_{i}\left.\delta^{i}\right._{j}dx^{j}=\left(\begin{array}{cc} dr & r^{2}d\theta\end{array}\right)\left(\begin{array}{cc} 1 & 0\\ 0 & 1 \end{array}\right)\left(\begin{array}{c} dr\\ d\theta \end{array}\right)=dr^{2}+r^{2}d\theta^{2} \] In fact, \( g_{ij} \) and \( g^{ij} \) will give you more than you bargained for: they will give you a spurious singularity that isn't there at all. There's nothing wrong at \( r=0 \). The covariant methods tell you that. But the form of the once-covariant, once-contravariant components of the same silly little thing gives you a clue, as \( \left.\delta^{i}\right._{j} \) tells you clearly that nothing funny is going on at that point. The Kronecker delta, in this case, is more honest-to-goodness than the other ones. If you calculate the Riemann, of course, it will tell you that beyond any doubt (there's a quick sympy check of exactly this at the end of this list). It's at that level that you can talk about anholonomy and curvature. I'm sure you know all that from what we've talked about before. These are cautionary tales that are in the literature. Another possibility is that we didn't understand each other's point. One can never be sure. And I apologise to @Genady. For this was really his thread about variational derivatives, and it wasn't about curvature at all. 🤷‍♂️
  2. No, yours is. Wanna know of a better analogy? Mine. Mine is better. It's been 50 years since I last did this.
  3. That's all I wanted to hear, really. If you want to discuss GR, I suggest you open a new thread to do so. Discussing topological aspects of GR on a thread about index gymnastics in flat QFT would be highly misleading. One can obtain Shakespeare's Sonnets from the alphabet, but I see no Shakespeare in ABCDEFGHIJKLMNOPQRSTUVWXYZ. Do you? It's what you do with the alphabet that matters. I can do nothing like what Shakespeare did. The same happens with the metric. Yours is actually a common misconception. I'm just saying.
  4. What curvature? We're talking about a pseudo-Euclidean (flat) metric here. This is QFT: \( g_{\mu\nu}=\eta_{\mu\nu} \). I think you mean in GR. But even in GR, the metric tensor does not encode curvature. The Riemann tensor does. And the metric tensor is covariantly constant. Because it is (the one-line check is spelled out at the end of this list). That should give us a clue. It doesn't really encode much, does it? This is a common misconception, that the metric tensor components "encode" something. In terms of tetrads it's very clear that it's nothing but the identity operator. What's perplexing is that covariantly shifting with the Christoffels obtained from it around an infinitesimally small closed loop gives you something else, but that's a different story.
  5. No problem. By the way, I should have written, \[ \partial^{\mu}\left(\sum_{n}\frac{\partial\mathcal{L}}{\partial\left(\partial^{\mu}\phi_{n}\right)}\partial_{\nu}\phi_{n}-g_{\mu\nu}\mathcal{L}\right)=0 \] That is correct.
  6. Nowhere. You made no mistake. None whatsoever. You are just realising that Schwartz made a mistake, not you. So it was a typo. He probably meant to write something like, \[ \partial^{\mu}\left(\sum_{n}\frac{\partial\mathcal{L}}{\partial\left(\partial_{\mu}\phi_{n}\right)}\partial_{\nu}\phi_{n}-g_{\mu\nu}\mathcal{L}\right)=0 \] which is correct, and consistent with your derivation. That's the problem with books that don't follow the covariant/contravariant convention. Index gymnastics does that for you automatically. Sorry, I thought I'd told you:
  7. As matrices, they are. But as tensors, they aren't. They are one and the same basis-independent object, coding the same physical information. This is exactly the same as a vector looking, e.g., like the matrix (0 1 0 0) in one basis but like (0 -1 0 0) in another basis. You are confusing the tensor with its coordinates. IOW: all metrics, no matter the dimension and signature, look exactly like the identity matrix (the Kronecker delta) when the scalar product is expressed under the convention that the first factor is written in covariant components and the second one in contravariant ones. A tensor is a physical object. A matrix is just a collection of numbers used to represent that object. Let me put it this way: \[ U_{\mu}V^{\mu}=U_{\mu}\left.\delta^{\mu}\right._{\nu}V^{\nu}=U^{\mu}\left.\delta_{\mu}\right.^{\nu}V_{\nu}=U^{\mu}g_{\mu\nu}V^{\nu}=U_{\mu}g^{\mu\nu}V_{\nu} \] (There's a quick numerical check of this chain of equalities at the end of this list.)
  8. Oh. Got you. Yes, you're right. It should be what you say. Classic books in QFT tend to be rather fast-and-loose with the indices. \( \partial_{\nu}\mathcal{L}=\partial_{\mu}\left(g_{\mu\nu}\mathcal{L}\right) \) is not a tensor equation. \( \partial_{\nu}\mathcal{L}=\partial_{\mu}\left(\left.\delta^{\mu}\right._{\nu}\mathcal{L}\right) \) is. Although I should say there is no fundamental difference between \( g \) and \( \delta \) really. \( \delta \) is just \( g \) (viewed as just another garden-variety tensor) with an index raised (by using itself). Bogoliubov is similarly cavalier with the indices if I remember correctly. Is it from the 50's?
  9. No, no typo. It is actually a theorem (or lemma, etc.) of tensor calculus that the gradient with respect to the contravariant coordinates is itself covariant. It is just a fortunate notational coincidence that the "sub" position in the derivative symbol seems to suggest that. Proof: the usual chain-rule argument, sketched at the end of this list.
  10. It would, indeed! The 1/2 factor doesn't change the Lorentz invariance of the integration measure, but it's quite essential to the formalism that comes later. One would think we're done with negative energies/frequencies and such. But no. They keep biting our buttocks later, with the Fourier transform. That's where the Stueckelberg-Feynman prescription for antiparticles comes in.
  11. You want energies to be positive. As \( k^{0} \) (the zeroth component of the 4-momentum) is the energy, all states with negative \( k^{0} \) must be decreed to have zero amplitude. That's achieved by the step function trick. You missed a well-known trick for delta "functions"... The delta function satisfies, \[ \delta\left(f\left(x\right)\right)=\sum_{x_{k}\in\textrm{zeroes of }f}\frac{\delta\left(x-x_{k}\right)}{\left|f'\left(x_{k}\right)\right|} \] for any continuous variable \( x \) and "any" well-behaved function \( f \) of such a variable. Taking \( k^{0} \) as your variable and, \[ f\left(k^{0}\right)=\left(k^{0}\right)^{2}-\left(\omega_{\mathbf{k}}\right)^{2} \] as your function, you get, \[ \delta\left(\left(k^{0}\right)^{2}-\omega_{\mathbf{k}}^{2}\right)=\frac{\delta\left(k^{0}-\omega_{\mathbf{k}}\right)+\delta\left(k^{0}+\omega_{\mathbf{k}}\right)}{2\omega_{\mathbf{k}}} \] And that's why you need the step function: to kill the unphysical \( k^{0} \)'s. Negative energies do appear again in the expansion of the space of states, but they're dealt with in a different manner. This is just to define the measure for the integrals (the resulting standard measure is written out at the end of this list). All kinds of bad things would happen if we let those frequencies stay. I'm sure there are better explanations out there. But the delta identity is crucial to see the point.
  12. Absolutely. It's an erratum. The potential is quadratic, so it's the force that's linear. Close to equilibrium the Taylor expansion of the potential must start at quadratic order, as the gradient (minus the force) vanishes at the equilibrium position. So the next-to-zeroth-order term for the force is proportional to \( V''\left(x_{0}\right) \): \[ V\left(x\right)=V\left(x_{0}\right)+\frac{1}{2}V''\left(x_{0}\right)\left(x-x_{0}\right)^{2}+\ldots \] (The corresponding force expansion is written out at the end of this list.)
  13. Reminds me of this priceless piece of comedy: Sigh
  14. Yes, very much so. I'm re-reading what I said as well as your comment that motivated it. I was kinda losing track of what I was trying to say, and thinking 'why the hell did I mention QFT?' And (after re-reading) I see it's because you said, The reason I mentioned QFT (or the SM as a particular case) is because I wanted to point out that sometimes, even though force and mass are not pillars of the theory, you still have to do a lot of work with this mass, so it's very far from disappearing from most considerations. But it's not like the theory is telling you what this parameter actually is or does.
  15. SM is but one particular QFT. Are you sure you're not splitting hairs here? SM is no more something in addition to QFT than the statistical mechanics of an Ising magnet is something in addition to statistical mechanics: the latter is given by a particular choice of Hamiltonian within the general procedures of quantum statistical mechanics. SM is QFT under a particular choice of Lagrangian, including gauge groups, global gauge groups, and Higgs multiplets. Unless I overlooked an essential point you made, which is certainly possible, especially of late.
  16. Ultimately it's a major SM issue, I think. But there are very general arguments in QFT in which Yang-Mills pretty much appears as the only interesting generalisation of gauge invariance. So what I mean, I suppose, is that from QFT to SM there's "just" (ahem) a choice of symmetry groups, generations, and mixing parameters. A very wise expert in QFT would call it nothing fundamentally different from the general principles of QFT, conveniently generalised.
  17. That's certainly what happens in GR. In QFT I think the process is much more painful. The theory is not force-based either, but we must start with mass being a parameter that discriminates between different types of fields (massless vs massive). But the physical mass (inertia) becomes more of a dynamical attribute that depends on the state and has to be calculated perturbatively. And there is no explanation for the spectrum of masses.
  18. I should have said Newton's 2nd law, obviously. I have this kind of dyslexic-like glitch that makes me do that, very much like @studiot's problem with the typing. Yes, that's exactly what I meant. Now, if all forces of Nature were like that, I wouldn't find it surprising at all. After all, the word "surprise" has to do with contrast in comparison to previous experience, or inference from that. Electricity is not like that, nor is any other interaction. Hence the word "amazed". True. In fact Newton used Kepler's third law to guess his inverse-square law (the two-line argument is at the end of this list). Textbooks generally point out that the power law is implied. But the equivalence principle is too. The thing is, because the mass on the receiving end of the gravitational interaction (not the mass as a source) disappears from all the physics, it is almost inescapable that the distortion that a source introduces around it can be described in some geometric way, as a distortion of space-time itself. I think this is amazing even after one learns about GR.
  19. Exactly (my highlight in bold red). This is at no detriment to the use that @Janus and @Genady here in particular (and most physicists elsewhere as well) have given to Newton's third law as an equation. Any definition, identity, or formula can be postulated as an equation the moment one gives numerical values to any of the terms involved, or values in terms of further parameters. So, for example: \( \sin^{2}a+\cos^{2}a=1 \) is an identity. It says something obvious. A further substitution, e.g. \( \sin^{2}a=1/2 \), makes it an equation.
  20. They are not defined separately. Not operationally, at least. They are inextricably linked, and hidden assumptions operate between both concepts, as I will try to show. Consider \( F=ma=\left(2m\right)\left(a/2\right)=\left(3m\right)\left(a/3\right)=\ldots \) For different test masses \( m \), \( 2m \), \( 3m \), etc. we measure, to good approximation, imparted accelerations \( a \), \( a/2 \), \( a/3 \), etc. for a fixed spring of given elastic constant. This measures \( F \). There is a hidden assumption here. Namely: masses are additive, and so I will be able to stick together identical test objects and assume they operate in Newton's law as twice the mass, three times the mass, etc. Something by no means obvious. In fact we know that to be false from relativity. OTOH, for a fixed \( m \) and a fixed direction, we apply different sets of springs by hooking them together (in parallel, so that the spring constants are additive) and measure, to good approximation, that \( m=F/a=2F/\left(2a\right)=3F/\left(3a\right)=\ldots \) Mind you, this also assumes something about connecting springs together. This measures \( m \). I don't think this is what one does to measure either mass or force, but it's introduced in mechanics books from the '50s and '60s or so. I would think methods based on displacement from an equilibrium position would be more accurate. But I'm not sure. Even in that case, hidden assumptions about mass and force are operating there that boil down to additivity, I'm sure.
     In fact, the whole lot of Newton's mechanics can be put more simply in terms of \( F=ma \) plus a principle of additivity (external transitivity, if you like) and sub-divisibility (internal transitivity, if you like). Then it becomes just one law, instead of three, plus this principle that Newton's laws are to be applied to any level of integrating sub-parts or conglomerates of parts:
     \( F=ma \) (first and only law)
     1) System is free: \( F=0\Rightarrow a=0\Rightarrow v=\textrm{constant} \) (first law)
     2) System is free ( \( F=0 \) ) but made of sub-parts 1 and 2: \( F=F_{12}+F_{21}=0\Rightarrow F_{21}=-F_{12} \) (third law; a momentum version is spelled out at the end of this list)
     And the principle (not so hidden, but explicitly stated throughout history) that certain "magical" frames of reference exist (inertial frames) where the conglomerate of all the parts can be looked upon as free, and ultimately all the dynamics can be analysed in terms of internal forces. Laboratory -> Earth -> Solar System -> etc.
     The traditional exposition, with its three laws, inertial systems, inertia, and the whole shebang, is very good to get started, but it only obscures these very simple principles, IMO: \( F=ma \) (a very, very strong assumption that we can isolate interactions into action on something, \( F \), and reaction to that action, \( -ma \), and ultimately bound to be false, as we know), plus applicability both to sub-parts and super-parts.
  21. (My emphasis.) There it is. There's your standard of force, if you want to make the definition operational. The fixed spring is your standard of force. It's actually inescapable that one needs the other, as \( F=ma \) involves both, and neither \( F \) nor \( m \) is a primitive concept with a direct observational interpretation, like time or space have. I see no a priori reason to rule out a more complicated mathematical dependence like, say, \( F=\left(m+C_{2}m^{2}+C_{3}m^{3}+\ldots\right)a \) with \( m \) being the additive parameter representing the "amount of stuff", \( F \) being our standard spring, \( C_{2} \), \( C_{3} \), etc. very small coefficients under a wide range of dynamical conditions, and the other force laws that we know and love later accommodating this complicated dependence. I'd challenge anybody to provide a robust argument why it cannot be that way. Newton's choice is very sound and very natural, and harmonises wonderfully with symmetries, known behaviours, etc., but I see no a priori reason why it should be that way, and not some other. I don't essentially disagree with anything Janus has said, by the way.
  22. Oh I get it. Is it 'beautiful', 'praised', 'commendable'? I've read different translations.
  23. Well, good point. It's a bit subtler than what I said. I do remember an operational definition similar to what you say in Mechanics by Keith R. Symon. But even for your operational definition of force, you need to set a unit of mass. So it's kind of circular. Mass helps you define force, while you need a standard of force to define mass. They're tied to each other, really. Let me put it in my words: if you think it makes sense to fix a standard of force independent of anything it acts on, to that extent, you can define mass. If you think it makes sense to fix a standard of mass independent of the force that acts on it, then you can define force. It could be more complicated. It could be that there is no way to abstract the 'push or pull' that you exert on a body from the parameters that define the body. Maybe it's something closer to what I called a formula (mathematicians use that distinction, I know). What it is not is an equation, unless you use the formula to plug in numbers and solve for the unknown, of course.
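A footnote to post 1 above (the polar-coordinate example): if anyone wants the brute-force check, here is a quick sympy sketch of mine (nothing canonical about it; the coordinate names and index conventions are just choices) that builds the Christoffels from \( g=\mathrm{diag}(1,r^{2}) \) and confirms that every component of the Riemann tensor vanishes, the spurious behaviour of \( g_{ij} \) and \( g^{ij} \) at \( r=0 \) notwithstanding.

```python
import sympy as sp

# Flat R^2 in polar coordinates: g_ij = diag(1, r^2).
r, th = sp.symbols('r theta', positive=True)
x = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])
ginv = g.inv()
n = 2

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
Gamma = [[[sp.simplify(sum(ginv[a, d]
                           * (sp.diff(g[d, c], x[b])
                              + sp.diff(g[d, b], x[c])
                              - sp.diff(g[b, c], x[d])) / 2
                           for d in range(n)))
           for c in range(n)]
          for b in range(n)]
         for a in range(n)]

# Riemann tensor R^a_{bcd} = d_c Gamma^a_{bd} - d_d Gamma^a_{bc}
#                            + Gamma^a_{ce} Gamma^e_{bd} - Gamma^a_{de} Gamma^e_{bc}
def riemann(a, b, c, d):
    expr = sp.diff(Gamma[a][b][d], x[c]) - sp.diff(Gamma[a][b][c], x[d])
    expr += sum(Gamma[a][c][e] * Gamma[e][b][d]
                - Gamma[a][d][e] * Gamma[e][b][c] for e in range(n))
    return sp.simplify(expr)

# Every component is identically zero: the metric components misbehave at r = 0,
# the curvature does not care.
print(all(riemann(a, b, c, d) == 0
          for a in range(n) for b in range(n)
          for c in range(n) for d in range(n)))   # -> True
```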
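Also, the one-liner behind "the metric tensor is covariantly constant" in post 4, just so it's on record (standard Levi-Civita connection assumed): \[ \nabla_{\lambda}g_{\mu\nu}=\partial_{\lambda}g_{\mu\nu}-\Gamma_{\lambda\mu}^{\sigma}g_{\sigma\nu}-\Gamma_{\lambda\nu}^{\sigma}g_{\mu\sigma}=0 \] which follows immediately by plugging in \( \Gamma_{\lambda\mu}^{\sigma}=\frac{1}{2}g^{\sigma\rho}\left(\partial_{\lambda}g_{\rho\mu}+\partial_{\mu}g_{\rho\lambda}-\partial_{\rho}g_{\lambda\mu}\right) \): the two Christoffel terms add up to exactly \( \partial_{\lambda}g_{\mu\nu} \). The tetrad statement I was alluding to is \( g_{\mu\nu}=\left.e^{a}\right._{\mu}\left.e^{b}\right._{\nu}\eta_{ab} \).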
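A purely numerical illustration of the chain of equalities in post 7, with the Minkowski metric in the \( (+,-,-,-) \) convention and some made-up components for \( U \) and \( V \) (the numbers mean nothing; it's just a sanity check that all four expressions produce the same scalar):

```python
import numpy as np

# Minkowski metric, signature (+, -, -, -)
g = np.diag([1.0, -1.0, -1.0, -1.0])   # g_{mu nu}
g_inv = np.linalg.inv(g)               # g^{mu nu}
delta = np.eye(4)                      # delta^mu_nu

U_up = np.array([2.0, 1.0, 0.0, 0.0])  # U^mu  (made-up components)
V_up = np.array([3.0, 0.0, 1.0, 0.0])  # V^mu

U_down = g @ U_up                      # U_mu = g_{mu nu} U^nu
V_down = g @ V_up                      # V_mu

print(U_down @ V_up)             # U_mu V^mu
print(U_down @ delta @ V_up)     # U_mu delta^mu_nu V^nu
print(U_up @ g @ V_up)           # U^mu g_{mu nu} V^nu
print(U_down @ g_inv @ V_down)   # U_mu g^{mu nu} V_nu
# All four lines print 6.0: same scalar, four different-looking "matrices".
```

Change the components (or the metric, as long as the inverse and the lowered components are recomputed from it) and the four numbers still agree, which is the whole point: the matrices differ, the contraction does not.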
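The proof promised in post 9 is the usual chain-rule one-liner. Under a change of coordinates \( x^{\mu}\to x'^{\mu}\left(x\right) \), \[ \frac{\partial}{\partial x'^{\mu}}=\frac{\partial x^{\nu}}{\partial x'^{\mu}}\frac{\partial}{\partial x^{\nu}} \] i.e. \( \partial'_{\mu} \) transforms with the Jacobian \( \partial x^{\nu}/\partial x'^{\mu} \), which is precisely the transformation law of covariant components; contravariant components transform with the inverse Jacobian \( \partial x'^{\mu}/\partial x^{\nu} \).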
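For completeness, the standard result that post 11 (and the 1/2 factor of post 10) is building towards, with the \( (+,-,-,-) \) signature and \( \omega_{\mathbf{k}}=\sqrt{\mathbf{k}^{2}+m^{2}} \): \[ \int d^{4}k\,\theta\left(k^{0}\right)\delta\left(k^{2}-m^{2}\right)F\left(k\right)=\int\frac{d^{3}\mathbf{k}}{2\omega_{\mathbf{k}}}F\left(\omega_{\mathbf{k}},\mathbf{k}\right) \] The step function kills the \( k^{0}=-\omega_{\mathbf{k}} \) root of the delta, and the delta identity leaves behind the \( 1/2\omega_{\mathbf{k}} \), which makes \( d^{3}\mathbf{k}/2\omega_{\mathbf{k}} \) Lorentz-invariant even though \( d^{3}\mathbf{k} \) alone is not.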
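Spelling out the force for post 12, under the same near-equilibrium expansion: \[ F\left(x\right)=-\frac{dV}{dx}=-V''\left(x_{0}\right)\left(x-x_{0}\right)+\ldots \] so the leading term of the force is indeed linear in the displacement, with effective spring constant \( k=V''\left(x_{0}\right) \).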
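The two-line version of the Kepler-to-inverse-square argument mentioned in post 18, taking circular orbits just to exhibit the scaling: \[ a=\omega^{2}r=\frac{4\pi^{2}}{T^{2}}r\,,\qquad T^{2}\propto r^{3}\;\Rightarrow\;a\propto\frac{1}{r^{2}} \] and since the acceleration is independent of the orbiting mass (the equivalence principle again), the force on a mass \( m \) must scale as \( F\propto m/r^{2} \).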
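And the momentum version of item 2) in post 20 (same content, just making the additivity assumption visible): \[ 0=F=\frac{d}{dt}\left(p_{1}+p_{2}\right)=F_{12}+F_{21}\;\Rightarrow\;F_{21}=-F_{12} \] where the first equality says the composite system is free, and identifying \( dp_{1}/dt \) with \( F_{12} \) (the force on part 1 due to part 2) is exactly the step where \( F=ma \) is being applied to the sub-parts.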