joigus last won the day on August 11


About joigus

  • Birthday 05/04/1965

Profile Information

  • Interests
    Biology, Chemistry, Physics
  • Favorite Area of Science
    Theoretical Physics
  • Biography
    I was born, then I started learning. I'm still learning.

  1. You make some good points: the problem admits other possible treatments, probably more realistic, and the real problem is more involved. But I'm not sure that using the Navier-Stokes equations would be the best approach for, e.g., freshman or sophomore students. We don't know the student's level, so... It seems he's been exposed to the --probably simplistic, granted-- formula \( P_{\textrm{ex}}\left(V_{2}-V_{1}\right) \). Because that's the recipe he's going to be responsible for in his assignments and exams, I suppose, trying to draw a simple, intuitive picture of how it works and why is, to a first approximation, IMO the way to go here. There is a million-dollar prize for just proving the existence and uniqueness of solutions to the Navier-Stokes equations; generations of mathematicians have failed to solve them in general. On the other hand, H and S are quite abstract in comparison to P and V, which are a lot more intuitive. Should we start teaching gas behaviour with the van der Waals equation? It's certainly closer to how real gases behave; it holds the key to phase transitions, the triple point of water, and so on. But generations of students have come to understand gases first through the concept of an ideal gas for a reason. As to using \( PV^{k} \), I'm all in favour of it for general quasi-static processes, when the system has a well-defined P. When the system is out of equilibrium and adjusting to a new equilibrium, I'm not sure it's the right approach from the conceptual point of view, unless you know of a way to derive an effective P and an effective k from the Navier-Stokes equations in out-of-equilibrium situations. If such a method exists, I'm not aware of it, and I would love to learn about it to the extent of my abilities, which are quite limited, I must say.
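For what it's worth, here is a minimal numeric sketch of the two recipes side by side (all numbers are my own illustrative choices, not from the thread): the \( P_{\textrm{ex}}\left(V_{2}-V_{1}\right) \) formula for expansion against a constant external pressure, against the quasi-static isothermal work between the same two end states of an ideal gas.

```python
# Compare the freshman-level recipe W = P_ex*(V2 - V1) with the quasi-static
# isothermal work between the same end states of an ideal gas.
# All numbers are illustrative choices of mine, not from the discussion.
import math

R = 8.314      # J/(mol K)
n = 1.0        # mol
T = 300.0      # K
P_ex = 1.0e5   # Pa, constant external pressure
V2 = n * R * T / P_ex     # final volume: the gas equilibrates at P_ex
V1 = V2 / 2.0             # assumed initial volume (so P1 = 2*P_ex > P_ex)

W_irrev = P_ex * (V2 - V1)               # irreversible expansion
W_rev = n * R * T * math.log(V2 / V1)    # quasi-static isothermal expansion

print(W_irrev, W_rev)   # the reversible route extracts more work
```

The point of the comparison is the inequality, not the particular numbers: for an expansion, the quasi-static route always extracts at least as much work as the irreversible one.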
  2. Thank you. You don't need to explain yourself. You're very welcome. The fact that professional scientists have thought in similar terms indicates that the idea is not silly at all. You did express it in a non-standard way, though, and I was a bit confused.
  3. Quite the opposite: https://www.livescience.com/9090-religion-people-happier-hint-god.html Life becomes easier when you are a believer, especially if you believe in whatever your neighbours declare to believe. Try being a Christian in Yemen! As a believer you fit in, you don't have to think about ethical problems --you've got it all written down for you-- and you have a picture of a fancy wonderland to go to when your time on this planet is up. What's harder, like anything that's worth anything in this life, is facing things objectively and without bias, and learning from the process. That's incomparably harder, and brings incomparably more good to this world.
  4. This is known not to be the case. The reason is that in order to have chaotic systems in classical mechanics, you need very little: 1) high sensitivity to initial conditions; 2) mixing of trajectories (so-called topological mixing), which means that any "patch" of possible initial conditions ends up --through evolution-- spreading over the whole space of possible dynamical states (phase space). Essentially any dynamical system governed by non-linear equations (which, to all intents and purposes, means any realistic dynamical system) with more than two degrees of freedom --degrees of freedom being the number of independent coordinates necessary to describe it-- can be chaotic. Quantum systems, on the contrary, are inevitably non-chaotic, as the Schrödinger equation is always linear (the superposition, or sum, of two possible motions is also a possible motion). There is a connection between the two regimes, though, which manifests itself through quantum scarring. Also, indeterminism in large chaotic systems does not come from the quantum: quantum fluctuations are negligible for planetary motion, yet the 3-body problem already displays chaos, even though quantum mechanics can be safely ignored in that context. So, for all we know, even if the world were classical --and not quantum-- the slightest complexity in the dynamics would imply chaotic behaviour.
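Sensitivity to initial conditions is easy to see numerically. A minimal sketch, using the logistic map x → r·x·(1−x) at r = 4 --a standard textbook chaotic example; the map and the parameter choices are mine, not something from the discussion above:

```python
# Two orbits of the chaotic logistic map, started a distance 1e-10 apart.
# They remain indistinguishable for a few steps, then decorrelate completely.
def logistic_orbit(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-10)   # perturb the 10th decimal place

gap_early = abs(a[5] - b[5])                                # still tiny
gap_late = max(abs(u - v) for u, v in zip(a[40:], b[40:]))  # order one
print(gap_early, gap_late)
```

The separation grows roughly exponentially (positive Lyapunov exponent) until it saturates at the size of the attractor, which is the operational meaning of "high sensitivity to initial conditions".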
  5. I don't understand the question. Chaos is a qualitative property. It's not a number. What do you mean it "fluctuates"?
  6. Here's my analysis of the situation. I did this many years ago to help myself understand the details of this --very well known but quite academic, and at the same time somewhat puzzling-- example. Diagrammatics: Obviously, there must be an initial pressure in the gas (P1) in excess of the environment's pressure (P1 > Pex); otherwise the gas wouldn't expand --let's assume it is an expansion. One problem is that this pressure is not mentioned in the statement. That's one reason why it's confusing. There is, of course, a fiducial isotherm which goes through the point (P1,V1), and we're not going to use it either. Isotherms 1 and 2 are there just as a reference --two coordinate curves on a P,V diagram, if you will-- but they play no role in the problem. They would look like straight lines on a P,T diagram, and like hyperbolae on a P,V diagram. That's perhaps another reason why it's confusing. Now, very important: the straight line in red does not represent the actual trajectory of the system on the P-V plane. The system jumps off the P,V,T surface of possible equilibrium states, the reason being that it goes through a series of states that are not equilibrium states. The actual evolution would look something like the orange line that joins (P1,V1) and (P2,V2). So @sethoflagos has a point, if I understood them correctly: something is going on of which we're given no account. What happens in between? That's the final and most important reason why discussing the "in-between non-thermodynamic states" is so confusing. During this time the system is no longer described by equilibrium thermodynamics, and sure enough time starts playing a role. We must appeal to a mix of thermodynamical and mechanical arguments, if only to understand qualitatively what's going on. Here's what's going on: if the initial and final internal energies of the gas are, respectively,
U1 and U2, we have an energy tradeoff that looks like: \[ U\left(t\right)=U_{1}+\delta U\left(t\right)+\delta K_{\textrm{piston}}\left(t\right) \] But it stands to reason that the energy balance can be expressed as a function of the initial and final states only, because the "mechanical boundary conditions", if you will, are constrained by the thermodynamics of the problem. The air from the environment acts as an inexhaustible reservoir of pressure, for lack of a better word. Both the external air and the internal pressure act on the piston, increasing its kinetic energy as they go. They counteract each other, but don't exactly cancel: \[ \delta W_{\textrm{gas}\rightarrow\textrm{piston}}=-\int P_{\textrm{gas}}\left(x,t\right)dV_{\textrm{gas}}\left(t\right) \] \[ \delta W_{\textrm{ex}\rightarrow\textrm{piston}}=-\int P_{\textrm{ex}}\left(x,t\right)dV_{\textrm{ex}}\left(t\right)=+\int P_{\textrm{ex}}\left(x,t\right)dV_{\textrm{gas}}\left(t\right) \] \[ \delta U_{\textrm{gas}}\left(t\right)=+\int P_{\textrm{ex}}\left(x,t\right)dV_{\textrm{gas}}\left(t\right)-\int P_{\textrm{gas}}\left(x,t\right)dV_{\textrm{gas}}\left(t\right)=-\int\left[P_{\textrm{gas}}\left(x,t\right)-P_{\textrm{ex}}\left(x,t\right)\right]dV_{\textrm{gas}}\left(t\right) \] Finally, the piston stops. How does it do that? The gas, no doubt, overshoots a little --see diagram--, so even though the reservoir, to a good approximation, stays where it is thermodynamically, locally it exerts an excess pressure that takes the piston to its final position. So, initially, the piston works against the air with its local pressure field P(x,t) averaged over the piston's surface; but in the end the air works against the piston, compensating for the overshoot and leaving the piston at its final rest position. We must also assume that the walls let heat go in and out.
So, \[ K_{\textrm{piston}}\left(2\right)=0 \] No matter how complicated the details are, the overall effect of the air is to compensate for the imbalance in the work terms, delivering that imbalance back to the gas, so to speak, in such a way that, energetically, it's as if the pressure Pex had been the only pressure doing work all along. Final note: the proposed curve is, of course, a simplification. Strictly, the gas doesn't have a thermodynamic pressure, as @studiot and I have pointed out. But even so we can talk, I think, about an average pressure that the gas exerts on the piston near its surface. That's the pressure I'm talking about when I write things like Pgas(t)
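The claim that the messy details of the stopping don't matter energetically can be sketched numerically. The following toy model is entirely my own construction (an isothermal ideal gas under a damped piston; every parameter is invented for illustration): whatever the damping constant --standing in for all the irreversible dynamics-- the piston settles at rest where P_gas = P_ex, so the work done against the surroundings always comes out as P_ex(V2 − V1).

```python
# Damped piston of mass m and area A over an isothermal ideal gas (P*V = nRT),
# expanding against a constant external pressure P_ex. The damping constant c
# models the irreversible dynamics; its value should not affect the energetics.
def settle(c, m=1.0, A=0.01, nRT=250.0, P_ex=1.0e5, x0=0.125, dt=1e-5):
    """Integrate until the piston is at rest; return the final gas volume."""
    x, v = x0, 0.0                       # piston position [m], velocity [m/s]
    for _ in range(2_000_000):
        P_gas = nRT / (A * x)            # isothermal ideal gas
        acc = (A * (P_gas - P_ex) - c * v) / m
        v += acc * dt                    # semi-implicit Euler step
        x += v * dt
        if abs(v) < 1e-6 and abs(P_gas - P_ex) < 10.0:
            break                        # piston at rest, pressures balanced
    return A * x

V1 = 0.01 * 0.125                        # initial volume (P1 = 2*P_ex)
works = [1.0e5 * (settle(c) - V1) for c in (20.0, 100.0)]
print(works)   # same work against the surroundings in both cases
```

Two very different damping strengths give the same work on the surroundings, because the end states --and the constant reservoir pressure-- are the same; only the dissipated share of the energy budget is shuffled around.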
  7. No. Curvature is detected by going from one point to another by applying equal changes in 2 coordinates in different orders, and thereby learning whether the result depends on the path taken across your space. If it does, your space is curved. You can also have curvature in solutions of Einstein's vacuum field equations. It so happens that 20 numbers give you the curvatures in dimension 4. Such is the nature of Riemannian curvature, and that's the curvature we talk about when we do GR. Mind you, you can also define a special curvature for a curve, but that's not Riemannian curvature: it's based on a moving reference frame embedded in a flat space of 3 dimensions. The Riemannian curvature of a curve is 0, because Riemannian geometry is intrinsic --not dependent on an embedding in a higher-dimensional space. Language and intuitive notions by themselves are misleading: a curve is not (intrinsically) curved, a plane is not curved (that one makes sense, and it's true nonetheless), and a cylinder is not curved either. A cone is not curved, except at one point --the apex-- where it has infinite curvature. I can't tell which of these is your curvature, and I don't think that's a good thing. It all sounds like you lack a basic understanding of curvature. A historical note: Einstein found his famous equations only after a long correspondence with David Hilbert and consultation with other mathematicians, and from there he postulated a certain combination of the 20 curvatures of space-time (reducing them to just 10) to be proportional to the different components of the energy-momentum densities. This checks with the 4(4+1)/2 = 10 independent components of the energy-momentum tensor in dimension 4. He achieved that only after several faltering attempts; if my memory serves, it took him the best part of a decade to get there from his initial intuitions. He later proved that in the limit of low velocities and weak gravitational fields you recover Newton's law of gravitation. He predicted light to be bent by gravitational fields. Lastly, he did not coin the name "Einstein field equations"; other people named them after him. To my mind, it belittles the genius of Einstein when people try to emulate or better this mind-blowing feat based on some loose notions and a couple of graphs. Don't take this personally, but at what point are you going to decide it's time to go back to the drawing board? To be more precise: pseudo-Riemannian, because of the difference in sign between time and space in the metric.
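The two counting claims in the note (20 independent Riemann components, 10 for a symmetric rank-2 tensor in dimension 4) follow from standard formulas, which a couple of lines of code can check:

```python
# Standard counting formulas: the Riemann tensor has n^2*(n^2-1)/12
# independent components in n dimensions; a symmetric rank-2 tensor
# (metric, Ricci, energy-momentum) has n*(n+1)/2.
def riemann_components(n):
    return n**2 * (n**2 - 1) // 12

def symmetric_components(n):
    return n * (n + 1) // 2

for n in (2, 3, 4):
    print(n, riemann_components(n), symmetric_components(n))
# n=4 gives 20 curvature components and 10 components for the
# energy-momentum tensor, matching the numbers in the text.
```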
  8. But irreversibility is a premise of the OP. Also, entropy increase doesn't only happen because of heat transfer. Certainly, entropy change will happen whenever there is heat transfer. But it will also happen in situations involving irreversible work, like, e.g., when you stir the fluid with a blender or a rapid fan. That irreversible work leads to entropy increase is revealed by the fact that, after a while, the temperature changes, as this irreversible work is quickly converted to heat. If you wait a couple of seconds, say, and measure the temperature, you'll see that it has decreased (expansion) or increased (compression), and it's no longer possible to know whether this change in temperature came from irreversible work or from heating with, e.g., a Bunsen burner --or from cooling through a wall. It's only that, in this particular example, it's much easier to calculate the work without involving entropy at all, thus reducing the calculational work to a minimum. Now, what you seem to be demanding from the OP is to: 1) express the entropy as a function of V, T, and the number of molecules --or perhaps P, V, and the number of molecules, if it's a P,V,T,N system; 2) calculate the values of S1 and S2, given that S is a state function --it sure is--, and take the difference. Step 1) isn't elementary. It certainly can be done, and could be needed in more complicated irreversible processes. But for expansion against a fixed external pressure, it's more simply done with the method the OP proposed.
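As an aside, for the simplest concrete case --an isothermal irreversible expansion of an ideal gas against a constant external pressure, with numbers of my own choosing-- steps 1) and 2) reduce to a few lines:

```python
# Entropy bookkeeping for an isothermal irreversible expansion of an ideal
# gas against a constant external pressure. S is a state function, so the
# gas's entropy change depends only on (V1,T) -> (V2,T); the surroundings,
# a reservoir at temperature T, lose Q/T. Numbers are illustrative.
import math

R = 8.314            # J/(mol K)
n, T = 1.0, 300.0    # mol, K
P_ex = 1.0e5         # Pa, constant external pressure
V2 = n * R * T / P_ex
V1 = V2 / 2.0

dS_gas = n * R * math.log(V2 / V1)   # state-function route
Q = P_ex * (V2 - V1)                 # heat absorbed (isothermal ideal gas: dU = 0)
dS_surr = -Q / T                     # reservoir at constant T
dS_total = dS_gas + dS_surr
print(dS_gas, dS_surr, dS_total)     # dS_total > 0, as irreversibility demands
```

The positive total is the signature of irreversibility: the gas gains more entropy than the surroundings lose.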
  9. That's not how curvature is defined. Do you know what curvature is?
  10. What does this have to do with a metric in curved space-time? And how could energy-momentum be "depleted" when the particle picks up speed?
  11. Thank you. It seemed too far-fetched to me too. You're right, material from erosion must be orders of magnitude more sizeable for both reasons you point out.
  12. I once heard claims that even such a subtle effect as the change in the Earth's moment of inertia due to seasonal leaf shedding in deciduous forests can have a measurable effect on the Earth's rotation. I don't know how much truth there is behind such claims, or whether the effect is in the milliseconds. Could that be true? It seems like small potatoes in comparison with motions in the mantle, for example.
  13. Exactly! The key fact is your observation that, \[ \frac{\boldsymbol{f}\left(\boldsymbol{r}\left(t\right)\right)}{\left\Vert \boldsymbol{f}\left(\boldsymbol{r}\left(t\right)\right)\right\Vert }=\frac{\boldsymbol{r}'\left(t\right)}{\left\Vert \boldsymbol{r}'\left(t\right)\right\Vert } \] You only have to dot-multiply by \( d\boldsymbol{r}\left(t\right)=\boldsymbol{r}'\left(t\right)dt \), remember the definition of the norm, and you're there. I hope that helps.
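Spelling out that step (under my guess that the target identity is the line-integral formula \( \int\boldsymbol{f}\cdot d\boldsymbol{r}=\int\left\Vert \boldsymbol{f}\right\Vert ds \), which is what the tangency condition above buys you): dotting both sides with \( d\boldsymbol{r}=\boldsymbol{r}'\,dt \),

\[ \frac{\boldsymbol{f}\cdot d\boldsymbol{r}}{\left\Vert \boldsymbol{f}\right\Vert }=\frac{\boldsymbol{r}'\cdot\boldsymbol{r}'}{\left\Vert \boldsymbol{r}'\right\Vert }\,dt=\left\Vert \boldsymbol{r}'\right\Vert dt\quad\Longrightarrow\quad\boldsymbol{f}\cdot d\boldsymbol{r}=\left\Vert \boldsymbol{f}\right\Vert \left\Vert \boldsymbol{r}'\right\Vert dt=\left\Vert \boldsymbol{f}\right\Vert ds \]

using \( \boldsymbol{r}'\cdot\boldsymbol{r}'=\left\Vert \boldsymbol{r}'\right\Vert ^{2} \) (the definition of the norm) and \( ds=\left\Vert \boldsymbol{r}'\right\Vert dt \) (arc length).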
  14. The argument still stands. Orthonormal is a particular case of orthogonal: orthonormal = orthogonal and normalised. More precisely, a gravitational singularity is a region of spacetime in which curvature invariants built from the Riemann tensor blow up --not just components, which are coordinate-dependent. Read carefully @Markus Hanke's previous post. You do not invent the properties of the metric. You postulate the other (non-gravitational) fields. Then you obtain the energy-momentum tensor. Then you symmetrise it (with techniques like, e.g., Belinfante's symmetrisation procedure), because the canonical energy-momentum tensor is generally non-symmetric, and the source of the gravitational field must be symmetric in the space-time indices. Then you postulate boundary conditions, as Markus told you. Then you solve for your metric. Having done all that, you're still not home free, because the particular coordinates you use to solve for the metric can have false singularities, i.e., singularities of your coordinate map that are not physical. So you must obtain the Riemann tensor and try to identify the singularities there. You have a lot of ground to cover before you can meaningfully talk about your singularity. I hope you find the comments here helpful. The metric is not gauge-invariant; it's the Riemann tensor that's gauge-invariant. This is in close analogy to electromagnetism: the vector potential in EM doesn't really give you the physics (except for the Aharonov-Bohm effect, or "holonomy", of the field); infinitely many vector potentials give you the same physics. It's the Faraday tensor plus the holonomy that gives you the complete physics of electrodynamics, and there is only one Faraday tensor (the E's and the B's) defining the physics. Gravity displays remarkable mathematical similarities to EM: it has a huge gauge arbitrariness. In modern GR we say space-time is not defined by a metric, but by an equivalence class of infinitely many metrics, all gauge-equivalent to each other.
The matter is even more subtle. Sometimes you find a coordinate map that solves the equations, but the map has singularities of its own --fictitious ones. Then you introduce a change of coordinate maps that fixes the coordinate singularities; Kruskal-Szekeres coordinates are a well-known example.
  15. OK, I have to come clean at this point and confess I do not know what "being more primitive" means in mathematics. I think it was Poincaré who tried to base all of mathematics on group theory. Another attempt at basing maths on something "primitive" was Felix Klein's Erlangen program to unify geometry. Category theory seems to be yet another attempt at building a really primitive branch of maths. "Primitive" meaning something like "least number of assumptions"? Or perhaps "primitive" means that theory A can be based on concepts derived from theory B, but not the other way around, theory B thereby being more "primitive" than theory A? I'm not sure what mathematicians mean when they say they're trying to refer things to something more primitive.
