What does 'emergent' mean in a physics context (split from Information Paradox)


StringJunky

7 minutes ago, Markus Hanke said:

The principle of least action is a general principle of nature, which applies both in the classical and the quantum domain. It says that a given system will evolve such that the ‘action’ - a quantity which equals the time integral of the Lagrangian of the system (being the difference between kinetic and potential energy) - is extremal, usually taken as being at a minimum. This is equivalent to the Euler-Lagrange equation. Hence, to find the evolution equation of a system, you can first work out its Lagrangian, and then find the extremum of the corresponding action. For example, the Einstein equations emerge in this way from the Hilbert action.

This is amongst the most fundamental and most powerful known principles in physics.
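In symbols (a standard textbook rendering of the statement above, not tied to any particular system): for a single generalised coordinate \(q(t)\),

```latex
S[q] = \int_{t_1}^{t_2} L(q, \dot{q}, t)\,dt , \qquad L = T - V
% Requiring the action to be stationary, \delta S = 0, yields the
% Euler-Lagrange equation:
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0
```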

Thank you very much for a very good explanation of what the principle of least action is. Unfortunately, it doesn't relate to my question. But it's OK. You don't necessarily know what my question relates to.


What my question (see above) relates to can be found in a variety of sources. A "simple", Feynman-style explanation of how the principle of least action emerges in the path-integral picture is in his book "QED: The Strange Theory of Light and Matter". On pp. 42-45 he applies this picture to the example of the reflection of light and "derives" the principle of least action for that case. He concludes, "And that's why, in approximation, we can get away with the crude picture of the world that says that light only goes where the time is least" (p. 45).

More generally, on p.123, "This brings us all the way back to classical physics, which supposes that there are fields and that electrons move through them in such a way as to make a certain quantity least. (Physicists call this quantity “action” and formulate this rule as the “principle of least action.”) This is one example of how the rules of quantum electrodynamics produce phenomena on a large scale."

A more formal derivation is in A. Zee, Quantum Field Theory in a Nutshell. On p. 12, "Applying the stationary phase or steepest descent method ... [to a path integral] we obtain ... the "classical path" determined by solving the Euler-Lagrange equation ... with appropriate boundary conditions." (I've removed the math expressions.)
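The stationary-phase argument Zee sketches can be restored in standard notation (a generic rendering, since the original maths was removed from the quote above):

```latex
Z = \int \mathcal{D}q(t)\, e^{iS[q]/\hbar}
% As \hbar \to 0 the phase oscillates wildly and neighbouring paths
% cancel, except near paths where the phase is stationary:
\delta S[q] = 0 \quad\Longrightarrow\quad \text{the classical (Euler-Lagrange) path}
```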

Edit: After re-reading these sections in the books I got the answer to my question. Thus I don't have any more open questions in this thread. 

 

 

Edited by Genady

6 hours ago, Genady said:

Thank you very much for a very good explanation of what the principle of least action is. Unfortunately, it doesn't relate to my question. But it's OK. You don't necessarily know what my question relates to.

Indeed, I didn’t. Thanks for clarifying. And I had to make an edit to my post, as I was typing it in haste, and it got all muddled up and imprecise.

Yes, it is interesting that the path integral formalism in QFT gives the same results as the action principle; but I’m not sure if this can be considered a derivation. I rather think these are different formulations of the same principle (but I’m open to correction on this)?

Edited by Markus Hanke

  • 3 weeks later...
On 1/10/2022 at 1:51 AM, Markus Hanke said:

The question that arises then is why simple systems should organise themselves into vastly more complex ones. It is interesting to note that the universe at large is a sea of increasing entropy interspersed with islands of low entropy - which is what living ecosystems fundamentally are. Left alone, these islands of low entropy grow and spread about, and it is not immediately obvious why that should be so, given the principle of least action.

Had any more thoughts along those lines?

Or anyone else?


56 minutes ago, dimreepr said:

Why do you ask?

I was discussing entropy on another forum, and I mentioned to another poster there that Markus had addressed this point recently.

I quoted Markus' post there in that other forum (Markus used to be admin there once upon a time) and noticed that he had left his own question hanging.

Three weeks have passed, and perhaps more could be said on the relationship between living processes and the behaviour of entropy.

I think @joigus also said he might reflect on it.

 

 


7 minutes ago, geordief said:

... on the relationship between living processes and the behaviour of entropy

May I ask, what is the question about that relationship?


22 minutes ago, Genady said:

May I ask, what is the question about that relationship?

I am waiting (i.e. hoping) for a reply from Markus in the first instance.

I really only gave that reply out of courtesy to dimreepr, who wanted to know "why do you ask?"

I am sure Markus (and joigus) would have more educated ideas on the subject than myself (if they have any).

 

Edited by geordief

1 hour ago, geordief said:

I think @joigus also said he might reflect on it

Yes. Thank you for your interest. @Markus Hanke said that it's not immediately obvious why or how islands of negative entropy can exist in a universe in which the principle of least action rules (on the one hand), while entropy generally tends to a maximum (universe-wise). I think that's basically Markus' observation that drew my attention.

Then I said that the principle of least action is very straightforward and calculationally useful in practical terms, while obscure at best as to any possible interpretation.

1) What is the action, from a physical point of view? Does it mean anything at all?

2) The principle of least action, in integral form, is all about surfaces. What do these surfaces represent?

I'm working on a couple of drawings explaining what I mean in some detail. But my rough idea is that it's possible that you cannot extend these surfaces at will to include "the system", and nothing but the system. At some point you're messing up entropic information. All this is related to the possibility that the distinction between the system and the rest of the universe ends up meeting some fundamental limitation having to do with entropy.

I'm not even sure this relates very closely with Markus' observation.

But this is going somewhat off-topic. Maybe a split is in order?

Edited by joigus
minor correction

18 hours ago, geordief said:

Had any more thoughts along those lines?

Perhaps I have simply overthought all this - ultimately it comes down to gravity. You start off with an early universe that is basically a homogeneous soup of energy. If you just naively scale this up, then it is difficult to understand how gradients of entropy could evolve.

The problem though is that a state like this is in an unstable equilibrium - even the tiniest fluctuation is enough, and gravity will kick in, drawing the ever-so-slightly denser regions in on themselves, creating inhomogeneities. Over time this leads to what we see now - clusters, galaxies, and stars. And once you get gradients of energy like this, from which work can be extracted, it is no surprise that entropy increases at different rates in different places. It is not a big step from here to local complex systems.

I had made an attempt to connect this to the stationary action principle, but I now think this might have been ill-conceived. The trouble is that the action principle is not well defined for all types of systems; in particular, in most cases it does not apply to dissipative systems, which is what a biosphere would be. So this is problematic. Another issue is that, even if an action principle exists, the action itself might not be unique.
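The dissipative caveat can be made concrete with the textbook example of a damped oscillator. No Lagrangian in \(x\) and \(\dot{x}\) alone, without explicit time dependence or auxiliary variables, reproduces its equation of motion; the Caldirola-Kanai workaround does, but only by putting the time dependence in by hand:

```latex
m\ddot{x} + c\dot{x} + kx = 0
% Caldirola-Kanai Lagrangian (explicitly time-dependent):
L_{\mathrm{CK}} = e^{ct/m}\left( \tfrac{1}{2} m \dot{x}^2 - \tfrac{1}{2} k x^2 \right)
% Its Euler-Lagrange equation reproduces the damped motion above.
```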

So to make a long story short, I no longer think there is necessarily an issue with entropy and action principles.

Whether or not this is sufficient to explain the sheer degree of local complexity we see is another question again.

15 hours ago, joigus said:

The principle of least action, in integral form, is all about surfaces. What do these surfaces represent?

Could you elucidate on this a bit? It’s the variation of a line integral, in the form it’s usually given. But I think I may just be missing the point you are attempting to make.

15 hours ago, joigus said:

All this is related to the possibility that the distinction between the system and the rest of the universe ends up meeting some fundamental limitation having to do with entropy.

This may be related to my above point about not all systems admitting well-defined action principles. Also, the action is a global property of an entire region in phase space (i.e. an integral over time), whereas the second law is a local statement (in time). If the region has no clear boundaries, then it will be difficult to make sense of the concept.


19 minutes ago, Markus Hanke said:

You start off with an early universe that is basically a homogeneous soup of energy.

I can't buy into this.

Energy is a property of something; it is not itself a 'something', i.e. not a substance.

24 minutes ago, Markus Hanke said:

Could you elucidate on this a bit? It’s the variation of a line integral, in the form it’s usually given. But I think I may just be missing the point you are attempting to make.

Yes, it is about a mathematical space which is an abstraction from 'real space'.

I think this space also has to be convex for the integrals to work (i.e. have meaning) in reality, though convexity is itself now a subject of much research.

The surface or hypersurface referred to contains the variations, but modern terminology now refers to extremal principles rather than variational ones.


1 hour ago, Markus Hanke said:

Could you elucidate on this a bit? It’s the variation of a line integral, in the form it’s usually given. But I think I may just be missing the point you are attempting to make.

This is kind of what I mean, in pictures. Lately I find infographics very helpful. The least-action principle doesn't look very much related to surfaces in particle theory, but in field theory it does, very strongly:

[Image: least-action principle in field theory]

This, and the developments of physics in the last couple of centuries, suggests to me a big picture that would go something like this:

[Image: least-action principle, progression across scales]

Entropy has no universal definition. Its form very much depends on what our control parameters are. The principle of least action adopts different forms at different scales, only formally reminiscent of each other. It is conceivable to me that at some scale (mesoscopic) these islands of negative entropy find their domain. Keep in mind that the boundary between micro and macroscopic (mesoscopic) is not so well understood yet.
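That entropy depends on the chosen control parameters can be illustrated with a toy sketch (my own example, not anyone's official definition): the same data yields different Shannon entropies under different coarse-grainings.

```python
import math
from collections import Counter

def shannon_entropy(samples):
    """Shannon entropy (in bits) of the empirical distribution of samples."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# The same "microstates", described at two levels of coarse-graining:
micro = [0.12, 0.48, 0.51, 0.49, 0.88, 0.11, 0.52, 0.90]

fine   = [round(x, 1) for x in micro]                   # bins of width 0.1
coarse = ["low" if x < 0.5 else "high" for x in micro]  # just two bins

print(shannon_entropy(fine))    # -> 1.5 bits: more distinguishable states
print(shannon_entropy(coarse))  # -> 1.0 bit: fewer control parameters
```

The underlying data never changes; only the description does, and the entropy changes with it.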

But I'm turning very vague and this is getting somewhat off-topic.

 


1 hour ago, joigus said:

 

[Image: least-action principle in field theory]

Sorry, I meant the integral (action) to be defined on the interior of \( \Sigma \). Conserved quantities (defined via Noether's theorem) are really meaningful at the boundary. Evolution equations have a total-divergence arbitrariness. This total divergence is a surface term. And the spatial part of the integral is irrelevant, because fields are chosen to be vanishingly small at spatial infinity. Local evolution equations are insensitive to the choice of surface. It's the objects that one uses to describe the configurations that change with every scale of approach.
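The total-divergence arbitrariness can be written out explicitly (standard notation, not specific to any one field theory): adding a four-divergence to the Lagrangian density shifts the action only by a boundary term,

```latex
\mathcal{L} \;\to\; \mathcal{L} + \partial_\mu K^\mu
\quad\Longrightarrow\quad
S \;\to\; S + \oint_{\partial\Omega} K^\mu \, d\Sigma_\mu
% by Gauss's theorem; with fields vanishing at spatial infinity the
% boundary term does not affect the local equations of motion.
```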


On 2/3/2022 at 5:26 PM, joigus said:

Markus Hanke said that it's not immediately obvious why or how islands of negative entropy can exist in a universe in which the principle of least action rules (on the one hand), while entropy generally tends to a maximum (universe-wise). I think that's basically Markus' observation that drew my attention.

I suggest looking at Frank Ramsey's work.

https://en.wikipedia.org/wiki/Frank_Ramsey_(mathematician)

Quote

A great amount of later work in mathematics was fruitfully developed out of the ostensibly minor lemma used by Ramsey in his decidability proof: this lemma turned out to be an important early result in combinatorics, supporting the idea that within some sufficiently large systems, however disordered, there must be some order. So fruitful, in fact, was Ramsey's theorem that today there is an entire branch of mathematics, known as Ramsey theory, which is dedicated to studying similar results.

 


1 hour ago, studiot said:

I suggest looking at Frank Ramsey's work.

https://en.wikipedia.org/wiki/Frank_Ramsey_(mathematician)

 

Are those nuggets of order the bits of grit in the oyster shell around which the orderings that characterize living systems might form? (Life being a "pearl" of ordered existence in the cosmic oceans of disorder could be a pleasant way of looking at it.)

Brings to mind Will Shakespeare's "sceptred isle" lines.

That Ramsey theorem sounds like a formalized Shakespeare-monkey-typewriter theorem to me.

Edited by geordief

1 hour ago, geordief said:

Are those nuggets of order the bits of grit in the oyster shell around which the orderings that characterize living systems might form? (Life being a "pearl" of ordered existence in the cosmic oceans of disorder could be a pleasant way of looking at it.)

Brings to mind Will Shakespeare's "sceptred isle" lines.

That Ramsey theorem sounds like a formalized Shakespeare-monkey-typewriter theorem to me.

You should know by now that mathematicians like to make precise statements that are as general as possible and as vague as possible.

:)

So there is no limit to the nuggets of order: they could be large, they could be small.

The mathematical phrase is 'at least'.

Ramsey was not only active in this area; he was also active in developing applied variational theory and acted as a bridge between the era of Russell and that of Gödel.
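That 'at least' flavour of Ramsey's theorem can be checked by brute force for the smallest nontrivial case (a sketch of mine, assuming only the standard result that R(3,3) = 6): every 2-colouring of the edges of K6 contains a monochromatic triangle, while K5 admits a colouring with none.

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """coloring maps each edge (i, j), i < j, to colour 0 or 1."""
    return any(
        coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def ramsey_33_holds(n):
    """True if every 2-colouring of K_n contains a monochromatic triangle."""
    edges = list(combinations(range(n), 2))
    return all(
        has_mono_triangle(n, dict(zip(edges, colors)))
        for colors in product((0, 1), repeat=len(edges))
    )

print(ramsey_33_holds(5))  # -> False: K_5 can avoid all order
print(ramsey_33_holds(6))  # -> True:  K_6 cannot (R(3,3) = 6)
```

However the 15 edges of K6 are disordered by colouring, some order (a one-colour triangle) is unavoidable.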

Edited by studiot

On 2/4/2022 at 4:31 PM, studiot said:

Energy is a property of something; it is not itself a 'something', i.e. not a substance.

You’re right of course, my terminology was sloppy. Let’s, for simplicity’s sake, say it was a homogeneous ‘soup of particles’ (though this is problematic too, but you get my drift).

On 2/4/2022 at 7:14 PM, joigus said:

Sorry, I meant the integral (action) to be defined on the interior of Σ. Conserved quantities (defined via Noether's theorem) are really meaningful at the boundary. Evolution equations have a total-divergence arbitrariness. This total divergence is a surface term. And the spatial part of the integral is irrelevant, because fields are chosen to be vanishingly small at spatial infinity. Local evolution equations are insensitive to the choice of surface. It's the objects that one uses to describe the configurations that change with every scale of approach.

Ah, I get you now. I hadn’t looked at it quite in this way...something to think about.

13 hours ago, studiot said:

I suggest looking at Frank Ramsey's work.

Great reference +1

I was not aware of this field of maths. Goes on my reading list!


On 2/5/2022 at 1:03 PM, geordief said:

Are those nuggets of order 

@joigus @Markus Hanke

 

Here is an interesting discourse on order and disorder in relation to binary strings.

From 'What is Random?' by Edward Beltrami, Springer-Verlag, 1999.

I have highlighted a short passage to read first. It explains what this is all about, i.e. what may count as nuggets of order in a binary string. That passage is in the third attachment; the rest is supporting background.

The point is how the nuggets can arise from purely statistical considerations. Entropy, after all, arises from statistical considerations of the behaviour of large ensembles.
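How such nuggets arise from pure chance can be seen in a few lines (a toy sketch of mine, not taken from Beltrami's book): the longest run of identical bits in a fair coin-flip sequence grows like log2 of the sequence length, so long ordered blocks are statistically guaranteed in long strings.

```python
import random
from itertools import groupby

def longest_run(bits):
    """Length of the longest block of identical consecutive bits."""
    return max(len(list(g)) for _, g in groupby(bits))

random.seed(0)  # fixed seed, for reproducibility
n = 10_000
bits = [random.randint(0, 1) for _ in range(n)]
print(longest_run(bits))  # typically close to log2(n), i.e. around 13
```

No mechanism produces these runs; they are "nuggets of order" forced on us by statistics alone.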

[Images: three scanned pages from Beltrami's 'What is Random?']

Edited by studiot

  • 2 weeks later...

I think a good way to look at randomness as emergent is to consider binary strings of different lengths.

The longer a binary string is, the more examples of random (i.e. incompressible) patterns there are. But each individual string can have the same probability, as if it were the outcome of a spin of a roulette wheel with far more slots than usual; thousands, say.
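A counting sketch of why incompressible strings dominate (my illustration, using a deliberately naive run-length code): fewer than 2^k descriptions are shorter than k bits, so at any length the compressible strings are a small minority.

```python
from itertools import product, groupby

def rle_length(s):
    """Bits needed for a naive run-length encoding: one bit for the
    starting symbol, then 4 bits per run length (a toy scheme)."""
    runs = [len(list(g)) for _, g in groupby(s)]
    return 1 + 4 * len(runs)

n = 12
shorter = sum(
    1 for bits in product("01", repeat=n)
    if rle_length("".join(bits)) < n
)
total = 2 ** n
print(shorter, total)  # -> 24 4096: only a small minority compress
```

Only the 24 strings with at most two runs (out of 4096) beat the raw length under this scheme; the overwhelming majority are, in this toy sense, incompressible, and the imbalance grows with the string length.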

