Will entropy be low much of the time?


Tristan L


As I understand entropy and the Second Law of Thermodynamics, things stand as follows:

A closed system has a set Mi of possible microstates between which it randomly changes. The set Mi of all possible microstates is partitioned into macrostates, resulting in a partition Ma of Mi. The members of Ma are pairwise disjoint subsets of Mi, and their union is Mi. The entropy S(ma) of a macrostate ma in Ma is the logarithm of the probability P(ma) of ma happening, which is in turn the sum Sum_{mi in ma} p(mi) of the probabilities p(mi) of all microstates mi in ma. The entropy s_{Ma}(mi) of a microstate mi with respect to Ma is the entropy of the macrostate in Ma to which mi belongs. The current entropy s_{Ma} of the system with respect to Ma is the entropy, with respect to Ma, of the microstate in which the system currently is.
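In Python-sketch form, with a made-up four-microstate toy system (the names and numbers are just for illustration, and the entropy function encodes the log-of-probability definition exactly as stated above):

    import math
    from fractions import Fraction

    # Toy closed system: four microstates, here taken to be equally likely.
    p = {"m1": Fraction(1, 4), "m2": Fraction(1, 4),
         "m3": Fraction(1, 4), "m4": Fraction(1, 4)}

    # A partition Ma of Mi into macrostates: pairwise disjoint, union = Mi.
    Ma = [frozenset({"m1"}), frozenset({"m2", "m3", "m4"})]

    def P(ma):
        # Probability of a macrostate = sum of its microstates' probabilities.
        return sum(p[mi] for mi in ma)

    def S(ma):
        # Entropy of a macrostate, in the sense defined above: log of its probability.
        return math.log(float(P(ma)))

    def s(mi, Ma):
        # Entropy of a microstate w.r.t. Ma = entropy of its macrostate.
        ma = next(block for block in Ma if mi in block)
        return S(ma)

    print(s("m1", Ma))  # log(1/4): the small macrostate gives low entropy
    print(s("m2", Ma))  # log(3/4): the big macrostate gives higher entropy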

The Second Law of Thermodynamics simply states that a closed system is more likely to pass from a less probable state into a more probable one than from a more probable state into a less probable one. Thus, it is merely a stochastic truism.

By thermal fluctuations, the fluctuation theorem, the Poincaré recurrence theorem, and generally by basic stochastic laws, the system will someday go back to a low-entropy state. However, also by basic stochastic considerations, the time during which the system has a high entropy and is thus boring and hostile to life and information processing is vastly greater than the time during which it has a low entropy and is thus interesting and friendly to info-processing and life. Thus, there are vast time swathes during which the system is dull and boring, interspersed with tiny whiles during which it is interesting. Or so it might seem...

Now, what caught my eye is that the entropy we ascribe to a microstate depends on which partition Ma of Mi into macrostates we choose. Physicists usually choose Ma in terms of thermodynamic properties like pressure, temperature and volume. Let’s call this partition into macrostates “Ma_thermo”. However, who says that Ma_thermo is the most natural partition of Mi into macrostates?

For example, I can also define macrostates in terms of, say, how well the particles in the system spell out runes. Let’s call this partition Ma_rune. Now, the system-entropy s_{Ma_thermo} with respect to Ma_thermo can be very different from the system-entropy s_{Ma_rune} with respect to Ma_rune. For example, a microstate in which all the particles spell out tiny Fehu-runes ‘ᚠ’ probably has a high thermodynamic entropy but a low rune entropy.

What’s very interesting is that at any point in time t, we can choose a partition Ma_t of Mi into macrostates such that the entropy s_{Ma_t}(mi_t) of the system at t w.r.t. Ma_t is very low. Doesn’t that mean the following?

At any time-point t, the entropy s_{Ma_t} of the system is low with respect to some partition Ma_t of Mi into macrostates. Therefore, information processing and life at time t work according to the measure s_{Ma_t} of entropy induced by Ma_t. The system entropy s_{Ma_t} rises as time goes on until info-processing and life based on the Ma_t measure of entropy can no longer work. However, at that later time t’, there will be another partition Ma_t’ of Mi into macrostates such that the system entropy is low w.r.t. Ma_t’. Therefore, info-processing and life based on the measure s_{Ma_t’} of entropy will be possible at t’. It follows that information processing and life are always possible; it’s just that different forms thereof happen at different times. Why, then, do we regard thermodynamic entropy as a particularly natural measure of entropy? Simply because we happen to live in a time during which thermodynamic entropy is low, so the life that works in our time, including us, is based on the thermodynamic measure of entropy.

Some minor adjustments might have to be made. For instance, it may be the case that a useful partition of Mi into macrostates has to meet certain criteria, e.g. that the macrostates have some measure of neighborhood and closeness to each other such that the system can pass directly from one macrostate only to the same macrostate or a neighboring one. However, won’t there still be many other measures of entropy just as natural as thermodynamic entropy?

Also, once complex structures have been established, these structures will depend on the entropy measure which gave rise to them even if the current optimal entropy measure is a little different.

Together, these adjustments would lead to the following picture:

During each time interval [t1, t2], there is a natural measure of entropy s1 with respect to which the system’s entropy is low at t1. During [t1, t2] – at least during its early part – life and info-processing based on s1 are therefore possible. During the next interval [t2, t3], s1 is very high, but another shape of entropy s2 is very low at t2. Therefore, during [t2, t3] (at least in the beginning), info-processing and life based on s1 are no longer possible, but info-processing and life based on s2 work just fine. During each time interval, the intelligent life that exists then regards as natural the entropy measure which is low in that interval. For example, at a time during which thermodynamic entropy is low, intelligent lifeforms (including humans) regard thermodynamic entropy as THE entropy, and at a time during which rune entropy is low, intelligent life (likely very different from humans) regards rune entropy as THE entropy.

Therefore my question: Doesn’t all that mean that entropy is low and that info-processing and life in general are possible for a much greater fraction of time than thought before?


1 hour ago, Tristan L said:

As I understand entropy and the Second Law of Thermodynamics, things stand as follows:

..........etc

Didn't you miss something out here?

Thermodynamic entropy has to have units of energy per degree of temperature.

Other entropies, such as your runinations (pun intended) will have different units.

In a block of flats there is (or should be) a one-to-one correspondence between the pigeonhole letter boxes at the entrance and the flats and their organisational structure.
But would you rather live in a pigeonhole or flat?
They are not the same.


Entropy is the log of the number of microstates in M_a only if P(M_a)=1 and P(neg M_a)=0.

Otherwise it's the sum of -p·log(p) over the microstates (minus the average value of log p).

Now, as a function of the p's, -Sum p·log(p) always complies with the observable-independent property of concavity:

https://link.springer.com/article/10.1007/BF00665928
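In symbols (this is just the textbook statement, nothing specific to this thread): the general entropy is an average, and it collapses to a simple log-count only in the equiprobable case,

\[
S \;=\; -\sum_i p_i \log p_i \;=\; \langle -\log p \rangle ,
\qquad
p_i = \tfrac{1}{W} \ \text{for all } i \;\Longrightarrow\; S = \log W .
\]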

There are interesting points in what you say. I cannot be 100 % sure I've understood everything. Something that reminds me a lot of what you're saying is Bertrand's circle paradox:

https://en.wikipedia.org/wiki/Bertrand_paradox_(probability)

IOW: the maximal-entropy states p_i depend on the observable to be measured.

But general properties of entropy don't. Thermo's 2nd law is unaffected, I think. It's quite solid.

I'm not completely sure my arguments (if any here) are watertight. But I'm trying to offer you some food for thought that I think goes in the direction you're reasoning.


10 hours ago, studiot said:

Didn't you miss something out here?

Thermodynamic entropy has to have units of energy per degree of temperature.

Other entropies, such as your runinations (pun intended) will have different units.

The units are only due to a constant of proportionality (the Boltzmann constant). However, in essence, every entropy is a number defined in terms of probability, including both thermodynamic entropy (defined statistical-mechanically and without unneeded constants of proportionality) and "runish entropy". What's essential about thermodynamic entropy is that it's defined in terms of thermodynamic macrostates. "Rune entropy", on the other hand, is defined in terms of how well the particles spell out runes.

 

10 hours ago, studiot said:

In a block of flats there is (or should be) a one-to-one between the pigeonhole letter boxes at the entrance and the flats and their organisational structure.

But would you rather live in a pigeonhole or flat?
They are not the same.

Of course I'd rather live in a flat, but that's only because I'm a human and not a pigeon. Translating this metaphor, it means that I'd rather live in a universe with low thermodynamic entropy rather than low runic entropy, but only since I'm a thermodynamic lifeform and not a runish one. Maybe at a time in the far future when thermodynamic entropy is high but runish entropy is low, there will be an intelligent runish lifeform asking another one whether it likes to live in the low-rune-entropy universe it knows or is so unreasonable as to want to live in a universe with low thermodynamic entropy.

The thermodynamic world is indeed very different from the runish world, but I see no reason for thermo-chauvinism. Low thermo-entropy is good for thermo-life, and low rune-entropy is good for runish life. Alice and Bob can have very different machines, where Alice's is built such that it uses a pressure difference between two chambers, and Bob's machine is built such that it extracts useful work from the Fehu-state I described above, e.g. by having tiny Fehu-shaped chambers in it or something. It's just that in our current world, Alice's machine is much more useful as thermo-entropy is low while rune-entropy is high at the current time.

Isn't that right?

6 hours ago, joigus said:

Entropy is the log of the number of microstates in M_a only if P(M_a)=1 and P(neg M_a)=0.

Otherwise it's the sum of -p·log(p) over the microstates (minus the average value of log p).

Yeah, that's right. My bad. I should have said that if all microstates are equally likely, the entropy of a macrostate is the logarithm of the probability of that macrostate (up to an additive constant). Corresponding changes have to be made throughout my text. However, that doesn't change anything about its basic tenets, regardless of whether the microstates are equally likely or not, does it? I hope and think not, but please correct me if I'm wrong.

6 hours ago, joigus said:

There are interesting points in what you say. I cannot be 100 % sure I've understood everything. Something that reminds me a lot of what you're saying is Bertrand's circle paradox:

https://en.wikipedia.org/wiki/Bertrand_paradox_(probability)

IOW: the maximal-entropy states p_i depend on the observable to be measured.

Exactly. My point is that if I choose, say, having the particles arranged so as to spell out runes rather than thermodynamic properties like pressure, temperature and volume, I get a very different entropy measure and thus also a different state of maximal entropy. So, rune-entropy can be low while thermo-entropy is high. Doesn't that mean that runish life is possible in a universe with low rune entropy? Why should e.g. temperature be more privileged than rune-spelling?

 

6 hours ago, joigus said:

But general properties of entropy don't. Thermo's 2nd law is unaffected, I think. It's quite solid.

Yes, I fully agree. On average, thermo-entropy increases with time, and when it has become very high, it will take eons to spontaneously become low again. The same thing goes for rune-entropy. However, since there are so humongously many measures of entropy, there will always be at least one that is falling and at least one that is very low at any given time. Therefore, life will always be possible. When thermodynamic entropy becomes too high, thermo-life stops, but then, e.g., rune-entropy is low, so rune-life starts. When rune-entropy has become too high, runish life ends and is again replaced by another shape of life. My point is that rather than being interesting and life-filled for very short whiles separated by huge boring lifeless intervals, the universe (imagine it to be a closed system, for expansion and similar stuff is another topic) will be interesting and life-filled for much of the time. It's not life itself that needs eons to come again, it's only each particular shape of life that takes eons to come again. That's my point, which I hope is right. Perhaps some of the entropy measures aren't as good as others, but is thermo-entropy really better than every other measure of entropy?

6 hours ago, joigus said:

I'm not completely sure my arguments (if any here) are watertight.

As far as I can see, I think they are.

6 hours ago, joigus said:

I'm trying to offer you some food for thought that I think goes in the direction you're reasoning.

Yes, it certainly does!


34 minutes ago, Tristan L said:

Of course I'd rather live in a flat, but that's only because I'm a human and not a pigeon. Translating this metaphor, it means that I'd rather live in a universe with low thermodynamic entropy rather than low runic entropy, but only since I'm a thermodynamic lifeform and not a runish one. Maybe at a time in the far future when thermodynamic entropy is high but runish entropy is low, there will be an intelligent runish lifeform asking another one whether it likes to live in the low-rune-entropy universe it knows or is so unreasonable as to want to live in a universe with low thermodynamic entropy.

The thermodynamic world is indeed very different from the runish world, but I see no reason for thermo-chauvinism. Low thermo-entropy is good for thermo-life, and low rune-entropy is good for runish life. Alice and Bob can have very different machines, where Alice's is built such that it uses a pressure difference between two chambers, and Bob's machine is built such that it extracts useful work from the Fehu-state I described above, e.g. by having tiny Fehu-shaped chambers in it or something. It's just that in our current world, Alice's machine is much more useful as thermo-entropy is low while rune-entropy is high at the current time.

Thank you for considering my comments.

I'm sorry my analogy was not clear enough for you to understand. So try this one instead.

Both chess and draughts are played on the same board.

But they are very different games with very different rules, and different end results.
Events can happen in chess that cannot happen in draughts and vice versa.

The same can be said of the partitions of your master set.

This carries over to the other part of your answer.

40 minutes ago, Tristan L said:

The units are only due to a constant of proportionality (the Boltzmann constant). However, in essence, every entropy is a number defined in terms of probability, including both thermodynamic entropy (defined statistical-mechanically and without unneeded constants of proportionality) and "runish entropy". What's essential about thermodynamic entropy is that it's defined in terms of thermodynamic macrostates. "Rune entropy", on the other hand, is defined in terms of how well the particles spell out runes.

There are umpteen relationships in physics where something is proportional to something else.
And very often the constant of proportionality carries the units, as when strain (a dimensionless quantity) is proportional to stress.
But that does not mean we can disregard the constant and say that stress and strain are therefore the same; the same goes for the two entropies.
Otherwise you could model one on the other, but if you tried you would obtain conflicting results, just as if you tried to play chess with a draughts set or vice versa.

Information entropy and thermodynamic entropy are not the same, nor subject to the same laws (as with the rules of chess and draughts).

 

 


1 hour ago, Tristan L said:

since there are so humongously many measures of entropy

I think somewhere around here could be the origin of the "fallacy."

Please be aware I'm not trying to prove you wrong. Maybe you're on to something maybe you aren't. Either way it's interesting!!! You're making sense and I want to oblige.

The cardinality (number of possibilities) of microstates is what's humongously big.

Macroscopic ways of organizing the data are not growing like factorials, or products of factorials, or products of factorials corrected by smaller factorials in the denominators. They're kept constant (maybe humongously big in some sense, but constant), fixed by the number of descriptions you wish to give yourself. Now make the number of microstates grow. That's what's going to dominate everything. The effect of taking the number of microstates to infinity is going to be the overriding effect.


12 hours ago, Tristan L said:

On average, thermo-entropy increases with time, and when it has become very high, it will take eons to spontaneously become low again.

Here I think you're being persuaded by a subtle misconception. When entropy has reached a maximum, the system has undergone total thermalization and nothing statistical depends on time. Things keep changing, but only microscopically. All the physical parameters are fixed at their average value. Any changes will manifest themselves in second order effects or fluctuations. If temperature is high, the system will be very efficient at erasing these deviations from equilibrium very quickly. Some months ago, I developed a picture meant to illustrate these concepts, only for educational purposes, and inspired by some musings due to physicist Tony Zee, that temperature is some kind of inverse relaxation time for the system, or proportional to it. It probably overlaps with formalism that other people have developed, because in physics it's very difficult to come up with anything that's really original and new.

So in your initial OP, there is already a problem, and I should have detected it right away had I been cleverer. Namely: Will entropy be low much of the time?

There is no time in entropy. Entropy kills time. That's its job description. I have a perception that we're faced with entropy at the surface of a black hole, because something is killing time there too! But those are just speculations.

Although I very much like your post. Those are very intelligent questions. +1

I hope that helps.


15 minutes ago, joigus said:

Here I think you're being persuaded by a subtle misconception. When entropy has reached a maximum, the system has undergone total thermalization and nothing statistical depends on time. Things keep changing, but only microscopically. All the physical parameters are fixed at their average value. Any changes will manifest themselves in second order effects or fluctuations.
...

There is no time in entropy. Entropy kills time.

 

This is the unanswered question. +1

Consider a system of two molecules in some volume.
Unless both molecules have the same velocity, they do not have the same kinetic energy, and therefore the internal energy is not evenly distributed between the molecules, as maximum entropy would require.

So the famous 'heat death' of the universe must be a static (as in unchanging) situation from the point of view of maximum entropy.

But this view ignores the fact that there are twin drivers in the thermodynamic world that often pull in opposite directions.

The principle of minimum energy can be used to devise a system that will oscillate indefinitely at fixed (maximum) entropy.

 

 

 


25 minutes ago, studiot said:

[...]

Consider a system of two molecules in some volume.
Unless both molecules have the same velocity, they do not have the same kinetic energy, and therefore the internal energy is not evenly distributed between the molecules, as maximum entropy would require.

So the famous 'heat death' of the universe must be a static (as in unchanging) situation from the point of view of maximum entropy.

But this view ignores the fact that there are twin drivers in the thermodynamic world that often pull in opposite directions.

[...]

Things to say, but very little time now. My entropy must be acting up. ;)

A whole new ballgame, both with the two molecules and with the universe. For completely different reasons. One is very small N (number of DOF), and the other the possibility of frustrated thermalization due to cosmological parameters. Maybe we should get @Mordred interested in the discussion.

39 minutes ago, studiot said:

The principle of minimum energy can be used to devise a system that will oscillate indefinitely at fixed (maximum) entropy.

Very interesting case for quantum systems near T=0, probably done to death by the experts but interesting to discuss nonetheless, and see if we learn something from discussion.

Talk to you later. Very stimulating conversation.


First Answer to studiot:

13 hours ago, studiot said:

Thank you for considering my comments.

You're welcome.

13 hours ago, studiot said:

I'm sorry my analogy was not clear enough for you to understand. So try this one instead.

Both chess and draughts are played on the same board.

But they are very different games with very different rules, and different end results.
Events can happen in chess that cannot happen in draughts and vice versa.

The same can be said of the partitions of your master set.

I'm sorry to have to point out that apparently, you do not understand your own analogy well enough. Therefore, let me make it clearer to you. Your flats being in a one-to-one correspondence with your pigeonholes is analogous to two games being isomorphic to each other, which is in turn like two partitions of the set of microstates being isomorphic to each other. Chess and checkers, however, are not isomorphic to each other; there is no one-to-one correspondence between their possible game configurations and allowed moves. That's why they work differently.

Regarding the partitions, there are some that aren't isomorphic to each other, and others that are. The thermodynamic partition is isomorphic to every partition that we get by taking the thermo-partition and then applying an arbitrary permutation of the microstates. Not all of these partitions are distinct, but there are still many partitions that are isomorphic to the thermo-partition yet distinct from it. So, there are many measures of entropy equivalent to thermo-entropy but distinct from it, and the system will much more often be

1. in a state of low entropy w.r.t. some partition isomorphic to the thermo-partition

than

2. in a state of low entropy w.r.t. the thermo-partition itself.

13 hours ago, studiot said:

There are umpteen relationships in physics where something is proportional to something else.
And very often the constant of proportionality carries the units, as when strain (a dimensionless quantity) is proportional to stress.
But that does not mean we can disregard the constant and say that stress and strain are therefore the same; the same goes for the two entropies.
Otherwise you could model one on the other, but if you tried you would obtain conflicting results, just as if you tried to play chess with a draughts set or vice versa.

Thermodynamic entropy is just information entropy w.r.t. the thermo-partition multiplied by the Boltzmann constant, afaik. They are not only defined in terms of isomorphic partitions, but in terms of one and the same partition. One is just the other multiplied by a constant. Could you please tell me how you would get conflicting results with them?
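Written out (with natural logarithms, so the information entropy is in nats, and with W the number of equally likely microstates in a thermo-macrostate):

\[
S_{\text{thermo}} \;=\; k_B \ln W \;=\; k_B \, S_{\text{info}},
\qquad
S_{\text{info}} \;=\; \ln W .
\]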

As I've already said, chess and checkers are not isomorphic, unlike thermodynamic entropy and information entropy w.r.t. the thermo-partition. Thermodynamic entropy vs. information entropy w.r.t. the thermo-partition is like playing chess on some board with chess pieces of a certain size vs. playing chess on a physically bigger board with bigger chess pieces, but with the number of squares and everything else kept the same. Therefore, we can safely equate the two and just talk of thermo-entropy, and to make the math a bit easier, we'll not use unneeded units that distract from the essence.

13 hours ago, studiot said:

Information entropy and thermodynamic entropy are not the same, nor subject to the same laws (as with the rules of chess and draughts).

I've already said why info-entropy w.r.t. the thermo-partition and thermo-entropy are essentially the same (not just isomorphic) and why their relationship is very different from that between chess and checkers, which aren't even isomorphic. But I ask you again: since when are info-entropy w.r.t. the thermo-partition and thermo-entropy subject to different laws?

Please do tell.

**************************************************************************************************************
 

Answer to joigus:

12 hours ago, joigus said:

Please be aware I'm not trying to prove you wrong. Maybe you're on to something maybe you aren't. Either way it's interesting!!! You're making sense and I want to oblige.

 

 

Of course I'm aware of that. Also, don't get me wrong and think that I want to be right in order to be right. I want to be right since I don't like the heat death of the Universe at all :(. But of course, I won't let that make me bend results. From a purely scientific point of view, my being right and my being wrong are indeed both interesting, but from a life-loving perspective, I really do hope to be right :). I find your thoughts very interesting.

12 hours ago, joigus said:

I think somewhere around here could be the origin of the "fallacy."

 

The cardinality (number of possibilities) of microstates is what's humongously big.

Macroscopic ways of organizing the data are not growing like factorials, or products of factorials, or products of factorials corrected by smaller factorials in the denominators. They're kept constant (maybe humongously big in some sense, but constant), fixed by the number of descriptions you wish to give yourself. Now make the number of microstates grow. That's what's going to dominate everything. The effect of taking the number of microstates to infinity is going to be the overriding effect.

 

Actually, the number of partitions of a set with n elements is the Bell number B_n, and the sequence of Bell numbers does grow quite quickly. So if we have n microstates, there are B_n ways to define macrostates. So, while for a particular kind of choosing a partition, the number of macrostates in that partition might get overwhelmed by the number of microstates, for any number of microstates, there is a way of partitioning them such that the number of macrostates in that partition is not overwhelmed.

Now, of course, not all partitions are isomorphic, but even the number of partitions isomorphic to a given partition is very big in many cases. I've calculated (hopefully right) that for any positive whole number k, sequence (l_1, ..., l_k) of positive whole numbers, and strictly rising sequence (m_1, ..., m_k) of positive whole numbers, there are

(l_1*m_1 + ... + l_k*m_k)! / (m_1!^l_1 * l_1! * ... * m_k!^l_k * l_k!)

ways to partition a set with n = l_1*m_1 + ... + l_k*m_k members into l_1 sets of m_1 elements each, ..., and l_k sets of m_k elements each. Here, k, (l_1, ..., l_k) and (m_1, ..., m_k) uniquely determine an equivalence class of isomorphic partitions, if I'm right. This result is consistent with the first few Bell numbers. Thus, since the thermo-partition isn't trivial (k=1, l_1=1, m_1=n or k=1, l_1=n, m_1=1), there are many partitions isomorphic but not identical to the thermo-partition, and their number likely does grow humongously as the number of microstates rises.
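Here's a short Python check of that count (my own sketch; a "shape" is just the list of block sizes, encoding the m_j and l_j together). Summing the formula over all shapes of n does reproduce the Bell numbers:

    from math import factorial, prod
    from collections import Counter

    def int_partitions(n, largest=None):
        # Yield the shapes: integer partitions of n as non-increasing lists.
        largest = n if largest is None else largest
        if n == 0:
            yield []
            return
        for k in range(min(n, largest), 0, -1):
            for rest in int_partitions(n - k, k):
                yield [k] + rest

    def count_of_shape(shape):
        # Set partitions of an n-set with the given block sizes:
        # n! / (m_1!^l_1 * l_1! * ... * m_k!^l_k * l_k!)
        n = sum(shape)
        mult = Counter(shape)  # block size m_j -> number l_j of such blocks
        denom = prod(factorial(m) ** l * factorial(l) for m, l in mult.items())
        return factorial(n) // denom

    def bell(n):
        return sum(count_of_shape(s) for s in int_partitions(n))

    print([bell(n) for n in range(1, 8)])  # [1, 2, 5, 15, 52, 203, 877]
    print(count_of_shape([2, 1, 1]))       # 6 ways to split a 4-set into 2+1+1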

 

Take the following very simple system, in which we'll assume time is discrete to make it even simpler: We have n bits which are either 1 or 0. In each step and for each bit, there's a probability of p that the bit will change. The bits change independently of each other. Let's interpret the bits as LEDs of the same brightness which are either on or off. The microstates of the system are the ways in which the individual LEDs are on or off. We can then define two microstates as belonging to the same macrostate if they both have the same overall brightness. If we take n = 4, for example, the microstates are (0, 0, 0, 0), (0, 0, 0, 1), ..., (1, 1, 1, 1), sixteen in total. The brightness-macrostates are

{(0, 0, 0, 0)} (brightness = 0, probability = 1/16),

{(0, 0, 0, 1), (0, 0, 1, 0), (0, 1, 0, 0), (1, 0, 0, 0)} (brightness = 1, probability = 4/16),

{(0, 0, 1, 1), (0, 1, 0, 1), (1, 0, 0, 1), (0, 1, 1, 0), (1, 0, 1, 0), (1, 1, 0, 0)} (brightness = 2, probability = 6/16),

{(0, 1, 1, 1), (1, 0, 1, 1), (1, 1, 0, 1), (1, 1, 1, 0)} (brightness = 3, probability = 4/16),

{(1, 1, 1, 1)} (brightness = 4, probability = 1/16).

Simple calculations show us that the system will on average evolve from brightness 0 or brightness 4 (low probability, low entropy) to brightness 2 (high probability, high entropy).
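A quick simulation backs this up (a sketch; the flip probability 0.1 and the run counts are arbitrary choices of mine):

    import random
    from collections import Counter

    n, p_flip, steps, runs = 4, 0.1, 50, 20000

    def step(state):
        # Each LED flips independently with probability p_flip.
        return tuple(bit ^ (random.random() < p_flip) for bit in state)

    final = Counter()
    for _ in range(runs):
        state = (0, 0, 0, 0)       # start in the brightness-0 macrostate
        for _ in range(steps):
            state = step(state)
        final[sum(state)] += 1     # macrostate = total brightness

    for b in range(n + 1):
        print(b, round(final[b] / runs, 3))
    # Approaches the macrostate probabilities 1/16, 4/16, 6/16, 4/16, 1/16:
    # brightness 2, the highest-entropy macrostate, dominates.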

However, when the system is in the brightness-macrostate of brightness 2, which has maximum brightness-entropy, e.g. by being in microstate (0, 1, 1, 0), we can simply choose a different measure of entropy which is low, by choosing the partition into brightness'-macrostates, where the brightness' of a microstate (x, y, z, w) = the brightness of the microstate (x, 1-y, 1-z, w):

{(0, 1, 1, 0)} (brightness' = 0, probability = 1/16),

{(0, 1, 1, 1), (0, 1, 0, 0), (0, 0, 1, 0), (1, 1, 1, 0)} (brightness' = 1, probability = 4/16),

{(0, 1, 0, 1), (0, 0, 1, 1), (1, 1, 1, 1), (0, 0, 0, 0), (1, 1, 0, 0), (1, 0, 1, 0)} (brightness' = 2, probability = 6/16),

{(0, 0, 0, 1), (1, 1, 0, 1), (1, 0, 1, 1), (1, 0, 0, 0)} (brightness' = 3, probability = 4/16),

{(1, 0, 0, 1)} (brightness' = 4, probability = 1/16).

The system will also tend to change from low brightness'-entropy to high brightness'-entropy, but then I can choose yet another measure of brightness, brightness'', according to which the entropy is low. The thing is that at any time, I can choose a partition of the set of microstates into macrostates which is isomorphic to the brightness-partition and for which the current microstate has minimum entropy.

But anyway, the system will someday return to the low brightness-entropy state of brightness=4. Since it is so simple, we can even observe that spontaneous fall in brightness-entropy.

1 hour ago, joigus said:

Here I think you're being persuaded by a subtle misconception. When entropy has reached a maximum, the system has undergone total thermalization and nothing statistical depends on time. Things keep changing, but only microscopically. All the physical parameters are fixed at their average value. Any changes will manifest themselves in second order effects or fluctuations. If temperature is high, the system will be very efficient at erasing these deviations from equilibrium very quickly. Some months ago, I developed a picture meant to illustrate these concepts, only for educational purposes, and inspired by some musings due to physicist Tony Zee, that temperature is some kind of inverse relaxation time for the system, or proportional to it. It probably overlaps with formalism that other people have developed, because in physics it's very difficult to come up with anything that's really original and new.

So in your initial OP, there is already a problem, and I should have detected it right away had I been cleverer. Namely: Will entropy be low much of the time?

There is no time in entropy. Entropy kills time. That's its job description. I have a perception that we're faced with entropy at the surface of a black hole, because something is killing time there too! But those are just speculations.

Although I very much like your post. Those are very intelligent questions. +1

I hope that helps.

Does that mean for our simple system above that the microstates (0, 0, 1, 1), (0, 1, 0, 1), (1, 0, 0, 1), (0, 1, 1, 0), (1, 0, 1, 0), and (1, 1, 0, 0) somehow magically stop the flow of time? Entropy is emergent, right? So, how can it stop something as fundamental as time?

You yourself said that microscopic changes will go on happening, which means that there must always be time. By the Poincaré recurrence theorem and the fluctuation theorem, the system will almost certainly go back to its original state of low entropy. It just needs a very, very long time to do that. After all, the Second Law isn't some law of magic which says that a magical property called entropy defined in terms of some magically unique partition must always rise, right?

And spontaneous entropy falls have been observed in very small systems, haven't they?

Again, I find your ideas very stimulating and fruitful.

Second Answer to studiot:

1 hour ago, studiot said:
1 hour ago, joigus said:

Here I think you're being persuaded by a subtle misconception. When entropy has reached a maximum, the system has undergone total thermalization and nothing statistical depends on time. Things keep changing, but only microscopically. All the physical parameters are fixed at their average value. Any changes will manifest themselves in second order effects or fluctuations.
...

There is no time in entropy. Entropy kills time.

 

This is the unanswered question. +1

No longer. See my answer to that above.


57 minutes ago, Tristan L said:

I'm sorry to have to point out that apparently, you do not understand your own analogy well enough

And I'm sorry to point out that you seem to me to be bent on finding fault with my attempts to explain my principal point to you, rather than understanding the point itself.

As I understand your thesis here, you are proposing that there is one and only one Law or rule that applies to your partitions, that due to Boltzmann.

However, it remains your responsibility to support your thesis, so please explain the anomalous first ionisation energies of Nitrogen, Phosphorus and Arsenic in terms of your proposition.

 


8 hours ago, Tristan L said:

for any number of microstates, there is a way of partitioning them such that the number of macrostates in that partition is not overwhelmed.

OK. Maybe so, but I see at least three problems with your strategy.

1st) It's not about defining macrostates, based on arbitrary assumptions, such that the number of macrostates is never overwhelmed by the number of microstates. Microstates, for any reasonable definition of them, are vastly more numerous than macrostates. That kind of reasoning in science is called ad hoc, and I'm sure you know why it's not a useful avenue. Besides, what do these macrostates mean? How do they play in the general structure of known physics?

2nd) Macroscopic distinctions in physics always have to be measured. In the case of pressure, temperature or volume, it's through pressure gauges, thermometers and length scales marked on the container. How do you measure your runes?

3rd) I've been talking about macroscopic distinctions with no further qualifications, but the truth is that physics only permits you to apply the laws of statistical mechanics in a reasonable way (one that allows you to subdivide the system into a so-called canonical/macrocanonical ensemble and get to something like the Maxwell-Boltzmann distribution) when you consider quantities whose balances between the cells of the canonical system can be reasoned about in terms of local exchange. IOW: quantities that satisfy local conservation laws. That narrows the list down essentially to energy, number of entities (mass, moles, molecules), angular momentum, linear momentum, or things directly related to energy, charge conservation and rotation, like magnetic moments, etc.

I'm sorry but, no matter how interesting runes are in your theoretical mind, and they may be from a POV of pure intellectual exercise, nature doesn't care about them. Runes, and other fantastically-complicated-to-define --and fantastically irrelevant-- quantities are probably created and destroyed every nanosecond without being transferred anywhere near where they are formed. There's no exchange of runes. There's no local conservation of runes. There's no equipartition for runes. There's no near-T=0 freezing of the rune DOF. And I even see more severe problems with QM, in which most observables you can write down are really incompatible. That's probably why runes don't appear in the laws of statistical mechanics.

As to time-stopping, it was only meant as an intuitive phrasing. From the macroscopic POV, time does disappear from the problem once equilibrium is reached. Period. If you're not convinced, try to sit down in front of a gas at room temperature and see how long you have to wait for a rune to appear, or AAMOF for anything noticeable to happen, and how long it takes for it to disappear after you've waited several Earth lifetimes' worth of time for it to appear. That's a simple enough experiment to conduct.

And there are some more things, but in due time.

 


Here is another simple problem:

 

[Diagram: a sealed adiabatic tube divided into two chambers, A and B, by a frictionless adiabatic piston]

 

Suppose you have a sealed adiabatic tube containing an adiabatic frictionless piston dividing the tube into two chambers, A and B as shown.

Let both sides of the system contain an ideal gas.

Discuss the time evolution of the system in terms of entropy.


Answer to joigus:

From what you've said, I think that I finally get where the problem lies:

The set of all possible microstates isn't a simple unstructured set Mi, but a highly structured set (Mi, (STRUCTURE, e.g. relations and functions)). Partitions Ma1, Ma2 are isomorphic if and only if there is a partition-isomorphism between them (an isomorphism of them as bare partitions) which respects STRUCTURE. Also, only partitions which respect STRUCTURE are eligible as partitions into macrostates. For example, if STRUCTURE is made up of a linear order on Mi, only non-crossing partitions are allowed.

In the case of our simple system, there is a "neighborhood" relation on the set of microstates, which tells us which state can become which other states with only one LED turning on or off. The brightness-partition, the brightness'-partition, and the rest of the sixteen partitions which we get from the brightness-partition by defining, for each bit-tuple (f, u, Þ, a), a new brightness-measure b_(f, u, Þ, a) through b_(f, u, Þ, a)(x, y, z, w) := brightness(f*x + (1-f)*(1-x), u*y + (1-u)*(1-y), Þ*z + (1-Þ)*(1-z), a*w + (1-a)*(1-w)), are isomorphic to each other in the strong sense that they and their isomorphisms respect the neighborhood relation. However, simply exchanging e.g. (1, 1, 1, 0) with (0, 0, 0, 1) in the brightness-partition yields a forbidden partition (call it the partition in terms of "brighthood"), since the other microstates (1, 1, 0, 1), (1, 0, 1, 1), (0, 1, 1, 1) in the same brighthood-macrostate as (0, 0, 0, 1) each differ from the one and only brighthood-4 microstate (1, 1, 1, 1) by only one LED, but (0, 0, 0, 1) differs from it by three LEDs.
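This can be checked mechanically. Here's a sketch (my own encoding, reading "respects the neighborhood relation" as: the relabeling preserves one-LED neighborliness, th.i. Hamming distance 1); it confirms that all sixteen flip-maps preserve neighborliness while the brighthood swap does not:

    from itertools import product

    states = list(product((0, 1), repeat=4))

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def flip_map(mask):
        # One of the 16 relabelings behind the brightness' family:
        # XOR each LED with the corresponding bit of the mask (f, u, Þ, a).
        return lambda s: tuple(x ^ m for x, m in zip(s, mask))

    def preserves_neighborliness(f):
        return all(hamming(f(a), f(b)) == 1
                   for a in states for b in states if hamming(a, b) == 1)

    print(all(preserves_neighborliness(flip_map(m)) for m in states))  # True

    def brighthood_swap(s):
        # Exchange (1, 1, 1, 0) and (0, 0, 0, 1); leave everything else alone.
        table = {(1, 1, 1, 0): (0, 0, 0, 1), (0, 0, 0, 1): (1, 1, 1, 0)}
        return table.get(s, s)

    print(preserves_neighborliness(brighthood_swap))  # False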

Likewise, the many partitions which are isomorphic to the thermo-partition in the partition-sense don't respect the additional structure (of which there is a lot) given by the things which you've mentioned. If I understand you in the right way, the one and only partition respecting all that additional structure is the thermo-partition. Is that right?

23 hours ago, joigus said:

Besides, what do these macrostates mean? How do they play in the general structure of known physics?

They mean what they mean - sets of microstates, and they are sets of microstates allowed by the known laws of physics.

23 hours ago, joigus said:

How do you measure your runes?

Perhaps with a rune-shaped "sock" woven out of thin threads which tear when a strong wind blows against them. As soon as an unusually big number of gas particles assembles inside the sock, they will cause an outflowing wind that rips the sock apart.

23 hours ago, joigus said:

3rd)

Yes, I think that you're right.

Your three points have been important for my above analysis.

Regarding the time-stopping, I think that I now get what you mean: There are vast swathes of time during which the thermo-entropy is maximal or almost maximal (after all, it's always slightly and randomly fluctuating), but since nothing interesting happens during these times, there's nothing and no one that observes them, so in effect, they're non-existent. So, as soon as life becomes impossible due to too high entropy, the Poincaré recurrence time will pass as if in the blink of an eye since no one is there to observe it, and after the Universe has become interesting again, life can again take hold. So though you've shown that my idea of the Universe being interesting much of the time is wrong, you've also shown that the Universe is actually interesting most of the time since from a macroscopic POV, the boring times don't exist. Am I right?

23 hours ago, joigus said:

From the macroscopic POV, time does disappear from the problem once equilibrium is reached. Period. If you're not convinced, try to sit down in front of a gas at room temperature and see how long you have to wait for a rune to appear, or AAMOF for anything noticeable to happen, and how long it takes for it to disappear after you've waited several Earth lifetimes' worth of time for it to appear. That's a simple enough experiment to conduct.

But after a very, very long time (which is nonetheless puny compared to Graham's number of years, for instance), everything will be as it once was, by the Poincaré recurrence theorem. Therefore, time (in the macroscopic sense) will come back one day, and will in fact come back endlessly often. By the same theorem, runes will spontaneously appear in the gas, but it will take much longer than the age of the Universe, so we can't expect to see something like that happen in a practical experiment. But on the whole, the Universe will be interesting for an infinitely long macroscopic time (which isn't continuous, of course), and also boring for an infinitely long fundamental (but not macroscopic) time. Of course, that doesn't take the evolution of space-time itself into account (e.g. expansion, dark energy asf.).

Your idea that time doesn't macroscopically exist when entropy is maximal or near-maximal has actually proven quite hope-giving, I hope.

Answer to studiot:

On 6/1/2020 at 1:01 PM, studiot said:

And I'm sorry to point out that you seem to me to be bent on finding fault with my attempts to explain my principal point to you, rather than understanding the point itself.

Actually, I'm bent on finding the truth, and I think that I might've come pretty close with my above analysis in this post.

You claimed that thermo-entropy and info-entropy behave differently and obey different laws and, if I understand you in the right way, that this is so only because they're just proportional and not identical. You still owe me an explanation for that.

Your likening of the Boltzmann constant to the constant of proportionality between stress and strain is not valid since the former is a universal constant whereas the latter is not. After all, we could measure temperature in joules, and then the Boltzmann constant would have no units.

On 6/1/2020 at 1:01 PM, studiot said:

As I understand your thesis here, you are proposing that there is one and only one Law or rule that applies to your partitions, that due to Boltzmann.

However, it remains your responsibility to support your thesis, so please explain the anomalous first ionisation energies of Nitrogen, Phosphorus and Arsenic in terms of your proposition.

I never said that there is only one rule applying to my partitions. I only wondered whether there is only one partition which is isomorphic to the thermo-partition. In a purely partitional sense, that is certainly not the case, but my analysis above, based partly on what joigus has said, suggests that there may indeed be no other partition which is isomorphic to the thermo-partition in the sense of respecting the additional structure. The anomalous first ionisation energies of Nitrogen, Phosphorus and Arsenic are explained by QM, but as I said, I never said that one law was enough for explaining everything. I was only talking about isomorphy.

This discussion is really interesting.

Question for joigus and studiot:

Even if the thermo-partition is the only one in its equivalence class w.r.t. "strong" isomorphy, is it really the only interesting one? Can we really be sure e.g. that no extremely complex computations are actually going on in the seemingly dull and boring air around us?

After all, if Alice and Bob send each other encrypted messages, it looks like nonsense to us, but they may still be having a very meaningful discussion about statistical thermodynamics.


19 minutes ago, Tristan L said:

Regarding the time-stopping, I think that I now get what you mean: There are vast swathes of time during which the thermo-entropy is maximal or almost maximal (after all, it's always slightly and randomly fluctuating), 

Exactly! Have you heard of Boltzmann brains?

 

21 minutes ago, Tristan L said:

So though you've shown that my idea of the Universe being interesting much of the time is wrong, you've also shown that the Universe is actually interesting most of the time since from a macroscopic POV, the boring times don't exist. Am I right?

Well, I haven't shown you that your idea is wrong. I haven't shown you much, AAMOF. I've argued to you, I think, it's not plausible if you take it seriously to make a model of what a gas in a box is going to actually do. I've argued from general concepts derived from what I know. But there are qualifications to be made in cosmology. I would have to think about them deep and hard, or maybe have some expert in cosmology tell us what they think.

The universe is not a boring place most of the time we are given to watch it because, in the case of the Earth, it's governed by fluxes of energy, coming in and going out. Open systems like those do not undergo Poincaré recurrences. They are the kind of systems that can hold something like life. There are very interesting models of systems which undergo self-organization under those conditions.

But the universe is not like a closed box which thermalizes after some time. And I don't think the universe as a whole satisfies Poincaré recurrences. That's what I meant when I said,

23 hours ago, joigus said:

And there are some more things, but in due time.

So if you don't like a universe that will thermally die, who knows, maybe that's not gonna happen, and you (or some version of you in some far, far away future or in some far, far away cluster of the multiverse) will have that expectation fulfilled. Does that help? ;)

Maybe the universe repeats itself geometrically, by some periodicity condition. There may be many possibilities.

44 minutes ago, Tristan L said:

Question for joigus and studiot:

Even if the thermo-partition is the only one in its equivalence class w.r.t. "strong" isomorphy, is it really the only interesting one? Can we really be sure e.g. that no extremely complex computations are actually going on in the seemingly dull and boring air around us?

After all, if Alice and Bob send each other encrypted messages, it looks like nonsense to us, but they may still be having a very meaningful discussion about statistical thermodynamics.

They're going on. For example, some of the molecules I'm breathing now will be gasped by the last breathing creature that will live on Earth, and others were inhaled by the 1st breathing creature that lived on Earth. But I'm none the wiser. Yet, if the temperature goes up one degree, I will notice.

Nice conversation.


1 hour ago, Tristan L said:

Question for joigus and studiot:

Even if the thermo-partition is the only one in its equivalence class w.r.t. "strong" isomorphy, is it really the only interesting one? Can we really be sure e.g. that no extremely complex computations are actually going on in the seemingly dull and boring air around us?

After all, if Alice and Bob send each other encrypted messages, it looks like nonsense to us, but they may still be having a very meaningful discussion about statistical thermodynamics.

I really don't understand what you are getting at here. I am not saying the thermo partition is the only one. Quite the reverse.

That is the whole point about my flats and pigeonholes analogy or chessboard squares, that you have yet to understand.

Perhaps this statement of yours will help, since I am matching the flats/pigeonholes or squares as a definition of particular classes (not of equivalence classes in general, but different ones).

There is a one-to-one correspondence between the state structure in thermodynamic systems following Boltzmann's distribution and some information systems.
Of course there is; the layout of available 'boxes' follows the same law for both.

But the use of this is different and there are other laws which also apply to one or the other individually, which are different.

Can you point to an emergent phenomenon in information theory?
I can offer you one from the physical world that, as far as I know, has no counterpart in information theory.

You have answered my question rather briefly.

1 hour ago, Tristan L said:

The anomalous first ionisation energies of Nitrogen, Phosphorus and Arsenic are explained by QM, but as I said,

Can you point to a QM law applied to information theory to produce QM effects in information behaviour?

I am listening out for your detailed explanation of the anomalous ionisation energies and its alleged counterpart in information theory.

 

2 hours ago, Tristan L said:

I never said that there is only one rule applying to my partitions

I am sorry if I misunderstood you, but that was the impression I gained reading your previous posts.
If I did, please consider that others may do so as well.

However, if you now confirm your view that two structures may have some similarities or identities but also some differences, I can happily accept that.

My point then becomes: you cannot (necessarily) separate off the differences and declare them identical.
Though you may, of course, take advantage of the similarities in using one to model the other.

 

2 hours ago, Tristan L said:

You claimed that thermo-entropy and info-entropy behave differently and obey different laws and, if I understand you in the right way, that this is so only because they're just proportional and not identical. You still owe me an explanation for that.

Hopefully the above now puts my view on this into context.

Just as you have said that you didn't say there is only one law,

I didn't say that all the laws of thermodynamics are different, I claimed that some are the same and some are different.

Can you now offer me the same courtesy?

 

In relation to this, have you heard of the division into "The Relations of Constitution" and "The Conditions of Compatibility"?

If not it might pay you to study them a little. They are an important way of analysing things.

 

@Tristan L

and @joigus

1 hour ago, joigus said:

I think, it's not plausible if you take it seriously to make a model of what a gas in a box is going to actually do.

Surely that is the point of Thermodynamics - to model what a box of gas is going to do?

However I have not been following the runes example very closely, but perhaps I can offer my 'salt and pepper' set explanation of Leylines here?


 


45 minutes ago, studiot said:

 

and @joigus

Surely that is the point of Thermodynamics - to model what a box of gas is going to do?

However I have not been following the runes example very closely, but perhaps I can offer my 'salt and pepper' set explanation of Leylines here?
 

I never said that. Thermodynamics is about much more than that, of course. There are reversible processes, irreversible ones, and different interesting coefficients we've talked about before. But a gas is a good starting example to illustrate its power and generality.

I must confess, @studiot, that I wasn't following your arguments in this particular post as closely as I follow them in other posts, as I was following the OP's. And that's because the OP was rather lengthy already. I haven't even been able to follow all the details about the runes and the states based on them either --maybe lack of time and tiredness, among other things. I thought I understood more or less what the OP was trying to do and tried to warn them as to what I called the "subtle misconceptions" in their approach. I thought it was an honest attempt at understanding the subtle concepts underlying the formalism.

Any of your 'salt and pepper' explanations are welcome on my part. And even the ginger and lemon tea ones. ;)

 


9 hours ago, joigus said:

I never said that.

I don't understand.

I quoted directly from your post before your last one.

But my comment was a bit cheeky. I simply meant that thermodynamics was developed to enable us to predict (and therefore use) the time evolution of systems, including boxes of gas.

It was not a criticism.

:)

 

9 hours ago, joigus said:

Any of your 'salt and pepper' explanations are welcome on my part. And even the ginger and lemon tea ones. ;)

Salt and pepper are part of the necessary scientific apparatus for this.
 

Take a sheet of paper and shake out some ground pepper over it.

Mark where each grain falls on the paper.

Dust off the pepper and take a ruler.

You will find that a ruler line placed at random matches the position marks of some of the pepper dots.
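The same demonstration can be run numerically (a sketch of mine; the 40 grains and the collinearity tolerance are arbitrary choices):

    import random
    from itertools import combinations

    random.seed(1)
    grains = [(random.random(), random.random()) for _ in range(40)]

    def nearly_collinear(a, b, c, tol=1e-3):
        # Twice the triangle area; near zero means the three grains line up.
        area2 = abs((b[0] - a[0]) * (c[1] - a[1])
                    - (b[1] - a[1]) * (c[0] - a[0]))
        return area2 < tol

    triples = list(combinations(grains, 3))
    aligned = sum(nearly_collinear(a, b, c) for a, b, c in triples)
    print(aligned, "near-alignments among", len(triples), "triples")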

Leylines are not magic. (I believe you recently referred to someone's experiment with random chords)

:)


 

1 hour ago, studiot said:
11 hours ago, joigus said:

I never said that.

I don't understand.

I quoted directly from your post before your last one.

Yes, sorry. "That" is this:

11 hours ago, studiot said:

Surely that is the point of Thermodynamics - to model what a box of gas is going to do?

I never said that the whole point of thermodynamics is to model what a box of gas is going to do. That's what I thought you were pointing at. But thermodynamics is certainly powerful and sometimes you can predict behaviours in processes, define and measure coefficients, etc.

Nevertheless, sometimes when I start reading through the forum I'm a bit tired and there's a danger for me to misinterpret. And I don't see criticism --of ideas-- as a bad thing.

And as to the 'salt and pepper' I'm afraid I did it again. Now I understand what you meant, and that would be a good analogy for the runes IMO.

You --unwillingly, of course-- had me looking for 'salt-and-pepper' idiom definitions at some point. LOL

13 hours ago, joigus said:

I think, [the idea of runes is] not plausible if you take it seriously to make a model of what a gas in a box is going to actually do.

Here. That's what I said. Any comments, further qualifications or criticism welcome.


Answer to joigus:

23 hours ago, joigus said:

Exactly! Have you heard of Boltzmann brains?

Yes, I have, and just like there will be Boltzmann brains (which don't live long) after a long enough time, there will be Boltzmann galaxies (which can sustain life for a long time) after an even longer time. In fact, it is almost certain (probability = 1) that this will happen endlessly often, afaik.

 

23 hours ago, joigus said:

Well, I haven't shown you that your idea is wrong.

Right; I should have said that you had shown good reasons why my idea may well be wrong. I thought that I had written "likely", but apparently I was wrong.

 

23 hours ago, joigus said:

Open systems like those do not undergo Poincaré recurrences. They are the kind of systems that can hold something like life.

But if the Universe were a closed system with an endless past and an endless future, the structures which gave rise to them (solar nebulae? galaxies?) would undergo Poincaré recurrences, I think. However,

23 hours ago, joigus said:

But the universe is not like a closed box which thermalizes after some time. And I don't think the universe as a whole satisfies Poincaré recurrences.

 

23 hours ago, joigus said:

Does that help?

👍

 

Answer to studiot:

22 hours ago, studiot said:

Can you now offer me the same courtesy?

Yes, I think so. I think that I misinterpreted your argument and analogy. My new interpretation is as follows:

The one-to-one correspondence between the boards stands for partitional isomorphy, whereas the different laws of chess and checkers stand for the additional structure on the set Mi of microstates, e.g. the neighborhood-relation in the simple LED-system above. Many partitions which are partition-isomorphic to the thermo-partition aren't isomorphic to it in the stronger sense, which also takes the additional structure into account. For example, in the LED-system, the brightness-partition is strongly isomorphic to the brightness'-partition, but not to the merely partitionally isomorphic brighthood-partition.

If that is what you mean, I fully agree with you.

Regarding the units of entropy and the Boltzmann constant, I still cannot see how one quantity which is a constant multiple of another can obey laws different from it. Also, you can actually set the Boltzmann constant equal to 1; indeed, the Planck unit system sets universal constants like the Boltzmann constant to 1 or a numeric multiple thereof. But I now think that you meant something else, namely that the existence of units indicates that there is more structure on Mi than just the sethood of Mi. If that's what you meant, I agree.
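As a quick illustration (a Python sketch of my own; the multiplicities are made up), a constant multiple of the dimensionless entropy obeys the same laws, just in different units:

[code]
import math

# Sketch: thermodynamic entropy S = k_B * ln(W) is a constant multiple
# of the dimensionless entropy ln(W), so the two obey the same laws up
# to rescaling; setting k_B = 1 just changes the unit system.
k_B = 1.380649e-23                      # J/K (exact by SI definition)

def S_dimensionless(W):                 # entropy in nats
    return math.log(W)

def S_thermodynamic(W):                 # entropy in J/K
    return k_B * S_dimensionless(W)

# Doubling the multiplicity W adds the same step to both entropies,
# namely ln(2) and k_B * ln(2) respectively.
assert math.isclose(S_thermodynamic(2e6) - S_thermodynamic(1e6),
                    k_B * math.log(2))
[/code]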

 

22 hours ago, studiot said:

There is a one-to-one correspondence between the state structure of thermodynamic systems following Boltzmann's distribution and some information systems.
Of course there is; the layout of available 'boxes' follows the same law for both.

Do you only mean that they have the same partitional structure (number of macrostates, number of microstates in each macrostate), th.i. are partitionally isomorphic? If yes, then that's in accordance with my interpretation of your view above. However, if you mean that they are isomorphic in the strong sense, th.i. have the same number of microstates, the same corresponding microstate-transitions, and the same probabilities of corresponding transitions, then that contradicts my above interpretation, and I cannot follow you.

 

22 hours ago, studiot said:

Can you point to an emergent phenomenon in information theory?
I can offer you one from the physical world that, as far as I know, has no counterpart in information theory.

For an informational system which has exactly the same microstate structure as the physical world (transition-correspondence, the same probabilities, and all), the states of that info-system which correspond to the emergent and complex states of the physical world are the informational emergent phenomena you're looking for.

 

22 hours ago, studiot said:

However, if you now confirm your view that two structures may have some similarities or identities but also some differences, I can happily accept that.

My point then becomes: you cannot (necessarily) separate off the differences and declare the structures identical.

So long as the differences are, or result in, structural (th.i. substantial) differences, you indeed cannot equate the two things in question. However, if the two things have exactly the same structure, then you can regard them as essentially the same (though not selfsame, of course). For example, the set of all even positive whole numbers together with the function x -> x+2 has exactly the same structure as the set of all positive whole numbers with the function x -> x+1. Therefore, it's meaningless to ask which of the two is the set of "true" natural numbers.
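To make that example concrete, here is a minimal Python sketch (my own illustration, not anything from the thread):

[code]
# Sketch: f(n) = 2n is a structure-preserving bijection between
# (positive whole numbers, x -> x+1) and (even positive whole numbers,
# x -> x+2), so the two structures are essentially the same.
def f(n):
    return 2 * n              # the bijection

def succ_nat(x):
    return x + 1              # the step function on the naturals

def succ_even(x):
    return x + 2              # the corresponding step on the evens

# f carries each step on the naturals to the corresponding step on
# the evens: f(x + 1) == f(x) + 2 for every x checked.
assert all(f(succ_nat(n)) == succ_even(f(n)) for n in range(1, 1000))
[/code]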

 

22 hours ago, studiot said:

Can you point to a QM law applied to information theory to produce QM effects in information behaviour?

I am listening out for your detailed explanation of the anomalous ionisation energies and their alleged counterpart in information theory.

Perhaps with quantum info theory?

But as long as the quantum effects, e.g. the anomalously high ionisation energies, do not result in structural and informational differences, I don't really have to explain them. It's like with Turing machines: we don't have to care about which details (e.g. the number of symbols used) distinguish one universal Turing machine from another. As long as they've been shown to be UTMs, that's all we need to care about, since they can perfectly simulate one another.

 

22 hours ago, studiot said:

In relation to this, have you heard of the division into "The Relations of Constitution" and "The Conditions of Compatibility"?

If not, it might pay you to study them a little. They are an important way of analysing things.

Now I have, and I might look into the topic.

 

On 6/2/2020 at 3:35 PM, studiot said:

Suppose you have a sealed adiabatic tube containing an adiabatic frictionless piston dividing the tube into two chambers, A and B as shown.

Let both sides of the system contain an ideal gas.

Discuss the time evolution of the system in terms of entropy.

With that, you've brought a really interesting problem to my attention. I guess that what you want to say is that entropy alone doesn't give us enough info to solve it; we need additional details about the physical world. Is that right?

If so, then this shows that these details have a bearing on the informational structure of our world. When I started this thread, I originally wanted to bring up the following issue, but decided it was too far off-topic; apparently it is not. The issue is this:

Actually, it doesn't really make sense to assign probabilities to states. It only makes sense to assign probabilities to state-transitions, or to talk about conditional probabilities (which is basically the same thing, I think, though perhaps a bit broader). Therefore, since entropy assumes that states have probabilities, it might not capture all the informational structure of the system. Perhaps the piston problem shows that there is more to the informational structure of the physical world than state-probabilities.
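To make that concrete, here is a Python sketch with made-up transition probabilities (not a model of any particular physical system): the transitions carry the probabilities, and what looks like a "state probability" only emerges as the stationary distribution of the transition matrix.

[code]
import numpy as np

# Hypothetical 3-microstate system: probabilities attach to transitions
# (a Markov chain), not to the states themselves.
P = np.array([[0.5, 0.4, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])         # P[i, j] = Pr(next = j | now = i)

# "State probabilities" then emerge as the stationary distribution pi
# with pi @ P = pi, th.i. the left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()                          # normalise to a distribution

assert np.allclose(pi @ P, pi)          # for this P: pi = [0.25, 0.5, 0.25]
[/code]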

Anyway, the piston-problem has led me to this very interesting article: https://arxiv.org/ftp/physics/papers/0207/0207073.pdf

 

21 hours ago, joigus said:

the OP was rather lengthy already

Indeed. I hope that hasn't taken so much time and made so much entropy that it has hastened the coming of the heat death 🥵.


3 minutes ago, Tristan L said:

But if the Universe were a closed system with an endless past and an endless future, then the structures which give rise to such open systems (solar nebulae? galaxies?) would be Poincaré recurrences, I think. However,

Yes, that's true. It's a theorem. You can't argue with a theorem. ;)


1 hour ago, Tristan L said:

With that, you've brought a really interesting problem to my attention. I guess that what you want to say is that entropy alone doesn't give us enough info to solve it; we need additional details about the physical world. Is that right?

I'll just answer this one for now, since it is an example of Carathéodory's formulation of the Second Law. You have correctly anticipated part of my answer.

I do think our discussion is beginning to get somewhere now. +1

Quote

Carathéodory

In the neighbourhood of any system in a state of equilibrium there exist states which are inaccessible by adiabatic processes alone.

Perhaps you have come across this?

Anyway, here is an analysis of the system.

Suppose that when the piston is at the position shown, the system is in equilibrium, so the left-hand chamber's equilibrium volume is VA and the right-hand chamber's is VB.

For the subsystems (partitions in your parlance), nearby states have volumes (VA + dVA) and (VB + dVB).
So their entropy changes are


[math]d{S_A} = \frac{{d{E_A}}}{{{T_A}}} + \frac{{{P_A}}}{{{T_A}}}d{V_A}[/math]

and


[math]d{S_B} = \frac{{d{E_B}}}{{{T_B}}} + \frac{{{P_B}}}{{{T_B}}}d{V_B}[/math]

But, since each chamber is adiabatic and the frictionless piston keeps the pressures equal ([math]{P_A} = {P_B} = P[/math]),


[math]d{E_A} =  - Pd{V_A}[/math]


and


[math]d{E_B} =  - Pd{V_B}[/math]

so


[math]d{S_A} = d{S_B} = 0[/math]


Therefore


[math]dS\left( {system} \right) = d{S_A} + d{S_B} = 0[/math]


So the entropy is the same in all positions of the piston.
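As a quick numeric cross-check of that conclusion (a Python sketch assuming one mole of a monatomic ideal gas; the numbers are illustrative only):

[code]
import math

# Sketch: per mole of a monatomic ideal gas, S(T, V) = Cv*ln(T) + R*ln(V)
# up to an additive constant, and a reversible adiabatic change keeps
# T * V**(gamma - 1) constant, so S is the same at every piston position.
R = 8.314                                # J/(mol K)
Cv = 1.5 * R                             # monatomic: Cv = (3/2) R
gamma = (Cv + R) / Cv                    # = 5/3

def S(T, V):                             # entropy up to a constant
    return Cv * math.log(T) + R * math.log(V)

T1, V1 = 300.0, 1.0                      # initial state of one chamber
for V2 in (0.5, 0.8, 1.3, 2.0):          # various piston positions
    T2 = T1 * (V1 / V2) ** (gamma - 1)   # reversible adiabatic condition
    assert abs(S(T2, V2) - S(T1, V1)) < 1e-9
[/code]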

Thank you for the link; I have downloaded the paper for later reading.

 

I am glad you mentioned Turing because his machine is the crux of the difference between information entropy and the entropy of physical systems.
So I will come back to this.

 

 

 

 


8 hours ago, studiot said:

For the subsystems (partitions in your parlance), nearby states have volumes (VA + dVA) and (VB + dVB). [math]d{S_A} = \frac{{d{E_A}}}{{{T_A}}} + \frac{{{P_A}}}{{{T_A}}}d{V_A}[/math]

Actually, that's not what I mean by "partitions". A partition P of the set Mi of all possible microstates is a way to split it up into non-empty, pairwise disjoint subsets whose union is Mi, and the elements of P are the macrostates of/w.r.t. P. For example, if we have exactly six possible microstates 1, 2, 3, 4, 5, 6, the set of microstates is Mi = {1, 2, 3, 4, 5, 6}, the set {{1}, {2, 3}, {4, 5, 6}} is one of the partitions of Mi, and {2, 3} is one of the macrostates w.r.t. that partition.

The thermo-partition is the set of all thermodynamic macrostates, th.i. the way in which thermodynamics groups microstates.

Mi usually carries a structure, so that we're dealing with (Mi, STRUCTURE) and not just Mi as an unstructured set. Partitions P, Q of Mi are isomorphic if there is an automorphism f of (Mi, STRUCTURE) such that we get Q from P by applying f to each of the elements (microstates) of the elements (macrostates) of P. Partitional isomorphy is isomorphy of partitions if STRUCTURE is trivial (th.i. if we deal with Mi as an unstructured set), so that any permutation of Mi is an automorphism as far as partitional isomorphy is concerned.

That's at least how I use those words.
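For concreteness, here is a minimal Python sketch of those definitions (the particular permutation below is an arbitrary choice of mine):

[code]
# Sketch of the definitions above: Mi = {1,...,6}, one partition of it,
# and an isomorphic partition obtained by applying a permutation of Mi.
Mi = {1, 2, 3, 4, 5, 6}
Ma = [{1}, {2, 3}, {4, 5, 6}]             # a partition; {2, 3} is a macrostate

# Sanity check: non-empty, pairwise disjoint, union = Mi.
assert all(Ma) and set().union(*Ma) == Mi and sum(map(len, Ma)) == len(Mi)

# With trivial STRUCTURE, every permutation f of Mi is an automorphism;
# applying f elementwise to each macrostate yields a partitionally
# isomorphic partition Q.
f = {1: 4, 2: 5, 3: 6, 4: 1, 5: 2, 6: 3}  # an arbitrary permutation of Mi
Q = [{f[mi] for mi in ma} for ma in Ma]   # Q = [{4}, {5, 6}, {1, 2, 3}]

# Same partitional structure: the same multiset of macrostate sizes.
assert sorted(map(len, Q)) == sorted(map(len, Ma))
[/code]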


5 hours ago, Tristan L said:

Actually, that's not what I mean by "partitions". A partition P of the set Mi of all possible microstates is a way to split it up into non-empty, pairwise disjoint subsets whose union is Mi, and the elements of P are the macrostates of/w.r.t. P. For example, if we have exactly six possible microstates 1, 2, 3, 4, 5, 6, the set of microstates is Mi = {1, 2, 3, 4, 5, 6}, the set {{1}, {2, 3}, {4, 5, 6}} is one of the partitions of Mi, and {2, 3} is one of the macrostates w.r.t. that partition.

The thermo-partition is the set of all thermodynamic macrostates, th.i. the way in which thermodynamics groups microstates.

Mi usually carries a structure, so that we're dealing with (Mi, STRUCTURE) and not just Mi as an unstructured set. Partitions P, Q of Mi are isomorphic if there is an automorphism f of (Mi, STRUCTURE) such that we get Q from P by applying f to each of the elements (microstates) of the elements (macrostates) of P. Partitional isomorphy is isomorphy of partitions if STRUCTURE is trivial (th.i. if we deal with Mi as an unstructured set), so that any permutation of Mi is an automorphism as far as partitional isomorphy is concerned.

That's at least how I use those words.

 

I am not disagreeing, and never have disagreed, with your outline of basic mathematical set theory in respect of the word 'partition'.
This is one reason why I have tried to avoid using the word, since its use in classical thermodynamics is somewhat different.

However the devil is in the detail as always.

You are quite right to identify STRUCTURE and to mark out differences between sets with the same Mi.
This is the point I have been trying to make.

However you have missed one important point.

Mathematical partitioning of Mi is based on equipartition and implicitly assumes the 'equipartition theorem' of thermodynamics.
You have also mentioned disjoint partitions, which is important in the mathematical statistics of this.

STRUCTURE, as observed in the physical world, creates some stumbling blocks to this.

I was hoping to introduce this in a more measured way, but you have jumped the gun.

As I said before, we both mentioned Turing, so suppose the tape running through the machine includes the following sequence: ... 1, 1, 1, 0, 0, 0 ...

STRUCTURE includes the possibility that the first 1 in that sequence can affect the entry currently under the inspection head of the machine.
Disjointness requires that it cannot.

So a Turing machine cannot analyse that situation.
Nor can it arise in information technology, whose partitions are disjoint.

The anomalous behaviour of nitrogen etc. is an example of this, as already noted.

The interesting behaviour of nitric oxide, on the other hand, provides an example of genuine statistical two-state system behaviour.

 

Finally, I do hope you are not trying to disagree with Carathéodory.

However you come at it, statistically or classically, you must arrive at the same set of equations.
And you seem to be disagreeing with my classical presentation because it is much shorter than the same conclusion reached in the statistical paper you linked to?

 

