Tristan L


  1. I didn't and don't want to do that; rather, I only wanted and still want to make sure that there are no misunderstandings before I answer your other points and go on with the discussion. The important thing is that in this whole thread, I have only ever talked about partitioning the set of microstates into macrostates, not partitioning the system into subsystems. However, you seem to imply that I have done the latter by saying In reality, I have only ever talked about partitioning the phase space. In what way does it do that? Does that mean that without the equipartition theorem, one microstate could belong to more than one macrostate? What exactly do you mean by that? All members of a partition are pairwise disjoint by definition. Being sets, some partitions P, Q are disjoint (share no common subsets of the ground-set), while others are not. Partitions of what? Of course not, but as I said, I want to do away with any misunderstandings on my or your part before talking about your other points, including Caratheodory. Actually, I linked to the paper mainly because I find it interesting that there may be a way to outsmart the Second Law 😁.
  2. Actually, that's not what I mean by "partitions". A partition P of a set Mi of all possible microstates is a way to split it up into not-empty, pairwise disjoint subsets whose union is Mi, and the elements of P are the macrostates of/w.r.t. P. For example, if we have exactly six possible microstates 1, 2, 3, 4, 5, 6, the set of microstates is Mi = {1, 2, 3, 4, 5, 6}, the set {{1}, {2, 3}, {4, 5, 6}} is one of the partitions of Mi, and {2, 3} is one of the macrostates w.r.t. that partition. The thermo-partition is the set of all thermodynamic macrostates, th.i. the way in which thermodynamics groups microstates. Mi usually carries a structure, so that we're dealing with (Mi, STRUCTURE) and not just Mi as an unstructured set. Partitions P, Q of Mi are isomorphic if there is an automorphism f of (Mi, STRUCTURE) such that we get Q from P by applying f to each of the elements (microstates) of the elements (macrostates) of P. Partitional isomorphy is isomorphy of partitions if STRUCTURE is trivial (th.i. if we deal with Mi as an unstructured set), so that any permutation of Mi is an automorphism as far as partitional isomorphy is concerned. That's at least how I use those words.
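The definition in this post can be checked mechanically. Below is a minimal Python sketch (the helper name `is_partition` is mine, not from the thread) testing the three defining properties — non-empty blocks, pairwise disjointness, union equal to the ground set — against the example Mi = {1, 2, 3, 4, 5, 6}:

```python
from itertools import combinations

def is_partition(blocks, ground):
    """Check the defining properties of a partition: blocks are
    non-empty, pairwise disjoint, and their union is the ground set."""
    blocks = [set(b) for b in blocks]
    if any(not b for b in blocks):
        return False                      # an empty block is forbidden
    if any(a & b for a, b in combinations(blocks, 2)):
        return False                      # blocks must be pairwise disjoint
    return set().union(*blocks) == set(ground)

Mi = {1, 2, 3, 4, 5, 6}
P = [{1}, {2, 3}, {4, 5, 6}]
print(is_partition(P, Mi))                 # True
print(is_partition([{1, 2}, {2, 3}], Mi))  # False: {1,2} and {2,3} overlap
```

So {{1}, {2, 3}, {4, 5, 6}} is indeed one of the partitions of Mi, and {2, 3} is one of its macrostates.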
  3. Answer to joigus: Yes, I have, and just like there will be Boltzmann brains (which don't live long) after a long enough time, there will be Boltzmann galaxies (which can sustain life for a long time) after an even longer time. In fact, it is almost certain (probability = 1) that this will happen endlessly often, afaik. Right; I should have said that you had shown good reasons why my idea may well be wrong. I thought that I had written "likely", but apparently I was wrong. But if the Universe were a closed system with an endless past and an endless future, the structures which gave rise to them (solar nebulas? galaxies?) would be Poincaré recurrences, I think. However, 👍

Answer to studiot: Yes, I think so. I think that I misinterpreted your argument and analogy. My new interpretation is as follows: The one-to-one correspondence between the boards stands for partitional isomorphy, whereas the different laws of chess and checkers stand for the additional structure on the set Mi of microstates, e.g. the neighborhood-relation in the simple LED-system above. Many partitions which are partitionally isomorphic to the thermo-partition aren't isomorphic to it in the stronger sense, which also takes the additional structure into account. For example, in the LED-system, the brightness-partition is strongly isomorphic to the brightness'-partition, but not to the merely partitionally isomorphic brighthood-partition. If that is what you mean, I fully agree with you. Regarding the units of entropy and the Boltzmann constant, I still cannot see how one quantity which is a constant multiple of another can obey different laws than it. Also, you can actually set the Boltzmann constant equal to 1, and in fact, the Planck unit system equates universal constants like the Boltzmann constant with 1 or a numeric multiple thereof. But I now think that you meant something else, namely that the existence of units indicates that there is more structure on Mi than just the sethood of Mi. 
If that's what you meant, I agree. Do you only mean that they have the same partitional structure (number of macrostates, number of microstates in each macrostate), th.i. are partitionally isomorphic? If yes, then that's in accordance with my interpretation of you above. However, if you mean that they are isomorphic in the strong sense, th.i. have the same number of microstates, the same corresponding microstate-transitions, and the same probabilities of corresponding transitions, then that contradicts my above interpretation, and I cannot follow you. For an informational system which has exactly the same microstate structure as the physical world (transition-correspondence, same probabilities, and all), the states of that info-system which correspond to the emergent and complex states of the physical world are the informational emergent phenomena you're looking for. So long as the differences are or result in structural (th.i. substantial) differences, you can indeed not equate the two things in question. However, if the two things have exactly the same structure, then you can regard them as essentially the same (though not selfsame, of course). For example, the set of all even positive whole numbers together with the x -> x+2 function has exactly the same structure as the set of all positive whole numbers with the x -> x+1 function. Therefore, it's meaningless to ask which of the two are the "true" natural numbers. Perhaps with quantum info theory? But as long as the quantum effects, e.g. the anomalously high ionisation energies, do not result in structural and informational differences, I don't really have to explain them. It's like with Turing machines; we don't have to care about what details (e.g. number of symbols used) distinguish one universal Turing machine from another. As long as they've been shown to be UTMs, that's the only thing we have to care about since they can perfectly simulate each other. Now I have, and I might look into the topic. 
With that, you've brought a really interesting problem to my attention. I guess that what you want to say is that entropy alone doesn't give us enough info to solve it; we need additional details about the physical world. Is that right? If so, then this shows that these details have a bearing on the informational structure of our world. When I started this thread, I originally wanted to bring up the following issue, but decided it was too far off-topic; apparently it isn't. The issue is this: Actually, it doesn't really make sense to assign probabilities to states. It only makes sense to assign probabilities to state transitions or talk about conditional probabilities (which is basically the same, I think, though perhaps a bit broader). Therefore, since entropy assumes that states have likelihoods, it might not grasp all the informational structure of the system. Perhaps the piston problem shows that there is more to the informational structure of the physical world than state-probabilities. Anyway, the piston-problem has led me to this very interesting article: https://arxiv.org/ftp/physics/papers/0207/0207073.pdf Indeed. I hope that hasn't taken so much time and made so much entropy that it has hastened the coming of the heat death 🥵.
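The even-numbers example above can be made concrete: the map f(x) = x/2 carries the structure (evens, x -> x+2) onto (naturals, x -> x+1), which is why neither set is "the true" natural numbers. A small Python check over an initial segment (the names `f`, `succ_evens`, `succ_nats` are mine, for illustration only):

```python
# f: even positive whole numbers -> positive whole numbers
f = lambda x: x // 2
succ_evens = lambda x: x + 2   # the x -> x+2 function on the evens
succ_nats  = lambda n: n + 1   # the x -> x+1 function on the naturals

evens = range(2, 202, 2)       # the first 100 even positive whole numbers

# f is a bijection onto {1, ..., 100} here, and it respects the structure:
# applying x -> x+2 and then f is the same as applying f and then x -> x+1
assert sorted(f(x) for x in evens) == list(range(1, 101))
assert all(f(succ_evens(x)) == succ_nats(f(x)) for x in evens)
print("the two structures match")
```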
  4. Answer to joigus: From what you've said, I think that I finally get where the problem lies: The set of all possible microstates isn't a simple unstructured set Mi, but a highly structured set (Mi, (STRUCTURE, e.g. relations and functions)). Partitions Ma1, Ma2 are isomorphic if and only if they have a partition-isomorphism (are isomorphic as partitions) and that partition-isomorphism respects STRUCTURE. Also, only partitions which respect STRUCTURE are eligible as partitions into macrostates. For example, if STRUCTURE is made up of a linear order on Mi, only not-crossing partitions are allowed. In the case of our simple system, there is a "neighborhood"-relation on the set of microstates, which tells us which state can become which other states with only one LED turning on or off. The brightness-partition, the brightness'-partition, and the rest of the sixteen partitions which we get from the brightness-partition by defining for each microstate (f, u, Þ, a) a new brightness-measure b_(f, u, Þ, a) through b_(f, u, Þ, a)(x, y, z, w) := (f*x - (1-f)*(1-x), u*y - (1-u)*(1-y), Þ*z - (1-Þ)*(1-z), a*w - (1-a)*(1-w)), are isomorphic to each other in the strong sense that they and their isomorphisms respect the neighborhood-relation. However, simply exchanging e.g. (1, 1, 1, 0) with (0, 0, 0, 1) in the brightness-partition yields a forbidden partition (call it the partition in terms of "brighthood"), since the other microstates (1, 1, 0, 1), (1, 0, 1, 1), (0, 1, 1, 1) in the same brighthood-macrostate as (0, 0, 0, 1) only differ from the one and only brighthood=4 microstate (1, 1, 1, 1) by one LED, but (0, 0, 0, 1) differs from it by three LEDs. Likewise, the many partitions which are isomorphic to the thermo-partition in the partition-sense don't respect the additional structure (of which there is a lot) given by the things which you've mentioned. If I understand you in the right way, the one and only partition respecting all that additional structure is the thermo-partition. 
Is that right? They mean what they mean - sets of microstates, and they are sets of microstates allowed by the known laws of physics. Perhaps with a rune-shaped "sock" woven out of thin threads which tear when strong wind blows against them. As soon as an unusually big number of gas-particles assemble inside the sock, they will cause an outflowing wind that rips the sock apart. Yes, I think that you're right. Your three points have been important for my above analysis. Regarding the time-stopping, I think that I now get what you mean: There are vast swathes of time during which the thermo-entropy is maximal or almost maximal (after all, it's always slightly and randomly fluctuating), but since nothing interesting happens during these times, there's nothing and no one that observes them, so in effect, they're not-existent. So, as soon as life becomes impossible due to too high entropy, the Poincaré Recurrence Time will pass as if in the blink of an eye since no one is there to observe it, and after the Universe has become interesting again, life can again take hold. So though you've shown that my idea of the Universe being interesting much of the time is wrong, you've also shown that the Universe is actually interesting most of the time since from a macroscopic POV, the boring times don't exist. Am I right? But after a very, very long time (which is nonetheless puny compared to Graham's number of years, for instance), everything will be as it once was by the Poincaré Recurrence Theorem. Therefore, time (in the macroscopic sense) will come back one day, and will in fact come back endlessly often. By the same theorem, runes will spontaneously appear in the gas, but it will take much longer than the age of the Universe, so we can't expect to see something like that happen in a practical experiment. 
But on the whole, the Universe will be interesting for an infinitely long macroscopic time (which isn't continuous, of course), and also boring for an infinitely long fundamental (but not macroscopic) time. Of course, that doesn't take the evolution of space-time itself into account (e.g. expansion, dark energy asf.). Your idea that time doesn't macroscopically exist when entropy is maximal or near-maximal has actually proven quite hope-giving, I hope.

Answer to studiot: Actually, I'm bent on finding the truth, and I think that I might've come pretty close with my above analysis in this post. You claimed that thermo-entropy and info-entropy behave differently and obey different laws and, if I understand you in the right way, that this is so only because they're just proportional and not identical. You still owe me an explanation for that. Your likening of the Boltzmann constant to the constant of proportionality between stress and strain is not valid since the former is a universal constant whereas the latter is not. After all, we could measure temperature in joules, and then the Boltzmann constant would have no units. I never said that there is only one rule applying to my partitions. I only wondered whether there is only one partition which is isomorphic to the thermo-partition. In a purely partitional sense, that is certainly not the case, but my analysis above, based partly on what joigus has said, suggests that there may indeed be no other partition which is isomorphic to the thermo-partition in the sense of respecting the additional structure. The anomalous first ionisation energies of Nitrogen, Phosphorus and Arsenic are explained by QM, but as I said, I never said that one law was enough for explaining everything. I was only talking about isomorphy. This discussion is really interesting.

Question for joigus and studiot: Even if the thermo-partition is the only one in its equivalence class w.r.t. "strong" isomorphy, is it really the only interesting one? 
Can we really be sure e.g. that no extremely complex computations are actually going on in the seemingly dull and boring air around us? After all, if Alice and Bob send each other encrypted messages, it looks like nonsense to us, but they may still be having a very meaningful discussion about statistical thermodynamics.
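The "forbidden brighthood-partition" argument in this post can be verified directly: under the neighborhood-relation (two microstates are neighbors iff they differ in exactly one LED), every brightness-3 microstate neighbors the unique brightness-4 state, but the swapped-in state (0, 0, 0, 1) does not. A small Python sketch (the helper name `hamming` is mine):

```python
def hamming(a, b):
    """Number of LEDs in which two microstates differ."""
    return sum(x != y for x, y in zip(a, b))

top = (1, 1, 1, 1)                             # the unique brightness-4 state
brightness3 = [(0, 1, 1, 1), (1, 0, 1, 1), (1, 1, 0, 1), (1, 1, 1, 0)]
# every brightness-3 state is one LED-flip away from the brightness-4 state
print([hamming(s, top) for s in brightness3])  # [1, 1, 1, 1]

# the "brighthood" swap replaces (1, 1, 1, 0) by (0, 0, 0, 1) in this block
brighthood3 = [(0, 1, 1, 1), (1, 0, 1, 1), (1, 1, 0, 1), (0, 0, 0, 1)]
print([hamming(s, top) for s in brighthood3])  # [1, 1, 1, 3]
```

The distance 3 in the last output is exactly the violation of the neighborhood structure described above.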
  5. First Answer to studiot: You're welcome. I'm sorry to have to point out that apparently, you do not understand your own analogy well enough. Therefore, let me make it clearer to you. Your flats being in a one-to-one correspondence with your pigeonholes is analogous to two games being isomorphic to each other, which is in turn like two partitions of the set of microstates being isomorphic to each other. Chess and checkers, however, are not isomorphic to each other; there is no one-to-one correspondence between their possible game configurations and allowed moves. That's why they work differently. Regarding the partitions, there are some that aren't isomorphic to each other, and others that are. The thermodynamic partition is isomorphic to every partition that we get by taking the thermo-partition and then applying an arbitrary permutation of the microstates. Not all of these partitions are distinct, but there are still many partitions isomorphic to the thermo-partition but still distinct from it. So, there are many measures of entropy equivalent to thermo-entropy but distinct from it, and the system will much more often be 1. in a state of low entropy w.r.t. some partition isomorphic to the thermo-partition than 2. in a state of low entropy w.r.t. the thermo-partition itself. Thermodynamic entropy is just information entropy w.r.t. the thermo-partition multiplied by the Boltzmann constant, afaik. They are not only defined in terms of isomorphic partitions, but in terms of one and the same partition. One is just the other multiplied by a constant. Could you please tell me how you supposedly get conflicting results with them? As I've already said, chess and checkers are not isomorphic, unlike thermodynamic entropy and information entropy w.r.t. the thermo-partition. Thermodynamic entropy vs. information entropy w.r.t. the thermo-partition is like playing chess on some board with chess-pieces of a certain size vs. 
playing chess on a physically bigger board with bigger chess pieces, but with the number of squares and everything else kept the same. Therefore, we can safely equate the two and just talk of thermo-entropy, and to make the math a bit easier, we'll not use unneeded units that distract from the essence. I've already said why info-entropy w.r.t. the thermo-partition and thermo-entropy are essentially the same (not just isomorphic) and why they're very different from the Ch's, which aren't even isomorphic. But I ask you again: Since when are info-entropy w.r.t. the thermo-partition and thermo-entropy subject to different laws? Please do tell.

Answer to joigus: Of course I'm aware of that. Also, don't get me wrong and think that I want to be right in order to be right. I want to be right since I don't like the heat death of the Universe at all. But of course, I won't let that make me bend results. From a purely scientific point of view, my being right and my being wrong are indeed both interesting, but from a life-loving perspective, I really do hope to be right. I find your thoughts very interesting. Actually, the number of partitions of a set with n elements is the Bell number Bn, and the sequence of Bell numbers does grow quite quickly. So if we have n microstates, there are Bn ways to define macrostates. So, while for a particular kind of choosing a partition, the number of macrostates in that partition might get overwhelmed by the number of microstates, for any number of microstates, there is a way of partitioning them such that the number of macrostates in that partition is not overwhelmed. Now, of course, not all partitions are isomorphic, but even the number of partitions isomorphic to some given partition is very big in many cases. I've calculated (hopefully right) that for any positive whole number k, sequence (l_1, ... 
, l_k) of positive whole numbers, and strictly rising sequence (m_1, ... , m_k) of positive whole numbers, there are (l_1*m_1+...+l_k*m_k)! / ( m_1!^l_1 * l_1! * ... * m_k!^l_k * l_k! ) ways to partition a set with n = l_1*m_1+...+l_k*m_k members into l_1 sets of m_1 elements each, ..., and l_k sets of m_k elements each. Here, k, (l_1, ... , l_k) and (m_1, ... , m_k) uniquely determine an equivalence class of isomorphic partitions, if I'm right. This result is consistent with the first few Bell numbers. Thus, since the thermo-partition isn't trivial (k=1, l_1=1, m_1=n or k=1, l_1=n, m_1=1), there are many partitions isomorphic but not identical to the thermo-partition, and their number likely does grow humongously as the number of microstates rises. Take the following very simple system, in which we'll assume time is discrete to make it even simpler: We have n bits which are either 1 or 0. In each step and for each bit, there's a probability of p that the bit will change. The bits change independently of each other. Let's interpret the bits as LEDs of the same brightness which are either on or off. The microstates of the system are the ways in which the individual LEDs are on or off. We can then define two microstates as belonging to the same macrostate if they both have the same overall brightness. If we take n = 4, for example, the microstates are (0, 0, 0, 0), (0, 0, 0, 1), ..., (1, 1, 1, 1), sixteen in total. The brightness-macrostates are {(0, 0, 0, 0)} (brightness = 0, probability = 1/16), {(0, 0, 0, 1), (0, 0, 1, 0), (0, 1, 0, 0), (1, 0, 0, 0)} (brightness = 1, probability = 4/16), {(0, 0, 1, 1), (0, 1, 0, 1), (1, 0, 0, 1), (0, 1, 1, 0), (1, 0, 1, 0), (1, 1, 0, 0)} (brightness = 2, probability = 6/16), {(0, 1, 1, 1), (1, 0, 1, 1), (1, 1, 0, 1), (1, 1, 1, 0)} (brightness = 3, probability = 4/16), {(1, 1, 1, 1)} (brightness = 4, probability = 1/16). 
Simple calculations show us that the system will on average evolve from brightness 0 or brightness 4 (low probability, low entropy) to brightness 2 (high probability, high entropy). However, when the system is in the brightness-macrostate of brightness 2, which has maximum brightness-entropy, e.g. by being in microstate (0, 1, 1, 0), we can simply choose a different measure of entropy which is low by choosing the partition into brightness'-macrostates, where the brightness' of a microstate (x, y, z, w) = the brightness of the microstate (x, 1-y, 1-z, w): {(0, 1, 1, 0)} (brightness' = 0, probability = 1/16), {(0, 1, 1, 1), (0, 1, 0, 0), (0, 0, 1, 0), (1, 1, 1, 0)} (brightness' = 1, probability = 4/16), {(0, 1, 0, 1), (0, 0, 1, 1), (1, 1, 1, 1), (0, 0, 0, 0), (1, 1, 0, 0), (1, 0, 1, 0)} (brightness' = 2, probability = 6/16), {(0, 0, 0, 1), (1, 1, 0, 1), (1, 0, 1, 1), (1, 0, 0, 0)} (brightness' = 3, probability = 4/16), {(1, 0, 0, 1)} (brightness' = 4, probability = 1/16). The system will also tend to change from low brightness'-entropy to high brightness'-entropy, but then I can choose yet another measure of brightness, brightness'', according to which the entropy is low. The thing is that at any time, I can choose a partition of the set of microstates into macrostates which is isomorphic to the brightness-partition and for which the current microstate has minimum entropy. But anyway, the system will someday return to the low brightness-entropy state of brightness=4. Since it is so simple, we can even observe that spontaneous fall in brightness-entropy. Does that mean for our simple system above that the microstates (0, 0, 1, 1), (0, 1, 0, 1), (1, 0, 0, 1), (0, 1, 1, 0), (1, 0, 1, 0), and (1, 1, 0, 0) somehow magically stop the flow of time? Entropy is emergent, right? So, how can it stop something as fundamental as time? You yourself said that microscopic changes will go on happening, which means that there must always be time. 
By the Poincaré recurrence theorem and the fluctuation theorem, the system will almost certainly go back to its original state of low entropy. It just needs a very, very long time to do that. After all, the Second Law isn't some law of magic which says that a magical property called entropy defined in terms of some magically unique partition must always rise, right? And spontaneous entropy falls have been observed in very small systems, haven't they? Again, I find your ideas very stimulating and fruitful.

Second Answer to studiot: This is the unanswered question. +1 No longer. See my answer to that above.
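The n = 4 LED-system in this post is small enough to enumerate outright. The sketch below (Python; the grouping helper `partition_by` is my name) rebuilds both the brightness-partition and the brightness'-partition and confirms the macrostate probabilities claimed above, including that the microstate (0, 1, 1, 0) sits in a 6/16 brightness-macrostate but a 1/16 brightness'-macrostate:

```python
from itertools import product
from collections import defaultdict

microstates = list(product((0, 1), repeat=4))   # all 16 microstates

def partition_by(key):
    """Group the microstates into macrostates by a macro-variable."""
    macro = defaultdict(list)
    for mi in microstates:
        macro[key(mi)].append(mi)
    return macro

brightness = lambda mi: sum(mi)
# brightness' of (x, y, z, w) := brightness of (x, 1-y, 1-z, w)
brightness_p = lambda mi: mi[0] + (1 - mi[1]) + (1 - mi[2]) + mi[3]

B  = partition_by(brightness)
Bp = partition_by(brightness_p)

# both partitions have macrostate sizes 1, 4, 6, 4, 1 (they are isomorphic)
print([len(B[b]) for b in range(5)])    # [1, 4, 6, 4, 1]
print([len(Bp[b]) for b in range(5)])   # [1, 4, 6, 4, 1]

mi = (0, 1, 1, 0)
print(len(B[brightness(mi)]) / 16)      # 0.375  = 6/16: brightness-entropy high
print(len(Bp[brightness_p(mi)]) / 16)   # 0.0625 = 1/16: brightness'-entropy low
```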
  6. Just a quick correction of my correction: I forgot "the logarithm of" after "proportional to".
  7. The units are only due to a constant of proportionality (the Boltzmann constant). However, in essence, every entropy is a number defined in terms of probability, including both thermodynamic entropy (defined statistical-mechanically and without unneeded constants of proportionality) and "runish entropy". What's essential about thermodynamic entropy is that it's defined in terms of thermodynamic macrostates. "Rune entropy", on the other hand, is defined in terms of how well the particles spell out runes. Of course I'd rather live in a flat, but that's only because I'm a human and not a pigeon. Translating this metaphor, it means that I'd rather live in a universe with low thermodynamic entropy rather than low runic entropy, but only since I'm a thermodynamic lifeform and not a runish one. Maybe at a time in the far future when thermodynamic entropy is high but runish entropy is low, there will be an intelligent runish lifeform asking another one whether it likes to live in the low-rune-entropy universe it knows or is so unreasonable as to want to live in a universe with low thermodynamic entropy. The thermodynamic world is indeed very different from the runish world, but I see no reason for thermo-chauvinism. Low thermo-entropy is good for thermo-life, and low rune-entropy is good for runish life. Alice and Bob can have very different machines, where Alice's is built such that it uses a pressure difference between two chambers, and Bob's machine is built such that it extracts useful work from the Fehu-state I described above, e.g. by having tiny Fehu-shaped chambers in it or something. It's just that in our current world, Alice's machine is much more useful, as thermo-entropy is low while rune-entropy is high at the current time. Isn't that right? Yeah, that's right. My bad. I should have said that if all microstates are equally likely, the entropy of a macrostate is proportional to the probability of that macrostate. 
Corresponding changes have to be made throughout my text. However, that doesn't change anything about its basic tenets, regardless of whether the microstates are equally likely or not, does it? I hope and think not, but please correct me if I'm wrong. Exactly. My point is that if I choose, say, having the particles arranged so as to spell out runes rather than thermodynamic properties like pressure, temperature and volume, I get a very different entropy measure and thus also a different state of maximal entropy. So, rune-entropy can be low while thermo-entropy is high. Doesn't that mean that runish life is possible in a universe with low rune entropy? Why should e.g. temperature be more privileged than rune-spelling? Yes, I fully agree. On average, thermo-entropy increases with time, and when it has become very high, it will take eons to spontaneously become low again. The same thing goes for rune-entropy. However, since there are so humongously many measures of entropy, there will always be at least one that falls and one that is very low at any time. Therefore, life will always be possible. When thermodynamic entropy becomes too high, thermo-life stops, but then, e.g. rune-entropy is low, so rune-life starts. When rune-entropy has become too high, runish life ends and is again replaced by another shape of life. My point is that rather than being interesting and life-filled for very short whiles separated by huge boring lifeless intervals, the universe (imagine it to be a closed system, for expansion and similar stuff is another topic) will be interesting and life-filled for much of the time. It's not life itself that needs eons to come again, it's only each particular shape of life that takes eons to come again. That's my point, which I hope is right. Perhaps some of the entropy measures aren't as good as others, but is thermo-entropy really better than every other measure of entropy? As far as I can see, I think they are. Yes, it certainly does!
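The claim that there will always be at least one entropy measure that is very low at any time can be illustrated on the 4-LED system from earlier in the thread: XOR-ing each LED with a fixed mask is a bijection of the microstate set, so each mask yields a partition isomorphic to the brightness-partition, and for every current microstate some mask puts it into a singleton (probability 1/16, minimum-entropy) macrostate. A hedged Python sketch (`masked_brightness` is my name, not from the thread):

```python
from itertools import product

microstates = list(product((0, 1), repeat=4))

def masked_brightness(mi, mask):
    """Brightness after relabeling each LED by XOR with the mask.
    Relabeling is a bijection of the microstates, so the induced
    partition is isomorphic to the plain brightness-partition."""
    return sum(b ^ m for b, m in zip(mi, mask))

# for every microstate there is a mask making it the unique
# "masked brightness = 4" state, i.e. a 1/16-probability macrostate
for mi in microstates:
    mask = tuple(1 - b for b in mi)
    assert masked_brightness(mi, mask) == 4
print("every microstate is minimum-entropy under some isomorphic partition")
```

This is exactly the brightness-to-brightness' move above, done uniformly for all sixteen microstates.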
  8. As I understand entropy and the Second Law of Thermodynamics, things stand as follows: A closed system has a set Mi of possible microstates between which it randomly changes. The set Mi of all possible microstates is partitioned into macrostates, resulting in a partition Ma of Mi. The members of Ma are pairwise disjoint subsets of Mi, and their union is Mi. The entropy S(ma) of a macrostate ma in Ma is the logarithm of the probability P(ma) of ma happening, which is in turn the sum Sum_{mi ∊ ma} p(mi) of the probabilities p(mi) of all microstates mi in ma. The entropy s_{Ma}(mi) of a microstate mi with respect to Ma is the entropy of the macrostate in Ma to which mi belongs. The current entropy s_{Ma} of the system with respect to Ma is the entropy, with respect to Ma, of the microstate in which the system currently is. The Second Law of Thermodynamics simply states that a closed system is more likely to pass from a less probable state into a more probable one than from a more probable state into a less probable one. Thus, it is merely a stochastic truism. By thermal fluctuations, the fluctuation theorem, and the Poincaré recurrence theorem, and generally by basic stochastic laws, the system will someday go back to a low-entropy state. However, also by basic stochastic considerations, the time during which the system has a high entropy and is thus boring and hostile to life and information processing is vastly greater than the time during which it has a low entropy and is thus interesting and friendly to info-processing and life. Thus, there are vast time swathes during which the system is dull and boring, interspersed by tiny whiles during which it is interesting. Or so it might seem... Now, what caught my eye is that the entropy we ascribe to a microstate depends on which partition Ma of Mi into macrostates we choose. Physicists usually choose Ma in terms of thermodynamic properties like pressure, temperature and volume. 
Let’s call this partition of macrostates “Ma_thermo”. However, who says that Ma_thermo is the most natural partition of Mi into macrostates? For example, I can also define macrostates in terms of, say, how well the particles in the system spell out runes. Let’s call this partition Ma_rune. Now, the system-entropy s_{Ma_thermo} with respect to Ma_thermo can be very different from the system-entropy s_{Ma_rune} with respect to Ma_rune. For example, a microstate in which all the particles spell out tiny Fehu-runes ‘ᚠ’ probably has a high thermodynamic entropy but a low rune entropy. What’s very interesting is that at any point in time t, we can choose a partition Ma_t of Mi into macrostates such that the entropy s_{Ma_t}(mi_t) of the system at t w.r.t. Ma_t is very low. Doesn’t that mean the following? At any time-point t, the entropy s_{Ma_t} of the system is low with respect to some partition Ma_t of Mi into macrostates. Therefore, information processing and life at time t work according to the measure s_{Ma_t} of entropy induced by Ma_t. The system entropy s_{Ma_t} rises as time goes on until info-processing and life based on the Ma_t measure of entropy can no longer work. However, at that later time t’, there will be another partition Ma_t’ of Mi into macrostates such that the system entropy is low w.r.t. Ma_t’. Therefore, at t’, info-processing and life based on the measure s_{Ma_t’} of entropy will be possible. It follows that information processing and life are always possible, it’s just that different forms thereof happen at different times. Why, then, do we regard thermodynamic entropy as a particularly natural measure of entropy? Simply because we happen to live in a time during which thermodynamic entropy is low, so the life that works in our time, including us, is based on the thermodynamic measure of entropy. Some minor adjustments might have to be made. 
For instance, it may be the case that a useful partition of Mi into macrostates has to meet certain criteria, e.g. that the macrostates have some measure of neighborhood and closeness to each other such that the system can pass directly from one macrostate only to the same macrostate or a neighboring one. However, won’t there still be many more measures of entropy just as natural as thermodynamic entropy? Also, once complex structures have been established, these structures will depend on the entropy measure which gave rise to them even if the current optimal entropy measure is a little different. Together, these adjustments would lead to the following picture: During each time interval [t1, t2], there is a natural measure of entropy s1 with respect to which the system’s entropy is low at t1. During [t1, t2] – at least during its early part – life and info-processing based on s1 are therefore possible. During the next interval [t2, t3], s1 is very high, but another shape of entropy s2 is very low at t2. Therefore, during [t2, t3] (at least in the beginning), info-processing and life based on s1 are no longer possible, but info-processing and life based on s2 work just fine. During each time interval, the intelligent life that exists then regards as natural the entropy measure which is low in that interval. For example, at a time during which thermodynamic entropy is low, intelligent lifeforms (including humans) regard thermodynamic entropy as THE entropy, and at a time during which rune entropy is low, intelligent life (likely very different from humans) regards rune entropy as THE entropy. Therefore my question: Doesn’t all that mean that entropy is low and that info-processing and life in general are possible for a much greater fraction of time than thought before?
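The definitions in this post translate almost line-for-line into code. The sketch below (Python; all names are mine, and the 4-LED system from the later posts in this thread serves as the microstate set) computes P(ma) and S(ma) = log P(ma) for the brightness-macrostates. Note that with equally likely microstates this S differs from the usual Boltzmann log W only by the additive constant -log |Mi|:

```python
from math import log
from itertools import product

microstates = list(product((0, 1), repeat=4))          # Mi: the 16 LED-states
p = {mi: 1 / len(microstates) for mi in microstates}   # equally likely microstates

# the partition Ma of Mi into brightness-macrostates
Ma = {}
for mi in microstates:
    Ma.setdefault(sum(mi), []).append(mi)

def P(ma):
    """Probability of a macrostate: sum of its microstates' probabilities."""
    return sum(p[mi] for mi in ma)

def S(ma):
    """Entropy of a macrostate as defined in this post: log P(ma)."""
    return log(P(ma))

for b in sorted(Ma):
    print(b, len(Ma[b]), round(S(Ma[b]), 3))
# the most probable macrostate (brightness 2) has the highest entropy,
# and the extreme macrostates (brightness 0 and 4) have the lowest
```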