
How can information (Shannon) entropy decrease ?


studiot


42 minutes ago, studiot said:

So why did you 'aim' such  personally abusive remarks instead of answering the question you still have not answered?

What are you talking about?

You wanted me to answer a point about computation; there seems to be some dispute about whether an identity operation constitutes a computation.

My question was, I think, why would you build a computer that does nothing? Why would you build a computer that leaves a string of symbols unchanged?

Then I introduce FSA's and you respond by accusing me of not answering a different question about entropy. Can we focus on one thing at a time?

Do you know what a finite-state machine is, have you done any programming? What languages? Can you code in assembly language? Have you written any languages or designed an instruction set, even for a simulated machine? I can answer yes to all the above.[ed: the third one's answer would be, "lots, or I can write one if that's what you need"]

Can I ask, do you have any formal CS education, or can you maybe draw a diagram with logic gates in it? Otherwise, are you here trying to learn something?

On 2/21/2022 at 3:01 PM, joigus said:

Also, there is no such a thing as non-informational DOF of a physical system. All DOF are on the same footing.

studiot: what do you think joigus is actually saying? What does what he says, if it's true of course, then say about what a computer is, as distinct from what a computer does?

Also, I consider you as someone who perhaps hasn't discussed things like entropy, computation, what information is, and so on, very seriously before now.

But I'm Captain Serious here when it comes to the subject of computers--I've studied a few CS subjects, more than most people. Well beyond what I'd need to do to get a job; in fact being employed in the industry was a lot like mowing lawns, in terms of what I could have been doing. But the money's real good.

Edited by SuperSlim

On 2/22/2022 at 11:27 AM, studiot said:

Well I'm sorry if all you want to do is make unqualified generalistic pronouncements; one counterexample is enough to disprove your claim.

I'm sorry too that your attitude towards the subject of computation isn't very helpful to discussion. I didn't make an unqualified generalistic pronouncement; it was a qualified pronouncement, relevant to the subject, and I'm sorry if you haven't been exposed to a university-level education, in which one is pretty much required to question one's grasp of subjects as much as possible; really thrash those equations.

My counter to your argument is a Turing machine, FSAs, and a string that doesn't change. I think that's quite a good one--nothing is erased, but where did the string come from? Doesn't there have to be a read/write head, and doesn't it have to move left or right? So it can do this over a fixed string for as long as you like, without halting, or, if you want, with halting. Why you give your counterexample the status of switching off the machine is my next question. What does that erase on my tape?

We haven't started discussing what reversible logic is. After all, according to an emeritus professor, when two logical paths merge, information is lost; this constitutes erasure.

What better model than Boolean logic gates? I saw this used in an online lecture series to explain reversible logic. Part of the explanation involves the use of phase space volumes, and the question of when you "run out of room".

 

Logic gates: [image: the Boolean logic gates discussed below]

 

The first two are two-input gates, the last is the unary negation operator. This last operator doesn't merge any inputs.

What does the truth table look like when you keep one of the inputs, say the x input, and make the gate a two-output gate? What you then have is a way to reverse the gate, logically. But there's a problem with how much information the x input can carry through to the outputs. You can also include the idea that the AND and OR gates with two outputs have memory--they each remember the x input (but locally!).
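Here's a minimal sketch of that point, in Python rather than gates (my own illustration, not a standard construction): enumerate the two-output version of AND that copies x through, and see which inputs collide.

```python
from itertools import product

# Two-output "AND that keeps x": (x, y) -> (x, x AND y)
def and_keep_x(x, y):
    return (x, x & y)

preimages = {}
for x, y in product((0, 1), repeat=2):
    preimages.setdefault(and_keep_x(x, y), []).append((x, y))

for out, ins in preimages.items():
    print(out, "<-", ins)
# (0, 0) <- [(0, 0), (0, 1)]   y is lost whenever x = 0
# (1, 0) <- [(1, 0)]
# (1, 1) <- [(1, 1)]
```

So copying x through is not enough on its own to make AND reversible; that's the "problem with how much information the x input can carry" mentioned above.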

Edited by SuperSlim

I'll throw in a hint here: the AND operation is not affine; but without it you don't have Boolean algebra; you need the AND operation to get a universal algebra.

It follows that no set of affine logic gates is universal. This has consequences for quantum computation, according to Seth Lloyd at MIT. I guess he'd know.
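For what it's worth, here's one way to make "affine" concrete (a sketch under my own reading of the term: over GF(2), a two-input gate is affine if it can be written as a·x ⊕ b·y ⊕ c for constant bits a, b, c):

```python
from itertools import product

def is_affine(gate):
    """True if gate(x, y) == (a AND x) XOR (b AND y) XOR c for some constants a, b, c."""
    return any(
        all(gate(x, y) == (a & x) ^ (b & y) ^ c
            for x, y in product((0, 1), repeat=2))
        for a, b, c in product((0, 1), repeat=3)
    )

print(is_affine(lambda x, y: x ^ y))  # True:  XOR is affine (in fact linear)
print(is_affine(lambda x, y: 1 ^ x))  # True:  NOT (ignoring y) is affine
print(is_affine(lambda x, y: x & y))  # False: AND cannot be written that way
```

Brute force over the eight possible (a, b, c) is enough to show AND falls outside the affine family.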

And I've seen a recent paper that claims to erase information without heat loss (a requirement of Landauer's principle), by using a bath of angular momentum, i.e. by giving the angular momentum stored in a qubit extra degrees of freedom. I'm not sure they really manage to escape the use of heat though, because the bath isn't thermally isolated.

 

So have I gotten closer to an answer to the question "how can Shannon entropy decrease?" by starting with what an increase (erasure) or decrease (storage) in local entropy means for isolated logic gates? By copying an input directly to an output, the phase space volume is increased; there is "more room". In the phase space corresponding to information (something can be measured with probability 1, if a measurement is made), there's a function of position and momentum whose integral over both differentials is the identity transformation.

The measurement Hamiltonian is an identity transformation of the position/momentum phase space.

Maxwell's demon can't overcome this rule--to measure both the position and momentum of a gas particle, the demon must read information and store it.

Edited by SuperSlim

Part II

Maxwell's demon doesn't get to violate the second law of thermodynamics because the identity transformation of a system of particles in thermal equilibrium is a system of particles in thermal equilibrium. The demon can't read any information and just sees a continuous thermal background--a scalar field with constant value everywhere.

So the Maxwell version of a demon doesn't get to store any information and lower its information entropy. If it could, it could then violate the second law.

Now I can put my feet back up on the desk, and heave a sigh of relief.

Edited by SuperSlim

8 hours ago, SuperSlim said:

Maxwell's demon doesn't get to violate the second law of thermodynamics because the identity transformation of a system of particles in thermal equilibrium is a system of particles in thermal equilibrium. The demon can't read any information and just sees a continuous thermal background--a scalar field with constant value everywhere.

Sigh!

This paragraph makes no sense and is totally irrelevant to the thread question.

How can information entropy decrease ?

 

Happily this question was ably answered by @joigus' direct calculation on my example, something you had ample opportunity to perform yourself, but chose not to do.

This is now the third thread where you have avoided engaging in direct questions or comments about your posts, the other two being your wave thread  and your experiments and information thread.

This is a pity because in the case of the latter I supplied a direct answer to your chemistry question ( which you have not acknowledged).

 

My answer here in this thread, without engaging set theory, is that as information entropy and information measure are not exactly inverse functions but reverse or contravariant ones, then as information measure increases (which it must do as one progresses through a calculation, so you end up with more information (the result) than you started with) information entropy must decrease.

I have also pointed out (and once again you have not responded) that one limitation to this arises on subsequent runs of the calculation, when you already have the answer.


On 2/21/2022 at 12:28 AM, studiot said:

represented as information by the string 0101

What happens to the information entropy at each stage of the Q&A  ?

I've read the example again and noted something I find interesting. Initially it is known per the definition of the example that there is a 4 x 4 square board. In the final string 0101 this information is not present as far as I can tell, it could be column 2 & row 2 from any number of squares? This is not an error, I'm just curiously noting that the entropy of the string '0101' itself seems to differ from the result one gets from calculating the entropy step by step knowing the size of the board.

A related note, @joigus as far as I can tell correctly determines that once the coin is found there is no option left; the search for the coin has only one outcome "coin found" and hence the entropy is zero. The string 0101 though, contains more information because it tells not only that the search terminated* it also tells where the coin is.

Comments and corrections are welcome.

 

*) '1' occurs twice in the string.


1 hour ago, Ghideon said:

I've read the example again and noted something I find interesting. Initially it is known per the definition of the example that there is a 4 x 4 square board. In the final string 0101 this information is not present as far as I can tell, it could be column 2 & row 2 from any number of squares? This is not an error, I'm just curiously noting that the entropy of the string '0101' itself seems to differ from the result one gets from calculating the entropy step by step knowing the size of the board.

A related note, @joigus as far as I can tell correctly determines that once the coin is found there is no option left; the search for the coin has only one outcome "coin found" and hence the entropy is zero. The string 0101 though, contains more information because it tells not only that the search terminated* it also tells where the coin is.

Comments and corrections are welcome.

 

*) '1' occurs twice in the string.

Thank you +1.

I had been really hoping that someone would look more carefully at my list of 4 questions to reach the answer, because it is not 'optimal' in that my questions will not work in every case.

There is, however, an 'optimal' set of 4 questions that will work in every position.

This set corresponds to a true 'binary search' where you halve the number of possibilities at each step.

This is no accident.

A 4 x 4 cell board has 16 = 2⁴ cells.

So it takes 4 binary digits to uniquely label each cell.

And 16 is the maximum number of cells you can uniquely label (identify) in this way.
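As a quick numerical check (my own sketch, assuming the coin is equally likely to be in any square): the initial uncertainty is log2(16) = 4 bits, and each ideal halving question removes exactly one bit, which is why four questions suffice.

```python
import math

cells = 16
print(math.log2(cells))          # 4.0 bits of initial uncertainty

remaining = cells
for q in range(1, 5):
    remaining //= 2              # an ideal yes/no question halves the candidates
    print(q, remaining, math.log2(remaining))
# 1 8 3.0
# 2 4 2.0
# 3 2 1.0
# 4 1 0.0  -- four questions, four bits, one square left
```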

Does this help ?

Edited by studiot

2 hours ago, studiot said:

Does this help ?

Yes! +1

I may have misinterpreted the translation from questions/answers to binary string. To clarify; If the coin is in square 1A, what is the binary string for that position?

My interpretation (that may be wrong!) is that the four answers in your initial example (square 2B) "no", "yes", "no", "yes" are translated to / encoded as "0" "1" "0" "1". This interpretation means that a coin in square 1A is found by:
Is it in the first column?  -  Yes 
Is it in the first row? - Yes
and the resulting string is 11. 
As you see I (try to) encode the answers to the questions and not necessarily the number of the square.

The nuances of your example make this discussion more interesting in my opinion; "information" and the entropy of that information may possibly have more than one interpretation.

 

(Note 1: I may be overthinking this; if so I blame my day job / current profession)
(Note 2: I have some other ideas but I'll post one line of thought at a time; otherwise there may be too much different stuff in each post)
 

Edited by Ghideon
clarification & spelling

7 hours ago, studiot said:

Sigh!

This paragraph makes no sense and is totally irrelevant to the thread question.

How can information entropy decrease ?

That's interesting. It's how Charles Bennett explains why Maxwell's demon doesn't get to store any information about gas molecules, and lower their information entropy.

 

7 hours ago, studiot said:

Happily this question was ably answered by @joigus' direct calculation on my example, something you had ample opportunity to perform yourself, but chose not to do.

7 hours ago, studiot said:

My answer here in this thread, without engaging set theory, is that as information entropy and information measure are not exactly inverse functions but reverse or contravariant ones, then as information measure increases (which it must do as one progresses through a calculation, so you end up with more information (the result) than you started with) information entropy must decrease.

Except you haven't defined erasure of information. But that's ok, a computation doesn't need to erase information.

I still don't get why you think I haven't addressed your questions. I think I have. I also think I'm possibly talking straight over your head.

Look, I understand what Charles Bennett says about Maxwell's demon; I understand the concept and I understand what information entropy is.

Here's a question: if I receive a message I'm expecting to receive, does it have more or less information content than a message I'm not expecting, but also receive?

Can you answer it?
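To put my own question in numbers (a sketch using the standard surprisal, -log2 p, not a claim about anyone else's position): a message you were almost certain to receive carries very little information, and an unlikely one carries a lot.

```python
import math

def surprisal_bits(p):
    """Shannon information content, in bits, of receiving a message of probability p."""
    return -math.log2(p)

print(surprisal_bits(0.99))  # ~0.014 bits: the expected message tells you almost nothing
print(surprisal_bits(0.01))  # ~6.64 bits:  the surprising message tells you a lot
```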

7 hours ago, studiot said:

I have also pointed out (and once again you have not responded) that one limitation to this arises on subsequent runs of the calculation, when you already have the answer.

Yes you have. My response to that is that it isn't a limitation; in fact the existence of a result of a computation has no bearing on whether a computation can run.

Generally you seem to have some rather naive ideas about the subject; whereas I have at least 5 years of tertiary level study to rely on.  Please don't try to convince me, or anyone else, that you know more than I do about computers; that's an argument you can't win. Sorry.

On 2/23/2022 at 4:57 AM, studiot said:

I don't see this as a difference between thermodynamic entropy and Shannon (or Hartley) entropy or any statement about the Second Law.

It's possible you will never see such a thing. I predict you will continue to fail to understand what either kind of entropy actually is. I predict this because, here we are, and you still seem to be confused. A poster, joigus, has already said the two are "joined at the hip". That isn't an explanation, it isn't really all that enlightening, but it is true.

Are you disagreeing with what he said, or just with what I said?

Edited by SuperSlim

@Ghideon and @studiot. I think I know what both of you are getting at. Thank you for very constructive comments that go in the direction of elucidating something central here.

--------------

Qualification:

The "canonical" entropy \( -\sum_{i}p_{i}\ln p_{i} \) is not the only way you can define an entropy. We should be clear about this from the get-go. There are other definitions of entropy that you can try, and they happen to be useful, like the Rényi entropy, and still others.

But once you have decided what form of entropy it is that characterizes the level of ignorance about your system in terms of your control parameters, the calculation of the entropy based on your knowledge of the system should be independent of how you arrive at this knowledge.
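A small numerical sketch of that point (mine, with an arbitrary example distribution): the Rényi entropy of order α is a whole family, and it approaches the "canonical" Shannon value as α → 1.

```python
import math

def shannon(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

def renyi(p, alpha):
    """Renyi entropy of order alpha (alpha != 1), in bits."""
    return math.log2(sum(x ** alpha for x in p)) / (1 - alpha)

p = [0.5, 0.25, 0.125, 0.125]        # arbitrary example distribution
print(shannon(p))                     # 1.75 bits
for alpha in (0.5, 0.999, 1.001, 2.0):
    print(alpha, renyi(p, alpha))     # values close in on 1.75 as alpha -> 1
```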

---------------

But the way you arrive at that information is not neutral as to the heating of the rest of the universe!

I'm calling the particular questions your "control parameters."

It's very interesting what you both point out. Namely: that the coding of the answers (and presumably the physical process underlying it) strongly depends on the questions that you ask. Suppose we agree on a different, much-less-than-optimal set of questions:

1) Is the coin in square 1,1?

2) Is the coin in square 1,2?

and so on.

If you happen to find the coin in square 1,1 at the first try, then you have managed to get the entropy of the system to 0 by just asking one binary question, and in Ghideon's parlance, your string would be just 1.

The way in which you arrive at the answer is more or less efficient (dissipates less "heat" so to speak), in this case, if you happen to be lucky.

But the strings Ghideon proposes, I think, are really coding for the thermodynamic process that leads you to final state S=0. Not for the entropy.

Does that make sense?
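To put a rough number on "more or less efficient" (my own back-of-envelope sketch, assuming the coin is placed uniformly at random): asking square by square sometimes gets lucky with one question, but on average it needs about 8.5 questions, against the 4 that the halving strategy always needs.

```python
import random

def square_by_square(coin, cells=16):
    """Ask 'is it in square k?' in order, counting questions until the answer is yes."""
    for k in range(cells):
        if k == coin:
            return k + 1   # found on question k+1

trials = 100_000
avg = sum(square_by_square(random.randrange(16)) for _ in range(trials)) / trials
print(avg)   # about 8.5 on average (16 if unlucky), versus always 4 for binary search
```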

 


Maybe the problem studiot posted would be easier to use if it was reformulated as a position/momentum problem.

So it's like you know a particle is in one of N partitions. You know a particle can have one, and only one momentum (if it's classical).

You have a two-parameter addressing mechanism that "reads" the contents of row i, column j. So it's like searching  a memory for a "1" amongst a lot of "0"s.

So it's down to what you can record, and how you record it. What does an algorithm that searches randomly need to do when it finds the particle? Is a brute-force, sequential search easier, since then you don't need to store information about empty partitions, just a loop control? There's even a loop invariant!

Also notice how it illustrates the difference between stored information, and an algorithm that searches for it (and changes local and global entropy). That seems kind of unavoidable (strong hint).
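A minimal sketch of the two search strategies as I read them (my own illustration): the sequential scan carries only a loop counter, and its loop invariant is "every cell checked so far was empty"; the random probe has to remember which addresses it has already tried.

```python
import random

def sequential_search(memory):
    i = 0                               # the loop counter is the only state
    while memory[i] == 0:               # invariant: cells 0..i-1 are all empty
        i += 1
    return i

def random_search(memory):
    untried = list(range(len(memory)))  # extra storage: what hasn't been probed yet
    random.shuffle(untried)
    for i in untried:
        if memory[i] == 1:
            return i

memory = [0] * 16
memory[5] = 1                           # one "coin" among the zeros
print(sequential_search(memory), random_search(memory))   # both print 5
```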

Edited by SuperSlim

17 hours ago, Ghideon said:

I may have misinterpreted the translation from questions/answers to binary string. To clarify; If the coin is in square 1A, what is the binary string for that position?

My interpretation (that may be wrong!) is that the four answers in your initial example (square 2B) "no", "yes", "no", "yes" are translated to / encoded as "0" "1" "0" "1". This interpretation means that a coin in square 1A is found by:
Is it in the first column?  -  Yes 
Is it in the first row? - Yes
and the resulting string is 11. 
As you see I (try to) encode the answers to the questions and not necessarily the number of the square.

You are correct that the encoding is 0 for No and 1 for Yes.

So the coding depends upon both the order of the questions and the questions themselves, and not the position of the square.

So yes the shorter string "11" would identify square 1A.

Since @joigus has also observed that the question "Is the coin in square 1A ?" would lead to an even shorter string "1" identifying this square, it leads nicely to some further comments.
However, if the answer was No then there would remain 15 other possible places. So each answer partitions the set into two subsets: squares for which the question holds true and squares for which it is false.

1) I have already observed that the set of questions needed to elicit the identifying code may vary in number.

2) Therefore there exists a minimum set which will identify any given square.

3) The minimum question set that will identify any square is a binary search. This discards half the squares (8); then half the remaining (4); then again half the remaining (2); and finally half the remaining to end with one square.
This takes 4 steps and is called a binary search.

However this minimum set is not unique and also since the order is important the output string will not be unique either.

Such a set might be the following, searching for square D4:

1)  Is the coin in the upper half ? :  No

2) Is the coin in the left hand half of the remainder ?  :  No

3) Is the coin in the top half of the remainder ?  :  No

4) Is the coin in the right hand half of the remainder ?  :   Yes

leading to the string "0001"

Swapping questions (2) and (4) would lead to the string "0100"

 

Two further comments

Firstly, about joigus' question of what happens if there is only one cell.
Then actually you would not need to perform any computation, since the coin cannot be anywhere else.

This brings out the observation I often make about probabilities, viz. the probabilities 0 and 1 have different properties from any other value in between.

Secondly since all the information is contained in the questions and answers, the configuration of 'the board' is irrelevant.
The cells may be arranged in a line or a ring or scattered.


Edit: I x-posted with @studiot, I may have to edit my response after reading the post above.
In this post I’ll separate my thoughts in different sections to clarify. 

Binary encoding of the location vs the questions

Studiot's questions lead to the string 0101. The same string could also be found by square-by-square questions if the squares are numbered 0-15: "Is the coin in square 1,A?", "Is the coin in square 2,A?" and so on. The sixth square has number 5 decimal = 0101 binary. This way of doing it does not encode the yes/no answers into 1/0 as Studiot initially required, but it happens to result in the same string. This was one of the things that confused me initially. Illustration:

[image: the 4 x 4 grid with its squares numbered 0-15]
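A one-line check of that numbering (assuming row-major order and squares indexed 0-15, which is my reading of the illustration):

```python
print(format(5, '04b'))   # '0101' -- the sixth square (index 5) as a 4-bit string
print(int('0101', 2))     # 5      -- and back again
```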

 

Zero entropy 
@joigus' calculations result in entropy=0. We also initially have the information that the coin is in the grid; there is no option "not in the grid". Confirming that the coin is in the grid does not add information as far as I can tell; so entropy=0. I think we could claim that entropy=0 for any question, since no question can change the information; the coin is in the grid and hence it will be found. Note that in this case we cannot answer where the coin is from the end result alone; zero information entropy does not allow for storage of a grid identifier.

 

Different paths and questions resulting in 0101 

21 hours ago, studiot said:

This set corresponds to a true 'binary search' where you halve the number of possibilities at each step.

To arrive at the string 0101 while using binary search I think of something like this*: 
1. Is the coin in the lower half of the grid? No (0)
2. Is the coin in the top left quadrant of the grid? yes (1)
3. Is the coin in the first row of the top left quadrant? No (0)
4. Is the coin in the lower right corner of the top left quadrant? Yes (1)

Illustration, red entries correspond to "no"=0 and green means "yes"=1

[image: the grid showing the four binary-search steps]

The resulting string 0101 translates into a grid position. But it has a different meaning than the 0101 that results from Studiot's initial questions. Just as in Studiot's case, we need some additional information to be able to interpret 0101 as a grid square. As far as I can tell the questions in this binary search example follow the same pattern of decreasing entropy if we apply joigus' calculations. But the numbers will be different, since a different number of options may be rejected.

 


*) Note that I deliberately construct the questions so that it results in the correct yes/no sequence. The approach though is general and could find the coin in any position of the grid.

Edited by Ghideon
x-post

I think there are several aspects in which one could try to make this example more similar to what's going on in physics and, in particular, in computers. I'm no computer scientist, so don't take anything I say too seriously.

I think interesting features are at play in thermodynamic systems that are not necessarily that relevant when it comes to computers. The most important one, IMO, is that computers are not generally set up in a way that they will end up being in a state of equilibrium. The other, perhaps equally important, is that the way in which circuits in a computer switch on and off is not ergodic, meaning that during a typical run of a program, the bit-states do not cover all possible configurations, let alone equally likely. I suppose --correct me if I'm wrong-- they will be more likely to run along some paths than others. And my intuition is that the sequence of configurations would have a strong dependence on initial conditions too, something that's not a characteristic of thermal-physics.

Another feature that our toy example doesn't have, but both computers and thermodynamic systems share is, as @SuperSlim has been trying to tell us all along, evolution. So we would have to picture our system more like a table on which we're free to add and remove coins according to an updating law (evolution) that determines which square is occupied, and which is not, in an unpredictable way.

If that's the case, I would find it easier to interpret Landauer's principle in the following --overly simplistic-- way. This would be a state frozen in time:

[image: the grid with all squares free]

Where you have to picture any number of coins, compatible with the available room, as free to move about following an updating procedure. All squares are available to be "occupied" with a microstate 0 or 1. At this point, the updating law (program) has yet to write a bit on a given square. Mind you, this concept could be applied to RAM, ROM or swap memory.

Now the program finally writes a bit. Whether it's midway during a calculation (for storing the value of a variable, for example), or to hard-disk memory, it takes some kind of process to hold the value there. It could be 0, or it could be 1. It's the holding that's important. I will denote this with a reddish hue:

[image: the grid with a 0 held at square (D,4)]

(Bit 0)

Or:

[image: the grid with a 1 held at square (D,4)]

(Bit 1)

At this point, the rest of the coins are free to occupy whatever other squares, as the case may be, except for that in position (D,4), which is occupied (whether the bit occupying it is 0 or 1), until it's freed again. This restriction lowers the entropy, because, for however long, the system has become more predictable.
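A crude way to put a number on "the system has become more predictable" (my own sketch, assuming each square independently holds a 0 or a 1 with equal probability): pinning one of the 16 squares removes exactly one bit from the count of equally likely configurations.

```python
import math

cells = 16
free_configs = 2 ** cells          # every square free: 65536 configurations
held_configs = 2 ** (cells - 1)    # one square held at a fixed value

print(math.log2(free_configs))     # 16.0 bits
print(math.log2(held_configs))     # 15.0 bits: holding one bit lowers the entropy by one bit
```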

If I erase the bit, what does it mean? I think it means that I cease holding the value at (D,4) in position, the square being available again either for temporary use, or for longer-term use.

I may be completely wrong, or perhaps oversimplifying too much, but please bear with me, as I'm groping towards understanding Landauer's principle on a physical basis.

One final comment for the time being: I don't think you need evolution in order to define an entropy. Whenever you have a statistical system with a fixed number of states, as in @studiot's frozen-in-time example, you can define an entropy as long as you have a way to assign probabilities to the different configurations.

 

 

 

Edited by joigus
minor correction

1 hour ago, joigus said:

I think interesting features are at play in thermodynamic systems that are not necessarily that relevant when it comes to computers. The most important one, IMO, is that computers are not generally set up in a way that they will end up being in a state of equilibrium.

Good point

1 hour ago, joigus said:

If I erase the bit, what does it mean? I think it means that I cease holding the value at (D,4) in position, the square being available again either for temporary use, or for longer-term use.

Another good point. 'Erasure' is not a feature of thermodynamic systems that I am aware of. Most especially isolated ones, since erasure can surely only be effected by an external agent, which by definition in an isolated system does not exist.

 

Your whole post represents some great new thinking; I have just picked out a couple of points to add +1 to.

+1 also to Ghideon for his thoughts and colour scheme diagrams.

 

Edited by studiot

2 hours ago, studiot said:

Another good point. 'Erasure' is not a feature of thermodynamic systems that I am aware of.

That's a generally true statement; but don't forget a computer is a thermodynamic system, albeit one with constraints and restrictions, one of which is how information is represented--what the computational basis is. Thermodynamic systems don't generally have restrictions like that. A computer is based on an artificial choice made by engineers, who are of course forced to choose something physical and deal with what nature has to offer.

2 hours ago, studiot said:

Most especially isolated ones, since erasure can surely only be effected by an external agent, which by definition in an isolated system does not exist.

That's why computers are not isolated systems; the computational basis has to live in a heat bath, and the content of this bath has to be controlled, which is another restriction on the system.

3 hours ago, joigus said:

One final comment for the time being: I don't think you need evolution in order to define an entropy. Whenever you have a statistical system with a fixed number of states, as in @studiot's frozen-in-time example, you can define an entropy as long as you have a way to assign probabilities to the different configurations.

You've already mentioned that entropy has various different formulations. Shannon entropy is a measure of the differences in information content between fixed messages; it's not an evolutionary entropy. That would be algorithmic entropy, which is related to the length of the so-called shortest program that implements the algorithm. For instance, a program that prints a single number, say 8, can do whatever it likes to calculate the value 8 as long as it outputs an 8. The simplest algorithm just prints an 8, but "the most complex algorithm which outputs an 8" is not well-defined . . .

Algorithmic complexity has a Chaitin measure.
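A toy illustration of that point (mine, and only suggestive, since algorithmic complexity is defined over programs for a fixed universal machine and is uncomputable in general): two source strings with the same output but very different lengths.

```python
# Two "programs" (as Python source strings) that both output 8.
trivial = "print(8)"
convoluted = "print(sum(1 for _ in range(2 ** 3)))"

exec(trivial)      # 8
exec(convoluted)   # 8

# The algorithmic (Kolmogorov/Chaitin) complexity of the output is bounded by the
# length of the *shortest* such program; "the most complex program that outputs 8"
# is not well-defined, since arbitrarily long ones exist.
print(len(trivial), len(convoluted))
```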

Edited by SuperSlim

On 2/26/2022 at 5:55 PM, joigus said:

I think interesting features are at play in thermodynamic systems that are not necessarily that relevant when it comes to computers.

I'm used to analyzing the digital processing of information, but I've not thought about the computations that take place as part of designing an algorithm and/or programming a computer. The developer is interpreting and translating information into programs, which is a physical process affecting entropy? In Studiot's example the questions must come from somewhere, and some computation is required to deduce the next question. This may not be part of the example, but I find it interesting; should one isolate the information about the grid and coin, or try to look "globally" and include the information, computations etc. that may be required to construct the correct questions?

A concrete example: the questions about columns can stop once the coin is located in one of the columns. This is a reasonable algorithm but not the only possible one; a computer (or human) could continue asking up to the last column; it's less efficient but would still locate the coin. Do the decision, and the computations behind that decision, affect entropy? I don't have enough knowledge about physics to have an opinion.

 

On 2/25/2022 at 1:56 PM, studiot said:

Secondly since all the information is contained in the questions and answers, the configuration of 'the board' is irrelevant.
The cells may be arranged in a line or a ring or scattered.

I agree, at least to some extent (it may depend on how I look at "all the information" ). By examination we can extract / deduce a lot of information from the questions and answers but it is limited. The following may be rather obvious but I still find them interesting.

1: Given the first question “Is it in the first column?” and an assumption that the question is based on known facts we may say that the questioner had to know about the structure of the cells. Otherwise the initial question(s) would be formulated to deduce the organization of the cells? For instance the first question could be "Are the cells organised in a grid?" Or "Can I assign integers 1,2,...,n to the cells and ask if they contain a coin?" 

2: There is no reason to continue to ask about the columns once the coin is located in one of the columns. This means that the number of columns can’t be deduced from the questions.

The reasoning in 1 & 2 is as far as I can tell trivial for a human being but not trivial to program.

(I'm travelling at the moment; may not be able to respond promptly)


2 hours ago, Ghideon said:

I'm used to analyzing the digital processing of information, but I've not thought about the computations that take place as part of designing an algorithm and/or programming a computer. The developer is interpreting and translating information into programs, which is a physical process affecting entropy? In Studiot's example the questions must come from somewhere, and some computation is required to deduce the next question. This may not be part of the example, but I find it interesting; should one isolate the information about the grid and coin, or try to look "globally" and include the information, computations etc. that may be required to construct the correct questions?

Even if there is no evolution, any setting that involves probability requires an ensemble: An infinite set of instantiations of the system that justifies the use of probability. I wonder if that's what you mean by "globally."

I look at it more from the side of physics, so there's a lot to learn for me.

No rush. Interesting discussion is what's interesting. Have a nice trip.


I don't know if you noticed, but the rows and columns represent a design choice. So does the coin.

You ran an algorithm to do this designing and posted a diagram. It's a way to represent, I guess, a kind of memory device.

There are a lot of things in nature that fit in this class: a store of information is a physical object, with "extra" structure. Except that we decide what the extra is, it isn't "really" there, we just say it is. We prove it by drawing diagrams.


@joigus  and @Ghideon

Thank you for your further thoughts.

I'm not sure about the role of storing the information or what difference it makes.

Surely the situation simply depends upon whether such information is available or not in the system, rather than whether it is stored or retrieved somewhere?

Here is one of mine.

Here are three pin jointed frames with symmetric loads, L, mounted on a foundation AB.

 

1)  Has insufficient information to determine the forces in the frame.

2) Has exactly the right amount of information to determine the forces in the frame.

3) Has too much information to determine the forces in the frame.

 

As far as I can tell, there is zero thermodynamic energy associated with state change here, yet it is interesting to note the effects of both too much and too little information.

[image: the three pin-jointed frames]

 


20 hours ago, joigus said:

Even if there is no evolution, any setting that involves probability requires an ensemble: An infinite set of instantiations of the system that justifies the use of probability. I wonder if that's what you mean by "globally."

(Bold by me) Thanks for your input! Just to be sure, in this context an "ensemble" is the concept defined in mathematical physics* ?  
I may need to clarify what I tried to communicate by using "globally". I'll try an analogy since I'm not fluent in the correct physics / mathematics vocabulary. Assume I looked at Studiot's example from an economical point of view. I could choose to count the costs for a computer storing the information about the grid, coin and questions. Or I could extend the scope and include the costs for the developer that interpreted the task of locating the coin, designed questions and did programming. This extension of the scope is what I meant by "globally". (Note: this line of thought is not important; just a curious observation; not necessary to further investigate in case my description is confusing.)

On 2/26/2022 at 5:55 PM, joigus said:

I suppose --correct me if I'm wrong-- they will be more likely to run along some paths than others. And my intuition is that the sequence of configurations would have a strong dependence on initial conditions too, something that's not a characteristic of thermal-physics.

As far as I can tell you are correct. If we were to assume the opposite, "all paths of execution of a program are equally probable", it would be easy to contradict with a simple example program.
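Here is the kind of simple example program I had in mind (my own sketch): with uniformly random 8-bit inputs, one branch of the conditional is taken roughly 255 times more often than the other, so the two execution paths are nowhere near equally probable.

```python
import random

def branchy(x):
    if x == 0:                 # taken for exactly 1 of the 256 possible inputs
        return "rare path"
    return "common path"

counts = {"rare path": 0, "common path": 0}
for _ in range(100_000):
    counts[branchy(random.randrange(256))] += 1

print(counts)   # roughly {'rare path': 390, 'common path': 99610}
```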

 

5 hours ago, studiot said:

I'm not sure about the role of storing the information or what difference it makes.

Surely the situation simply depends upon whether such information is available or not in the system, rather than whether it is stored or retrieved somewhere?

I am not sure. For instance when decrypting an encrypted message or expanding some compressed file, does that affect the situation? 

 

*) Reference: https://en.wikipedia.org/wiki/Ensemble_(mathematical_physics). Side note: "ensemble" occurs in machine learning, an area more familiar to me than thermodynamics.  


1 hour ago, Ghideon said:
6 hours ago, studiot said:

I'm not sure about the role of storing the information or what difference it makes.

Surely the situation simply depends upon whether such information is available or not in the system, rather than whether it is stored or retrieved somewhere?

I am not sure. For instance when decrypting an encrypted message or expanding some compressed file, does that affect the situation? 

Thanks for the reply.

I don't see that subsequent decrypting or expansion is relevant.

Here is an example of what I mean.

A list of books on the top shelf of my bookcase is information.

However such a list may or may not actually be drawn up or exist.

Yet the information exists and is still available and could be obtained by looking along the shelf.

Even if the list is drawn up, say by taking photographs, it may never actually be read.

So neither the drawing up of the information nor the subsequent processing (reading) is necessary to the existence of the information itself.

The information exists, because the shelf of books exists.


8 hours ago, studiot said:

I'm not sure about the role of storing the information or what difference it makes.

Surely the situation simply depends upon whether such information is available or not in the system, rather than whether it is stored or retrieved somewhere?

 

Sorry, I can't see a difference here.

7 hours ago, Ghideon said:

Just to be sure, in this context an "ensemble" is the concept defined in mathematical physics* ?  

Yes. Infinitely many (ideally) copies of the system on which to perform the statistics.

7 hours ago, Ghideon said:

I may need to clarify what I tried to communicate by using "globally". [...]

OK, I see. Thanks for the clarification.


21 hours ago, studiot said:

I don't see that subsequent decrypting or expansion is relevant.

Here is an example of what I mean.

A list of books on the top shelf of my bookcase is information.

However such a list may or may not actually be drawn up or exist.

Yet the information exists and is still available and could be obtained by looking along the shelf.

Even if the list is drawn up, say by taking photographs, it may never actually be read.

So neither the drawing up of the information nor the subsequent processing (reading) is necessary to the existence of the information itself.

The information exists, because the shelf of books exists.

Thanks for your thoughts and comments. Maybe our example can be expressed mathematically? At least just to find where we share an opinion or disagree?

Let I be information.
J is some information that can be deduced from I.
Some function f exists that produces J given I as input: f: I → J.
Information K is required to create the function f. This could be a formula, an algorithm or some optimisation. 
In the bookcase example I is the book shelf and J is the list of books. The function is intuitively easy; we create a list by looking at the books.

Does this make sense? If so, in the bookcase example we have:

J ⊆ I and K ⊆ I. No additional information is needed to create a list, we look at the books (maybe K = ∅ is an equally valid way to get to the same result).

In any case the Shannon entropy does not change for J or I as far as I can tell; the entropy is calculated for a fixed set of books and/or a list.

My curiosity is about non-trivial f and K. For instance, in the initial example with the coin, do J ⊆ I and K ⊆ I hold in this case? And if there is any difference, how does the entropy of K relate to the entropy of I and J, if such a relation exists? I consider the function that produces the questions in the initial coin example to be "non-trivial" in this context.
Is the thermodynamic analogy open vs closed systems?
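One piece of this can be made precise without settling the harder questions (a sketch of the standard data-processing fact, using a made-up distribution): for a deterministic f, the entropy of J = f(I) can never exceed the entropy of I, because f can only merge outcomes, never split them.

```python
import math
from collections import defaultdict

def shannon(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A made-up distribution over "shelf states" I, and a deterministic f mapping
# each shelf state to the list J it produces (two states give the same list).
p_I = {"shelf_a": 0.25, "shelf_b": 0.25, "shelf_c": 0.5}
f = {"shelf_a": "list_1", "shelf_b": "list_1", "shelf_c": "list_2"}

p_J = defaultdict(float)
for state, p in p_I.items():
    p_J[f[state]] += p

print(shannon(p_I.values()))   # 1.5 bits
print(shannon(p_J.values()))   # 1.0 bit: H(J) <= H(I) for any deterministic f
```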

 


 

9 hours ago, studiot said:

Anyons

Thanks! Something new to add to my reading list.

Edited by Ghideon
clarification
