
Evolution has never been observed


cabinintheforest


Just as probability theory predicts that systems under the influence of purely physical processes will, over time, migrate to the most probable state, so too would information under the influence of purely physical processes. Thus thermodynamic systems relying on physical processes to transfer energy seem incapable of driving the total system probability distribution to a significantly less probable state. Likewise, physical systems seem incapable of generating significant amounts of new information.

Not at all. Information, in this sense, is the number and accuracy of variables needed to describe a system. In information theory, random systems need more information to describe them than ordered systems do. This means that a high-entropy system has a higher information content than a low-entropy system (but low-entropy systems can also contain a large amount of information; information content is not equivalent to entropy, despite what you are trying to say). This is basic information theory.
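
In the standard Shannon sense, the entropy of a source is H(X) = -\sum_i p(x_i) \log_2 p(x_i) bits, and it is maximised when every outcome is equally likely; that is why a maximally random system takes the most bits to describe.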

 

The fact is, you are using concepts completely different from information theory, entropy and evolution, and claiming that they are these things. It is as if I were using the religious writings of the Norse to make claims about the Bible: it is a completely different set of concepts you are using, and therefore the conclusions you draw from them are not the conclusions you would reach if you were using the real concepts.

 

Until you are willing to use the actual concepts you claim to be using, none of your arguments are going to be persuasive to anyone who knows what these actual concepts are.

 

It is for this reason that creationists are unable to convince people who know about these things to convert to their beliefs. All they do is keep proposing the same strawman arguments to people who can see how obvious these strawmen are. Of course, if one were ignorant of these concepts, the arguments put forward would sound reasonable, and if the strawman arguments were true they would have a point. But as these arguments are false, right from the concepts they are using, they just don't hold any weight with people who are not ignorant.

 

Edtharan's appeal to influences beyond the boundary of Earth to import low-probability states does not apply to my example unless he is able to identify the external source humans use to import information from beyond this planet, or beyond one's mind for that matter.

I did identify one such influence: the Sun.

 

Also, it is not the creation of information that requires energy, but its destruction (see Landauer's principle: http://en.wikipedia.org/wiki/Landauer%27s_principle ).
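
Landauer's bound puts the minimum cost of erasing one bit at E \geq k_B T \ln 2, only about 3 \times 10^{-21} J at room temperature, and it is a cost for erasure, not for creation.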

 

Perhaps you can offer an example of a physical system, without the use of a mind and without the import of information, that is observed generating new information beyond what is predicted by probability and information theory.

Remember what I said about entropy? Increasing entropy is only inevitable in a closed system. Simply by having a system where energy can enter and leave, you can decrease the entropy. As the Earth receives energy from the Sun and radiates energy away, it is NOT a closed system. This by itself is proof enough that entropy can decrease on Earth without the need for an intelligence.

 

But let's look at a system you might be familiar with. Take a glass of water. In the liquid state, this glass of water is disordered and therefore has a high entropy. However, when we freeze it and turn it into ice, it loses entropy as it gains order (rigid ice crystals are more ordered than sloshing water).

 

Now, to cause a glass of water to freeze, you need to get the energy out of it. So, if you have a colder region nearby, the energy in the water causing its disorder can leave and the temperature drops. When the temperature drops low enough, the thermal energy is no longer enough to force the bonds between the water molecules apart, and they join together and crystallise, turning into ice.
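
A rough check with textbook numbers: freezing one gram of water at 0 °C releases about 334 J of latent heat, so the water loses roughly 334/273 \approx 1.2 J/K of entropy; but if that heat ends up in surroundings at, say, -10 °C (263 K), they gain about 334/263 \approx 1.3 J/K, and the total entropy of water plus surroundings still goes up.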

 

However, where I am sitting now, the temperature is 22 degrees centigrade. This is far too hot for ice to form, and yet in the freezer of my fridge I have ice. How can this be? The fridge uses energy to compress a refrigerant gas, dump the heat of compression outside, and then let the gas expand; the expanded gas is colder than 0 degrees, so the freezer loses energy (heat) to it and the water in the freezer freezes.

 

But where does this energy come from? Well, the energy comes from the electricity company, who get their energy from coal, oil, wind, etc. All these sources of energy get their energy from the Sun.

 

Remember, I said that the Sun is a source of energy that can be used to decrease entropy. Well, there is the proof of it: I have used the Sun to reduce the entropy in my glass of water and turn it into ice.

 

Ahh, I hear you say, but fridges are made by intelligent entities. Well, there is a natural process that does not require intelligence and works the same way.

 

The Sun heats the surface of the Earth, and this heats the air above it. The hotter air evaporates a bit of water. The hotter air is less dense than the air around it (hot air expands), which causes it to rise. As the air rises, the surrounding pressure drops and the air is allowed to expand. This causes it to cool (forcing air to expand cools it). This is the same process that goes on in a fridge to cool its contents. If the air rises high enough, it cools enough for the water vapour in it to first condense into liquid and then freeze.
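
The cooling on expansion is just the adiabatic relation T p^{(1-\gamma)/\gamma} = \text{constant} (with \gamma \approx 1.4 for dry air), which in the atmosphere works out to the dry adiabatic lapse rate of roughly 10 °C of cooling for every kilometre of ascent.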

 

So we have the same processes going on (and the same original source of energy too), but this time it is not one instigated by an intelligent entity, and we are getting the same result (energy input and a loss of entropy from that part of the system).

 

However, if we include the Sun as part of the system, then we do indeed have an overall increase in entropy, as the Sun is converting mass into energy through fusion, and in doing so it is losing mass. The photons produced as the Sun converts mass into energy represent an increase in entropy, because matter is more ordered than photons.

 

Standards for science apply equally to all branches. The theory must make reference to observable processes currently in operation. The processes must be demonstrated repeatably to be capable of generating the results claimed by the theory and not some watered down set of results that don't scale up.

Yes, and that is your problem. You are applying incorrect physics here. You are using concepts that have not been subjected to such requirements. We are, and we have given references; you have not. We have even given experiments that you can perform that would verify what we are saying. Have you done them?

 

Everything we have been saying is repeatable and has been confirmed by experiment. It is you who are rejecting this.

 

Since you admit that creationist ideology has influenced science for some time, it is quite clear that my metaphor is correct (all sides bring their bias into science). Beginning in the 1600s and then accelerating in the 1800s and 1900s, the materialists have added their creation narrative, and it too has been heavily influencing science. More honest materialists even admit this. Here is what geneticist Richard Lewontin said about this:

Yes, creationism has influenced science, but when experiments based on these creationist claims were done, the claims were found to be wrong. New hypotheses were put forward and tested, and it is these that have been confirmed by experiment. You seem to be stuck with the creationist claims that were disproved. Time to update by a couple of hundred years.

 

‘We take the side of science in spite of the patent absurdity of some of its constructs, in spite of its failure to fulfill many of its extravagant promises of health and life, in spite of the tolerance of the scientific community for unsubstantiated just-so stories, because we have a prior commitment, a commitment to materialism.

It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is an absolute, for we cannot allow a Divine Foot in the door.

The eminent Kant scholar Lewis Beck used to say that anyone who could believe in God could believe in anything. To appeal to an omnipotent deity is to allow that at any moment the regularities of nature may be ruptured, that miracles may happen.'

 

Lewontin, Richard, "Billions and Billions of Demons", The New York Review, p. 31, 9 January 1997

Yes, reality trumps all. If reality is confirmed to be one way, no matter "the patent absurdity of some of its constructs", then we have to accept that. For instance, it seems absurd that when you heat a gas it expands, but if you expand a gas it cools. But this is the case (you can confirm it with a bike pump). So even though such a "construct" is absurd, it is nonetheless real.

 

This is the difference between philosophy and science. Science makes these reality checks, and it is only by these reality checks that we can tell whether what we are claiming is real.

 

Actually, it is time you started doing this. You have been making claims about evolution and entropy that are in direct contradiction to what science says about them. In other words, you need to make a reality check on the claims that you say science is making.

 

It is no wonder that we don't agree with your conclusions when you keep trying to tell us that what we know is not what we know, but is instead what you think we should believe. Sorry, you can't tell us what we know, and just claiming that you know it differently does not change what is in our heads. Until you understand that what you claim we are claiming is wrong, your arguments have absolutely no substance to them. Either stick with what you know or learn what we are actually saying.

 

More recently, social tinkerers, primarily in the early 1900s and continuing today, also influence science to promote political and social policy.

Yes, and this doesn't occur in creationism or religion? We are only human, and people will always try to influence things. One of the aspects of science that makes it hard to corrupt is the need for constant reality checking and the freedom for anyone to attempt to disprove a claim.

 

In religion, there are none of these checks and balances, so it is far easier for someone to corrupt it to their own ends.

 

Yes, science isn't perfect, but it is far better than other methodologies, as any corruption will eventually be found and eliminated. Actually, most of the corruption in science is about who discovered what; it is only rarely applied to theories, and when it is, it is usually quickly found out.

 

By making this claim you seem to be contradicting swansont's observation that physical law is amoral.

Morality and Ethics are constructs of human intellect. They are a classification of certain types of behaviour used to maintain social systems in animals. You could grind the universe into atoms and sort them and you would not find a single particle of Morality. You could classify and measure every field or fluctuation of energy and you would not find a single twitch of Ethics.

 

Morality and Ethics do not exist as physical laws. However, if you look at the way social animals behave, you will be able to find morality and ethics. They are therefore an emergent property of the interactions of animals in social structures.

 

If you look at the mathematics of how such systems work (specifically, game theory), you can see how and why moral and ethical behaviour confers an advantage on social animals (but not on non-social animals). And all social animals follow these core moral rules for the same reason (evolution selects for groups that apply them and against ones that do not).

 

Have a look at the "Ultimatum Game" ( http://en.wikipedia.org/wiki/Ultimatum_game ) and research how it is applied when the participants play multiple games and are allowed to exchange information about other players between rounds.


I can't help but notice that this post has been completely ignored.

 

 

Yes, I am speaking of entropy and the laws derived from probability theory. Conservation of energy does not figure into this discussion, so I see no need to address it.

 

 

"Seem incapable?" There is no law that supports this.

 

Yes, "seems incapable" in the same sense that physical processes seem incapable of driving thermodynamic energy states to less probable configurations without an imported source of material that is currently in a low-probability state.


Yes, "seems incapable" in the same sense that physical processes seem incapable of driving thermodynamic energy states to less probable configurations without an imported source of material that is currently in a low-probability state.

 

With thermodynamic systems, you can do this with the importation of energy (not material), which is part of the second law. There is no corresponding law that applies to information.


Yes, I am speaking of entropy and the laws derived from probability theory. Conservation of energy does not figure into this discussion, so I see no need to address it.

That post had nothing to do with conservation of energy. It merely stated that conservation of energy was part of Dembski's excuse for not proving his 'law' of conservation of information, and then Cap'n gave one reason why that copout is a load of bollocks: the conservation of energy has been proven time and time again, while conservation of information has not; one concept has been proven ad nauseam whilst the other is false. Your post seems to indicate one of two possibilities: you are trying to distract the discussion down a rabbit hole to avoid facing the idea that your argument is mathematically unsound and misrepresents information theory, OR you have trouble reading. Based on previous experience with you, neither would surprise me.

 

So, here we have it again:

I believe cypress is referring to Dembski's law of conservation of information, which is mathematically unsound and misinterprets standard information theory. There is no law of conservation of information inside any standard information theory formulation, and Dembski has repeatedly stated "I'm not and never have been in the business of offering a strict mathematical proof for the inability of material mechanisms to generate specified complexity" ("specified complexity" being another one of his invented terms), drawing a comparison with physicists who don't feel obligated to prove the law of conservation of energy. (I have, on several occasions, mathematically proven the law of conservation of energy in a given system.)

 

And as Edtharan pointed out, it is always possible that we merely transform one kind of information to another, rather than generating it.

 

Why did you REALLY ignore this post?


With thermodynamic systems, you can do this with the importation of energy (not material), which is part of the second law. There is no corresponding law that applies to information.

 

How would the presumed absence of a conservation of information law lead us to conclude that information theory is not constrained by probability considerations alone to conform to the laws of probability theory as expressed by entropy (an application of probability)? Without an answer to this, does your point have any merit? How so?


That post had nothing to do with conservation of energy. It merely stated that conservation of energy was part of Dembski's excuse for not proving his 'law' of conservation of information, and then Cap'n gave one reason why that copout is a load of bollocks: the conservation of energy has been proven time and time again, while conservation of information has not; one concept has been proven ad nauseam whilst the other is false. Your post seems to indicate one of two possibilities: you are trying to distract the discussion down a rabbit hole to avoid facing the idea that your argument is mathematically unsound and misrepresents information theory, OR you have trouble reading. Based on previous experience with you, neither would surprise me.

 

So, here we have it again:

 

Why did you REALLY ignore this post?

 

Because I don't see how the presence or absence of conservation of information, and its parallel to conservation of energy, has any bearing on probability theory in this discussion of information theory. Perhaps you or Capt'n or Swansont can describe the relevance. Until then I see no reason to address it. I am sorry that you have difficulty recognizing that it was Capt'n who brought in this red herring he calls conservation of information without demonstrating relevance. If you would be so kind as to demonstrate the relevance, I will address the question.


Because I don't see how the presence or absence of conservation of information, and its parallel to conservation of energy, has any bearing on probability theory in this discussion of information theory. Perhaps you or Capt'n or Swansont can describe the relevance. Until then I see no reason to address it. I am sorry that you have difficulty recognizing that it was Capt'n who brought in this red herring he calls conservation of information without demonstrating relevance. If you would be so kind as to demonstrate the relevance, I will address the question.

 

You spoke of humans seeming to be "an exception to laws of information entropy". Having read much of Dembski, who you cite, I recognized the reference to the law of conservation of information. If you're referring to something else, perhaps you could specify what you refer to, because as swansont has noted, the laws of entropy do not help you here. Entropy can easily decrease locally, without violating any law.

 

You referred specifically to "laws of information entropy", but now you dodge to probability theory. Which law exactly is being violated?

 

The analogy to conservation of energy is Dembski's, not mine, and exists as an example.


Not at all. Information, in this sense, is the number and accuracy of variables needed to describe a system. In information theory, random systems need more information to describe them than ordered systems do. This means that a high-entropy system has a higher information content than a low-entropy system (but low-entropy systems can also contain a large amount of information; information content is not equivalent to entropy, despite what you are trying to say). This is basic information theory.

 

There are several definitions of information; the one you chose is not the sense in which I am speaking. Information content in terms of a probability distribution refers to the degree to which alternatives are eliminated, or alternatively the degree to which alternative results remain possible. Large amounts of information correspond to low-probability states. Here is a wiki article describing "Information Entropy" and the "Measure of Information"; I urge you to look at both sections.
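
(For reference, the "measure of information" section referred to defines the self-information of an outcome x as I(x) = -\log_2 p(x) bits, so a lower-probability outcome does carry more bits when it occurs; the entropy H is the average of I over all outcomes.)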

 

The fact is, you are using concepts completely different from information theory, entropy and evolution, and claiming that they are these things. It is as if I were using the religious writings of the Norse to make claims about the Bible: it is a completely different set of concepts you are using, and therefore the conclusions you draw from them are not the conclusions you would reach if you were using the real concepts.

 

Sorry, no. It is your definition and concept of information that is the outlier. Please review the article linked.

 

Until you are willing to use the actual concepts you claim to be using, none of your arguments are going to be persuasive to anyone who knows what these actual concepts are.

 

Please review the wiki page and help me understand where I am misapplying this concept. I may come back to other points in your post once we are on the same page.

 

It is for this reason that creationists are unable to convince people who know about these things to convert to their beliefs. All they do is keep proposing the same strawman arguments to people who can see how obvious these strawmen are. Of course, if one were ignorant of these concepts, the arguments put forward would sound reasonable, and if the strawman arguments were true they would have a point. But as these arguments are false, right from the concepts they are using, they just don't hold any weight with people who are not ignorant.

 

I don't mind that you incorrectly label me and imply I am ignorant. But you should first at least google for, and perhaps consider reading, the wiki page to be sure you know what you are talking about.

 

You spoke of humans seeming to be "an exception to laws of information entropy". Having read much of Dembski, who you cite, I recognized the reference to the law of conservation of information. If you're referring to something else, perhaps you could specify what you refer to, because as swansont has noted, the laws of entropy do not help you here. Entropy can easily decrease locally, without violating any law.

 

You referred specifically to "laws of information entropy", but now you dodge to probability theory. Which law exactly is being violated?

 

The analogy to conservation of energy is Dembski's, not mine, and exists as an example.

 

I was not citing Dembski in this thread. Entropy does apply to information theory, and was applied to it some time ago. Have a look at the Wiki page for more "information" (specifics, that is, reduction in alternatives).

 

I am very aware that entropy can be reduced locally which is why I specifically identified the closed system of interest and included the caveat "without an external source of information".

 

Entropy is derived from probability theory so it is difficult to understand how referring to probability is a dodge.


Cypress said

 

Just as probability theory predicts that systems under the influence of purely physical processes will, over time, migrate to the most probable state, so too would information under the influence of purely physical processes. Thus thermodynamic systems relying on physical processes to transfer energy seem incapable of driving the total system probability distribution to a significantly less probable state. Likewise, physical systems seem incapable of generating significant amounts of new information.

 

I think what cypress is arguing is that thermodynamic systems tend towards the highest-probability distribution, i.e. systems in which the energy is uniformly distributed and where there are no constraints on the degrees of freedom of the molecules. Such a system is said to be in a higher entropy state, and more information is needed to describe it, as the uncertainty in finding the same microstate after some time is greater.

 

Now, applying the same reasoning to the brain, one can argue that we should tend towards ignorance and loss of information rather than the increase in knowledge we normally see in humans. Since there are no constraints on the degrees of freedom of the molecules representing information or knowledge in such systems, one can ask how such systems can exist in a lower entropy state.

 

Now one can see that this argument does not hold, as the brain is not a closed system and can extract energy from outside to maintain a low entropy state.


How would the presumed absence of a conservation of information law lead us to conclude that information theory is not constrained by probability considerations alone to conform to the laws of probability theory as expressed by entropy (an application of probability)? Without an answer to this, does your point have any merit? How so?

 

There is no law of conservation of information. The burden of proof is on you to show that there is.

 

Your wikipedia link makes it clear that entropy is used in the context of data storage, i.e. a system with maximum information entropy cannot be compressed. A uniform system can be described with very few bits, because the data can be compressed. It has low thermodynamic entropy and low information entropy. But a chaotic system requires more bits because it has higher entropy, of both sorts, and that can happen spontaneously.
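
A quick way to see this for yourself (a rough sketch using Python's standard zlib module; the exact byte counts will vary):

    import os
    import zlib

    uniform_data = b"\x00" * 10_000   # a uniform "system": one symbol repeated
    random_data = os.urandom(10_000)  # a chaotic "system": high-entropy random bytes

    # The uniform data compresses to a few dozen bytes; the random data barely
    # compresses at all, because it already sits at maximum information entropy.
    print(len(zlib.compress(uniform_data)))
    print(len(zlib.compress(random_data)))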

 

Which seems to be the opposite of what you claim is true.

 

Even if what you mean is information content, rather than storage, i.e. the data to describe all of the states rather than the compression and storage required, it still fails because we have bosons, whose numbers are not conserved. I can have an isolated system in which I create an arbitrary number of photons, each of which has an energy and a polarization which need to be described. That represents an increase in information.


Large amounts of information correspond to low-probability states.

Here I disagree quite strongly with your interpretation of that Wikipedia article.

 

Take for example these two sets of numbers:

1, 2, 3, 4, 5, 6, 7, 8, 9, 10

 

and

 

8, 8, 3, 10, 2, 9, 4, 9, 3, 2

 

Which has a higher entropy (lower order) and which has a higher level of information?

 

The first one can be described by a simple algorithm (print A=A+1, repeat 10 times). The second I got by rolling a 10-sided die 10 times.
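
In code, the difference looks something like this (a sketch; the second list is just the dice rolls above, stored digit by digit because no shorter rule produces them):

    # The ordered set is regenerated by a tiny rule rather than stored.
    ordered = list(range(1, 11))

    # The rolled set has no rule shorter than the data itself.
    rolled = [8, 8, 3, 10, 2, 9, 4, 9, 3, 2]

    print(ordered)
    print(rolled)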

 

The article you pointed to uses work by Shannon. In his work he describes the entropy of a system as being equivalent to the amount of information needed to describe it. If you don't have enough information to describe it, then you get uncertainties (and this is where I think you have misinterpreted the article).

 

The first set has a simple algorithm that describes how to create it (two instructions, or three if you need to define A first). The second, however, is a random string of numbers (and therefore needs ten instructions to recreate it); as the only way to describe it is to describe each number in turn, the first set has a much higher information density than the second, while the second is a much more disordered set and requires more information to describe.

 

This is in direct contradiction to what you are claiming, and therefore proves you wrong (as in mathematically you are wrong).

 

Yes, the outputs have the same amount of information, but you can't just use the output of a process to determine the amount of entropy the process starts with or produces (if you only know how much it ends with and not how much it starts with, you cannot determine how much it has increased or decreased).

 

Sorry, no. It is your definition and concept of information that is the outlier. Please review the article linked.

When you use concepts in ways they were not intended, it is like trying to use gravitational theory to explain literary criticism. It just does not work.

 

I have read the article and understand it quite well (I was educated as a programmer and system designer and have been programming for over 20 years). Based on my education, you have misinterpreted that article, as I showed you above.

 

Please review the wiki page and help me understand where I am misapplying this concept. I may come back to other points in your post once we are on the same page.

OK, you are using thermodynamic principles directly as information theory. They are two different things. Information "entropy" is about the amount of information needed to avoid uncertainties in the data. Part of this is that processes are important. To apply entropy to information theory correctly, you need to look for the smallest process that will produce the data set you are after. You are not doing this; you are instead just looking at the output. This is a false application of information theory.

 

Is that specific enough?

 

I don't mind that you incorrectly label me and imply I am ignorant. But you should first at least google for, and perhaps consider reading, the wiki page to be sure you know what you are talking about.

As you have demonstrated ignorance of these things, then yes, you are ignorant. Ignorance is not a bad thing; it just means you have things to learn. It is only bad when your lack of knowledge is pointed out and you insist that you are right, even in the face of mountains of evidence that you are not. I am quite conversant with information theory (although I do not use this as proof that I am right, which is why I haven't mentioned it before).

 

I was not citing Dembski in this thread. Entropy does apply to information theory, and was applied to it some time ago. Have a look at the Wiki page for more "information" (specifics, that is, reduction in alternatives).

 

I am very aware that entropy can be reduced locally which is why I specifically identified the closed system of interest and included the caveat "without an external source of information".

 

Entropy is derived from probability theory so it is difficult to understand how referring to probability is a dodge.

No, you were arbitrarily closing a system and then trying to use that as proof of your claims.

 

For instance, you asked me to provide an external source that would allow information to increase locally. If you had read the post at which your question was directed, you would have realised that I did provide a source.

 

You seem to be under the impression that the Earth is a closed system, in that you keep making the claim that entropy (whether information or thermodynamic) cannot decrease, and hence that entropy is violated by evolution. Well, as I keep saying, the Earth is NOT a closed system, and if you arbitrarily think of it as one, then you will get the wrong results (which you have).

 

As the Earth is not a closed system (we get plenty of energy from the Sun), you can't treat it as one. Since it is not a closed system, a local decrease in entropy (whether information or thermodynamic) is certainly possible and does not violate any conservation laws.

 

You keep insisting that it does, but your fundamental mistake is that you keep trying to arbitrarily treat the Earth as a closed system.

 

It is not a closed system, so there is no violation of any conservation laws. Because of the energy input from the Sun, information can increase and disorder can decrease.

 

Thus, your arguments are proved wrong.


There is no law of conservation of information. The burden of proof is on you to show that there is.

 

I don't make a claim that there is such a law. To suggest I have is a straw man. My argument relies on probability and entropy only.

 

Your wikipedia link makes it clear that entropy is used in the context of data storage, i.e. a system with maximum information entropy cannot be compressed. A uniform system can be described with very few bits, because the data can be compressed. It has low thermodynamic entropy and low information entropy. But a chaotic system requires more bits because it has higher entropy, of both sorts, and that can happen spontaneously.

 

Which seems to be the opposite of what you claim is true.

 

The link provides context for entropy and measurement of information content as a function of probability as well.

 

Even if what you mean is information content, rather than storage, i.e. the data to describe all of the states rather than the compression and storage required, it still fails because we have bosons, whose numbers are not conserved. I can have an isolated system in which I create an arbitrary number of photons, each of which has an energy and a polarization which need to be described. That represents an increase in information.

 

Shannon was most interested in changes in information content resulting from transforming, transposing and transmitting data. His purpose did not require him to address the initialization of information or the determination of absolute values of information entropy; his primary concern was the change in information content and entropy during these operations. The theory did require him to address the impact of deterministic processes on information, and that is covered in the wiki article, but it did not require him to address differences in the distribution of information bits. In this limited treatment, any configurations of bits in a string that have equal probability, given that a random process generated the pattern, are equivalent.

However, while this assumption may be valid for his purpose, it seems not to be valid in the general case. Consider the text of an instruction manual as compared to random letters and punctuation. Clearly the instruction manual contains far more information, by virtue of the astronomical number of possible configurations eliminated because they are meaningless or convey a far different meaning than the text contained in the manual.

The analogy to thermodynamic entropy is the initial configuration of energy states in a system of particles. Consider a machine that randomly distributes energy states among the macro particles in this system; the resulting configuration is described by a stream of information bits. By Shannon's theory, all the possible configurations of bits represent equivalent amounts of information and entropy. However, the corresponding thermodynamic system configurations do have different amounts of energy and entropy. This difference in initializing the information is not an issue for Shannon's purpose, since he was concerned with relative differences after initialization as his information system undergoes changes. Random noise as input was as useful for his theory as encoded conversation, despite the dramatic difference between the two data sets, but this represents an incompleteness in the general case, when the generation of the information is of interest, as in the instruction manual example. The general case must consider that information strings have a degree of functional order unto themselves; the degree of order is represented by a probability that differentiates functional order from random distributions, and information theory must measure the amount of functional information.

 

When I offered functional information as an example of something the human mind is apparently capable of generating despite the apparent inability of physical systems and the laws by which these physical systems arise to do the same, I was making a distinction between this functional or specific information (including the words I typed here) and the random noise you have attempted to describe as being the same.

 

Cypress said

 

 

 

I think what cypress is arguing is that thermodynamic systems tend towards the highest-probability distribution, i.e. systems in which the energy is uniformly distributed and where there are no constraints on the degrees of freedom of the molecules. Such a system is said to be in a higher entropy state, and more information is needed to describe it, as the uncertainty in finding the same microstate after some time is greater.

 

Now, applying the same reasoning to the brain, one can argue that we should tend towards ignorance and loss of information rather than the increase in knowledge we normally see in humans. Since there are no constraints on the degrees of freedom of the molecules representing information or knowledge in such systems, one can ask how such systems can exist in a lower entropy state.

 

Now one can see that this argument does not hold, as the brain is not a closed system and can extract energy from outside to maintain a low entropy state.

 

Perhaps, if one could show that imported energy is transformed by the mind into information, thermodynamic entropy could be substituted for information entropy. But even this is insufficient until one can show that the mind makes use of known physical processes and laws alone to perform this transformation; and if that were the case, then one could use these same processes to construct a machine that performs the same function.

 

 

This is in direct contradiction to what you are claiming, and therefore proves you wrong (as in mathematically you are wrong).

 

Think through what I said again. Compression is just one aspect of information theory, and measuring the information content of an uncompressed data set by the resulting bit size of the compression algorithm is not the general method of measuring information content. The general method is as I previously described, and it is discussed in the wiki article as well. When measured as described, it is consistent with your example; it does not contradict it. The first group of numbers belongs to a set with far fewer equivalent permutations than the second, and thus has lower probability, and yet, as you described, it is more informative than the random noise.

 

I have read the article and understand it quite well (I was educated as a programmer and system designer and have been programming for over 20 years). Based on my education, you have misinterpreted that article, as I showed you above.

 

Let me remind you that you also said you have never heard of "information" entropy.

 

I have been developing, designing and implementing computer control systems and data acquisition and analysis systems for over 25 years, and from your initial comments it seems I may know a thing or two more about this subject.

 

 

OK, you are using thermodynamic principles directly as information theory. They are two different things. Information "entropy" is about the amount of information needed to avoid uncertainties in the data. Part of this is that processes are important. To apply entropy to information theory correctly, you need to look for the smallest process that will produce the data set you are after. You are not doing this; you are instead just looking at the output. This is a false application of information theory.

 

Is that specific enough?

 

No, sorry, it is not. How can you be sure that entropy only applies to compression algorithms and uncertainty measurements? You were unaware of information entropy just two posts ago. Information entropy has been applied to cosmology as well; Google it and you will find a number of fascinating articles. I think you will find those applications mirror my use of information entropy.

 

 

 

As you have demonstrated ignorance of these things, then yes, you are ignorant. Ignorance is not a bad thing; it just means you have things to learn. It is only bad when your lack of knowledge is pointed out and you insist that you are right, even in the face of mountains of evidence that you are not. I am quite conversant with information theory (although I do not use this as proof that I am right, which is why I haven't mentioned it before).

 

Two posts ago you admitted ignorance of "information" entropy, but now you are conversant?

 

 

 

You seem to be under the impression that the Earth is a closed system, in that you keep making the claim that entropy (whether information or thermodynamic) cannot decrease, and hence that entropy is violated by evolution. Well, as I keep saying, the Earth is NOT a closed system, and if you arbitrarily think of it as one, then you will get the wrong results (which you have).

 

As the Earth is not a closed system (we get plenty of energy from the Sun), you can't treat it as one. Since it is not a closed system, a local decrease in entropy (whether information or thermodynamic) is certainly possible and does not violate any conservation laws.

 

Does the human mind receive low-entropy information from the Sun that allows Shakespeare to produce his manuscripts and Mozart to compose his scores?

 

You keep insisting that it does, but your fundamental mistake is that you keep trying to arbitrarily treat the Earth as a closed system.

 

I realize the Earth is open with respect to thermal energy. Is it open with respect to functional information? If so, describe in precise terms what form this low-entropy information from the Sun takes and how it is tapped to produce the words I typed.


Cypress said

 

Perhaps, if one could show that imported energy is transformed by the mind into information, thermodynamic entropy could be substituted for information entropy. But even this is insufficient until one can show that the mind makes use of known physical processes and laws alone to perform this transformation; and if that were the case, then one could use these same processes to construct a machine that performs the same function.

 

No, we don't use the imported energy for the production of information; we actually use it to maintain the physical system representing the information in an organized state. We extract information from our surroundings via sense organs. Our brains act as detectors: they receive the input from the sense organs and learn to give an optimized output by fine-tuning the structures at the synaptic junctions. A lot of energy is used to organize the detector that differentiates the incoming information, and this is why the world appears more ordered to us: we reduce the thermodynamic entropy of other systems by extracting information from them, and throw a lot of heat outside to maintain a low entropy state.

 

A better question to ask is whether intuitions are just guesses of the brain, or whether we really access absolute truths present in their own realm.


Think through what I said again. Compression is just one aspect of information theory, and measuring the information content of an uncompressed data set by the resulting bit size of the compression algorithm is not the general method of measuring information content. The general method is as I previously described, and it is discussed in the wiki article as well. When measured as described, it is consistent with your example; it does not contradict it. The first group of numbers belongs to a set with far fewer equivalent permutations than the second, and thus has lower probability, and yet, as you described, it is more informative than the random noise.

You talk about Shannon; have you heard of Shannon complexity? It is what I was talking about. The reason you need to use the smallest algorithm as the measure of complexity in information theory is that small algorithms can produce very complicated outputs. Take, for example, the Mandelbrot set. It is infinitely complex, but there is only a very simple algorithm needed to produce it. Does this mean that the Mandelbrot set must therefore be infinite in entropy too? No.
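
To make that concrete, here is a rough sketch of how little code an escape-time test for the Mandelbrot set actually takes (the resolution and iteration limit below are arbitrary choices):

    # A handful of lines describes an arbitrarily detailed set.
    def in_mandelbrot(c, max_iter=100):
        z = 0
        for _ in range(max_iter):
            z = z * z + c
            if abs(z) > 2:
                return False
        return True

    # Print a coarse text rendering of the set.
    for y in range(-10, 11):
        row = ""
        for x in range(-20, 11):
            row += "#" if in_mandelbrot(complex(x / 10, y / 10)) else "."
        print(row)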

 

It is much easier to produce (at random) a small algorithm than it is to produce a large data set. Therefore, if you want a real measure of the probability that a given data set exists, you have to take into account that there might be a more probable algorithm that could produce it.

 

Taking the Mandelbrot Set again, which is more improbable:

1) An infinite data set that perfectly matches the Mandelbrot set

 

or

 

2) The algorithm that produces the Mandelbrot set, which consists of a few instructions.

 

I know which one would be more probable if I were using a random generator to create bits in computer memory. The algorithm is actually infinitely more probable, because it is equivalent to a finite data set, whereas the Mandelbrot set itself is infinite.

 

This is why you are wrong. You are saying that because the Mandelbrot set is infinite, and the chance of it being created as a whole is infinitely small, the algorithm to produce it must be of the same probability.

 

Clearly it is not, as the algorithm is finite and the set is infinite.

 

Let me remind you that you also said you have never heard of "information" entropy.

Yes, I had not heard of it, but I did read up on what you posted and learned about it. :o

 

Actually, I had heard about it under a different name: Shannon complexity. Yes, the same Shannon you are talking about is the Shannon I am basing my arguments on. However, Shannon complexity clearly states that the final data set is not what is important in determining the complexity (or the chance that it could form from random events), but whether the algorithm is smaller than the data set.

 

Sure, in the case where the data set is smaller than any algorithm that would produce it, you are correct. But as you are ignoring the possibility that an algorithm can produce a data set and might be smaller than it, you are not applying Shannon's theories properly (and that is why you get incorrect results).

 

I have been developing, designing and implementing computer control systems and data acquisition and analysis systems for over 25 years, and from your initial comments it seems I may know a thing or two more about this subject.

And that is why I don't use my experience as proof of my arguments. It does not matter how much experience you have; you can still be wrong.

 

No, sorry, it is not. How can you be sure that entropy only applies to compression algorithms and uncertainty measurements? You were unaware of information entropy just two posts ago. Information entropy has been applied to cosmology as well; Google it and you will find a number of fascinating articles. I think you will find those applications mirror my use of information entropy.

I am not talking about compression algorithms; I am talking about Shannon complexity. Yes, Shannon complexity is used when designing compression algorithms, and in that case you would be correct, but as I never mentioned compression algorithms, and as they are only one application of what I am talking about, you are either misunderstanding what I am talking about or misrepresenting it.

 

It is as I have said before: it is not the complexity of the final data set that is important, as the final data set can be vast (infinitely so in the case of the Mandelbrot set) while the algorithm can be small (finite in the case of the Mandelbrot set).

 

Even if you had to include the physical system needed to run the algorithm, it is still infinitely more probable that a machine and the algorithm to produce the Mandelbrot set would form by chance than that the Mandelbrot set itself would.

 

This is proof that your application of information theory is not correct: to apply it as you have, you have to ignore the fact that the size of a data set produced by an algorithm (and hence the probability that that data set could form) does not relate to the size of the algorithm that produces it (and hence the probability that the algorithm could form by random chance).

 

As your "proof" relies on violating this fact, you can be assured that your "proof" is false.

 

Two posts ago you admitted ignorance of "information" entropy, but now you are conversant?

Yes, it is called learning... :doh:

 

Does the human mind receive low-entropy information from the Sun that allows Shakespeare to produce his manuscripts and Mozart to compose his scores?

No, it receives low-entropy energy, which it uses to create these bits of order. The physical matter that makes us up is also a data store, so organising it one way takes energy (and thus increases entropy), and organising it another way also takes energy. The reason it does is that you erase the organisation it had before (remember that I said it takes energy, and thus increases entropy, to destroy information).

 

I realize the Earth is open with respect to thermal energy. Is it open with respect to functional information? If so, describe in precise terms what form this low-entropy information from the Sun takes and how it is tapped to produce the words I typed.

Information can be created by processes. Processes require energy (if they are less than optimally efficient or if they destroy information). Therefore, running a process to create information will require energy, and thus it will create entropy.

 

Think back to the Mandelbrot set algorithm. This algorithm can be written in a few instructions. The information contained in it can be directly measured as the number of bits needed to encode it. You can work out the probability that that particular algorithm could come about randomly (as each bit can be 1 or 0, you can give it a 50% chance of either).
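
Put as a formula: a specific program of n independent random bits has probability P = (1/2)^n = 2^{-n} of appearing, small but finite, whereas a specific infinite data set has probability zero.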

 

This means you know the information content of that algorithm, and yet the output of that algorithm is infinite, and thus infinitely unlikely.

 

If you are correct and it is the final output of an algorithm that determines the "entropy" of the system, then just writing down the Mandelbrot set algorithm should cause an infinite amount of entropy in the universe. As the Mandelbrot set algorithm has been written down and even executed many times, and the universe is not in a state of infinite entropy, you are obviously wrong. :doh:


I don't make a claim that there is such a law. To suggest I have is a straw man. My argument relies on probability and entropy only.

 

You implied it with your attempt at shifting the burden of proof, and you can't have it both ways. You can't argue that something is true, and also that it is not. If there is no conservation of information, then your argument cannot be valid.

 

Consider the text of an instruction manual as compared to random letters and punctuation. Clearly the instruction manual contains far more information, by virtue of the astronomical number of possible configurations eliminated because they are meaningless or convey a far different meaning than the text contained in the manual.

 

I think this is where the argument fails. Information and meaning are not synonymous. A document written in an indecipherable lost language has no meaning, but it is not devoid of information.


Well, this really changed the way I was thinking about disorder, thermodynamic entropy and Shannon entropy. Thermodynamic entropy is normally interpreted as the extra Shannon information needed to describe the system. We can observe that a physical system is chaotic and disordered, but this doesn't necessarily mean that there is an increase in thermodynamic entropy or Shannon entropy, because if someone comes up with an algorithm of just two or three instructions that describes the physical system very effectively, then one can see that an increase in disorder doesn't necessarily mean an increase in thermodynamic entropy, as the amount of extra Shannon information required to describe the physical system is very small.

 

The decrease in thermodynamic entropy can be compensated for by the increase in entropy that takes place in the process of finding the algorithm.

 

correct me if I am wrong.


You implied it with your attempt at shifting the burden of proof, and you can't have it both ways. You can't argue that something is true, and also that it is not. If there is no conservation of information, then your argument cannot be valid.

 

Once again I repeat, my argument does not rely on conservation of information. I have not argued that conservation of information is a true concept nor have I argued it is false. Probability theory and information entropy arguments do not require conservation of information.

 

I think this is where the argument fails. Information and meaning are not synonymous. A document written in an indecipherable lost language has no meaning, but it is not devoid of information.

 

Functional information can convey meaning, but I agree it is not meaning unto itself. They are not the same, and I don't intend to indicate they are. Furthermore, a lost language is not necessarily devoid of meaning simply because you or I do not understand it. Anyway, conveyance of meaning is just one example of functional information; there are several other connotations.

 

It is much easier to produce (at random) a small algorithm than it is to produce a large data set. Therefore, if you want a real measure of the probability that a given data set exists, you have to take into account that there might be a more probable algorithm that could produce it.

 

I repeat: information content, in its general form, is not measured by the number of bits in the minimal compression algorithm that will regenerate the data set.

 

2) The algorithm that produces the Mandelbrot set, which consists of a few instructions.

 

Yes, this is the compression algorithm required to produce the Mandelbrot set. The degree of compression is the difference in bits between the set and the algorithm.

 

I know which one would be more probable if I were using a random generator to create bits in computer memory. The algorithm is actually infinitely more probable, because it is equivalent to a finite data set, whereas the Mandelbrot set itself is infinite.

 

This is why you are wrong. You are saying that because the Mandelbrot set is infinite, and the chance of it being created as a whole is infinitely small, the algorithm to produce it must be of the same probability.

 

No, that is not what I am saying. Information content is measured, in the general case, by the degree to which alternatives are reduced, not by the number of bits required to generate a compression algorithm or the probability of generating the algorithm by a random process. These are two different constructs: one is the measure of the information contained by the algorithm or data set, and the other is the basis for the measure of compressibility/complexity of the data set (the bits of the algorithm compared to the bits of the output).

 

I am not talking about compression algorithms; I am talking about Shannon complexity. Yes, Shannon complexity is used when designing compression algorithms, and in that case you would be correct, but as I never mentioned compression algorithms, and as they are only one application of what I am talking about, you are either misunderstanding what I am talking about or misrepresenting it.

 

A dog by any other name is still a dog. Your examples are indeed forms of compression algorithms. Often the most successful compression methods produce an algorithm that, when executed, approximately or precisely reproduces the data set. These are compression algorithms.

 

As your "proof" relies on violating this fact, you can be assured that your "proof" is false.

 

No, it does not. You are stuck in a do loop that seems to only work with compression algorithms.

 

No, it receives low-entropy energy, which it uses to create these bits of order. The physical matter that makes us up is also a data store, so organising it one way takes energy (and thus increases entropy), and organising it another way also takes energy. The reason it does is that you erase the organisation it had before (remember that I said it takes energy, and thus increases entropy, to destroy information).

 

Interesting speculation. How can you demonstrate this is scientifically valid? How do you substantiate it? How do you test it?

 

Information can be created by processes. Processes require energy (if they are less than optimally efficient or if they destroy information). Therefore, running a process to create information will require energy, and thus it will create entropy.

 

More interesting speculation. What natural process that does not involve the participation of a mind can you point to?

 

Think back to the Mandelbrot set algorithm. This algorithm can be written in a few instructions. The information contained in it can be directly measured as the number of bits needed to encode it. You can work out the probability that that particular algorithm could come about randomly (as each bit can be 1 or 0, you can give it a 50% chance of either).

 

The instructions are written by a mind (I would be interested to see a random process generate the instruction set, though given sufficient resources it may well be possible), and the machine the instruction set runs on must be designed and constructed by something that also involved a mind. Do you have a natural example?

 

This means you know the information content of that algorithm, and yet the output of that algorithm is infinite, and thus infinitely unlikely.

 

Nonsense. The practical output of the algorithm when executed on a physical system is quite finite.

 

If you are correct, and it is the final output of an algorithm that determines the "entropy" of the system, then just writing down the Mandelbrot set algorithm (whose output is infinite) should cause an infinite amount of entropy in the universe. As the Mandelbrot set algorithm has been written and even executed many times, and the universe is not in a state of infinite entropy, you are obviously wrong. :doh:

 

1. Reproducing the same pattern, or multiples of a pattern, over and over does not generate new information.

2. The information represented by the output of an instruction set is created once, when the instruction set and the machine used to process that set are created. Execution does not generate any new information, no matter how many loops through the instructions.

3. The fact that a mind seems capable of quickly generating significant quantities of new functional information does seem to be unique, and it is in stark contrast to what natural systems alone can accomplish, as entropy seems to represent a constraint except in the case of functional information output from a mind.


No, it does not. You are stuck in a do loop that seems to only work with compression algorithms.

...

 

...

 

More interesting speculation. What natural process that does not involve the participation of a mind can you point to?

 

The instruction set is written by a mind (I would be interested to see a random process generate the instruction set, though given sufficient resources it may well be possible), and the machine the instruction set runs on must be designed and constructed by something that also involved a mind. Do you have a natural example?

Only because compression algorithms are the easiest way to explain and demonstrate the concept. But I will try to explain an algorithm that can produce new information and that is not a compression algorithm (a rough code sketch of these steps follows the list):

 

1) Start with an initial data set.

2) For each data set you have, make many copies of that data set.

3) To each copy of the data set, make a random number of changes of these types:

a) Remove a Bit

b) Add a Bit

c) Change a Bit from 1 to 0

d) Change a Bit from 0 to 1

4) Test each data set against a set of criteria and remove any that score less than the average.

5) Repeat steps 2 to 4 until you have no data sets left or some other criterion is reached.
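
Here is a minimal Python sketch of those five steps. The fitness criterion, the copy count, and the population cap are made-up assumptions chosen only so the toy run finishes quickly; they are not part of the argument itself:

```python
import random

def fitness(bits):
    # Toy scoring criterion (an assumption for illustration only):
    # reward strings whose neighbouring bits alternate.
    return sum(1 for a, b in zip(bits, bits[1:]) if a != b)

def mutate(bits):
    """Step 3: apply a random number of the four edits -- remove a Bit,
    add a Bit, change 1 to 0, change 0 to 1."""
    bits = list(bits)
    for _ in range(random.randint(1, 3)):
        op = random.choice(("remove", "add", "flip"))
        if op == "remove" and bits:
            bits.pop(random.randrange(len(bits)))
        elif op == "add":
            bits.insert(random.randrange(len(bits) + 1), random.randint(0, 1))
        elif bits:  # flip covers both 1 -> 0 and 0 -> 1
            i = random.randrange(len(bits))
            bits[i] = 1 - bits[i]
    return bits

population = [[random.randint(0, 1) for _ in range(8)]]               # step 1
for generation in range(50):                                          # step 5
    population = [mutate(p) for p in population for _ in range(4)]    # steps 2 and 3
    average = sum(fitness(p) for p in population) / len(population)
    population = [p for p in population if fitness(p) >= average]     # step 4
    population = population[:100]   # practical cap (my addition) to keep the run small
    if not population:
        break

print(len(population), "data sets; best score:",
      max((fitness(p) for p in population), default=0))
```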

 

OK. This process can be carried out by natural means, without the need for an intelligence to work it or design it. The ability to add new Bits to the data set means that the data set can grow in size. Also, if the data set defines another algorithm, then the output of this secondary algorithm can produce information itself.

 

This is not a compression algorithm, and yet it is capable of producing information. Also, if the output of this process (the data sets) is itself an algorithm, then the amount of information this can produce is enormous.

 

You might also note that this is the algorithm for evolution.

 

Now to prove that natural things can perform this without the need for intelligent intervention:

 

We know that organic chemicals can be produced by natural chemical processes. They can even be found in molecular clouds in space. They are quite easy to produce without external intelligent intervention. Specifically, I am looking at simple fatty acids and simple nucleotides (one in particular that fits the requirement is phosphoramidate DNA).

 

Fatty acids will spontaneously form vesicles, as one end of each molecule is repelled by water (and attracted to other fatty acids) while the other end is attracted to water. This causes them to line up in a bilayer. You are familiar with this, as soap bubbles do the same thing.

 

Vesicles made of the simpler fatty acids are permeable to small organic monomers, but not to their polymers. This allows the simple nucleotides to penetrate the fatty acid bilayer, and if they polymerise inside the vesicle, they can no longer get out. If more nucleotide monomers enter, they can polymerise into longer and longer chains.

 

If you look at the algorithm I explained earlier, one of the modifications was to add to the data set. Well, this fulfills that requirement.

 

Now, one of the features of phosphoramidate DNA is that, like the DNA we are used to, it can pair up and form a double strand. It has partner nucleotides that it connects with, and this pairing encourages its spontaneous polymerisation. However, it only does this at lower temperatures; when exposed to higher temperatures the pair bonds break (it unzips into the two chains). This means a cycle of heating and cooling (say in a small puddle heated by the warmth of the sun, or by a convection current near a hot volcanic vent) will cause these chains to split and then allow each to bond to a new set of monomers, replicating the matching chain.

 

Here we have replication, another part of the algorithm.

 

When it does this, the pairing is not always exact; sometimes a mistake will be made (this would be the same as changing a Bit from 0 to 1 or from 1 to 0, or even not including a monomer that should have been there). This now fulfills all the changes to the data set described by my algorithm above.
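
A minimal sketch of that copying-with-errors step, purely for illustration: the four-letter alphabet, the pairing table, and the error rate are assumptions standing in for the chemistry described above, not measured values.

```python
import random

# Assumed four-letter alphabet with Watson-Crick-style partners,
# a stand-in for the phosphoramidate DNA pairing described above.
PARTNER = {"A": "T", "T": "A", "G": "C", "C": "G"}

def copy_strand(template, error_rate=0.01):
    """One cool phase: monomers pair up along the template and polymerise.
    Occasionally the wrong monomer bonds, or one is skipped entirely."""
    copy = []
    for base in template:
        roll = random.random()
        if roll < error_rate:          # mispairing: a Bit effectively flips
            copy.append(random.choice("ATGC"))
        elif roll < 2 * error_rate:    # omission: a Bit is removed
            continue
        else:
            copy.append(PARTNER[base])
    return "".join(copy)

strand = "ATGCGTAC"
for cycle in range(3):   # heat separates the strands; cooling lets copying happen again
    strand = copy_strand(strand)
    print(strand)
```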

 

Also, a fatty acid vesicle can grow if it encounters other fatty acids in the environment, by absorbing them into itself. When these vesicles get too big, they can easily split into two or more vesicles, and the way they do this does not cause the vesicles to lose any of their contents.

 

So now we even have replication of whole vesicles. All we need is some way for them to be tested against a set of criteria.

 

Well, one of the things about the way fatty acid vesicles absorb fatty acids from other vesicles is that a vesicle with a higher osmotic pressure will steal them from one with a lower osmotic pressure. The nucleotide polymers produce ions that increase the osmotic pressure of the vesicle they are in. Thus a vesicle with more nucleotide will be able to steal fatty acids from one with fewer nucleotides.

 

So what we have here now is a selection process that eliminates vesicles with lower osmotic pressure.

 

Also, some of the nucleotides have extra chemical effects that can help or even hinder the vesicle they are in. Some, as individual monomers or as specific sequences, can catalyse the formation of other chemicals (including fatty acids and even nucleotides), increase or decrease the osmotic pressure, and so on.

 

However, these sequences are not likely by themselves, but if they do form, the advantage they provide will give the vesicle they are in (and any vesicle that breaks off from it) an edge over other vesicles. Again, this is competition, and a kind of arms race: successful vesicles will become more numerous, but then they will end up competing with each other, both to "eat" weaker vesicles and to be the better vesicle at stealing fatty acids.

 

If you give this a moment's thought, you will see that only the vesicles that have "information" (that is, sequences of nucleotides that give these advantages) will do better than those that don't. This means that over time more "information" will be created. There is no intelligence, no mind at work here; this is straightforward chemistry, and it relies on thermodynamics to work (so it doesn't even violate that - remember there is a heat/cool cycle that is needed, so there is energy input to and output from the system, and thus even though we are getting a more organised system locally, in the larger system there is an increase in disorder/entropy).
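
A deliberately crude toy simulation of just the selection step (my own sketch; the starting numbers, the pressure rule, and the split threshold are all made-up assumptions, not real chemistry) shows the effect: vesicles whose contents raise their osmotic pressure steal membrane from the rest, so over many rounds the surviving population carries more polymer on average.

```python
import random

class Vesicle:
    def __init__(self, fatty_acids, polymer):
        self.fatty_acids = fatty_acids   # membrane material
        self.polymer = polymer           # amount of nucleotide polymer inside

    def osmotic_pressure(self):
        # Toy assumption: pressure simply grows with the polymer content.
        return self.polymer

vesicles = [Vesicle(fatty_acids=100, polymer=random.randint(1, 10)) for _ in range(20)]

for step in range(500):
    # Imperfect copying: polymer content drifts up or down a little.
    for v in vesicles:
        if random.random() < 0.3:
            v.polymer = max(1, v.polymer + random.choice((-1, 0, 1)))
    # Competition: the higher-pressure vesicle steals membrane from the lower one.
    a, b = random.sample(vesicles, 2)
    winner, loser = (a, b) if a.osmotic_pressure() >= b.osmotic_pressure() else (b, a)
    taken = min(10, loser.fatty_acids)
    winner.fatty_acids += taken
    loser.fatty_acids -= taken
    # A vesicle with no membrane left is gone; an oversized one splits in two.
    vesicles = [v for v in vesicles if v.fatty_acids > 0]
    for v in list(vesicles):
        if v.fatty_acids > 200:
            half = v.fatty_acids // 2
            v.fatty_acids -= half
            vesicles.append(Vesicle(half, v.polymer))
    if len(vesicles) < 2:
        break

print("average polymer per surviving vesicle:",
      sum(v.polymer for v in vesicles) / len(vesicles))
```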

 

Does this satisfy your need for a non-compression, naturally occurring system that increases information and does not violate entropy?

 

Interesting speculation, how can you demonstrate this is scientifically valid? How do you substantiate this? How do you test it?

I think the example I just gave shows this quite nicely; it has also been confirmed in the lab (see Dr Jack Szostak's work for more information).

 

Nonsense. The practical output of the algorithm when executed on a physical system is quite finite.

Maybe I should have said: "can know".

 

But for many algorithms like this it is possible to know the extent of the data set produced, which means you can know many things about the data set, and even calculate the parts of it you want as you want them. This would allow you to "know" the data set; it just takes time and effort (remember, I never said you would instantly know the data, just that you can know the data set).

 

1. Reproducing the same pattern, or multiples of a pattern, over and over does not generate new information.

2. The information represented by the output of an instruction set is created once, when the instruction set and the machine used to process that set are created. Execution does not generate any new information, no matter how many loops through the instructions.

3. The fact that a mind seems capable of quickly generating significant quantities of new functional information does seem to be unique, and it is in stark contrast to what natural systems alone can accomplish, as entropy seems to represent a constraint except in the case of functional information output from a mind.

No, it does not. You are right there; but as I never said to repeatedly copy the same sub-set of data, this argument against me is a pure strawman.

 

Also, as I showed in the example earlier, it is indeed possible to produce new information without the need for a mind and without violating entropy (informational or otherwise).

 

What you are forgetting is that the mind is a process. It takes in information from the environment and from our genes, does some processing on that information, and produces an output. If you have ever heard of a neural network (as in a computer program), then you know that the processes our brains carry out match up with these models, and as these neural networks are run as algorithms on a computer, we can say for certain that the way the neurons in our brain work is a process and an algorithm (although a very complex one).
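
For reference, the basic unit of such a neural-network model is tiny. A minimal sketch of a single model neuron, with made-up weights purely for illustration:

```python
import math

def neuron(inputs, weights, bias):
    """A single model neuron: a weighted sum of its inputs pushed through a
    squashing function -- an ordinary, fully mechanical computation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # logistic activation

# Example with made-up weights: two inputs in, one graded output out.
print(neuron([0.5, 1.0], weights=[0.8, -0.3], bias=0.1))
```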

 

So, even if you were correct, the result would end up proving you incorrect. This is a problem for you, as the only way you can win is to lose the argument. You have to prove yourself wrong to win. This paradox means that it is impossible for you to be right. It is logically impossible for your position to be correct. You are fighting a losing battle, and your greatest opponent is yourself.


Once again I repeat: my argument does not rely on conservation of information. I have not argued that conservation of information is a true concept, nor have I argued that it is false. Probability theory and information entropy arguments do not require conservation of information.

 

 

 

I'm not saying that you are relying on a conservation law. I thought you were trying to show it to be so. Are you, or are you not, arguing that information cannot spontaneously increase?


Are you, or are you not, arguing that information cannot spontaneously increase?

Cypress, this is what you appear to be saying, but you have it so couched in imprecise terminology and apparently moving goal posts that I continue to suspect you are deliberately obscuring your intent. If that is not the case then I - and a whole tranche of others - are mistaken. You might, therefore, consider trying to be clearer in future. Your present style just doesn't cut it.


Cypress, this is what you appear to be saying, but you have it so couched in imprecise terminology and apparently moving goal posts that I continue to suspect you are deliberately obscuring your intent. If that is not the case then I - and a whole tranche of others - are mistaken. You might, therefore, consider trying to be clearer in future. Your present style just doesn't cut it.

 

I am sorry you and others consistently find my posts difficult to interpret, as I try to be clear and precise, though I am aware I don't often succeed. In this portion of the thread, I make the point that known physical-only processes do not reduce information entropy when inputs and outputs are included. I am not making any statement, positive or negative, about spontaneous generation of unorganized, very-high-entropy information. Noise is an example of high disorder, and I am not suggesting that physical systems with access to high-entropy inputs can't output net information configurations that are equal or higher in entropy. I also note that a mind does, however, seem capable of generating low-entropy functional information without an apparent external source of low-entropy information.

 

If swansont believes he can offer examples of physical-only systems that output high-entropy noise, which some define and label as a particular kind of information, I am not interested in disputing that point, as my argument does not depend on it or its negation. If he argues that physical-only systems take in high-entropy information and output information with lower net entropy, I am interested in an example, because I have not heard of one.

