
cypress

Senior Members
  • Posts

    812
  • Joined

  • Last visited

Everything posted by cypress

  1. Since there are follow-on feed-forward and feedback effects, the system is not as simple as you make it out to be. We don't understand these follow-on effects, so there are additional excluded energy fluxes in the balance that your model neglects. You can be wrong without having to show that mass and energy balances are violated. No, you have not included the following: 4) changes to atmospheric layers set up follow-on effects that are not accurately characterized; 5) these feed-forward and feedback effects may be positive or negative; 6) the final influence is unknown. Empirical data indicate that the final impact of increasing atmospheric CO2 from 280 to 390 ppm is likely between -0.2 and +0.4 C. The skeptic argument is that more information is needed to improve this estimate, and it may not be effective to attempt to reverse the trend in CO2.
  2. It is a just-so story because you are projecting. You point to a very few simple but necessary conditions and imply that they are sufficient to explain life from non-life. I readily accept what chemical systems do accomplish, but this is like claiming that one can walk from California to Hawaii because you have witnessed someone taking the first 50 steps into the Pacific Ocean. I am willing to stipulate that chemical processes alone account for every one of the hundred or so basic building blocks of biological systems, past and present. Even then you have not come close to explaining the first self-replicating chemical system. You moved the goalpost and explained something much simpler. A just-so story is a potential model of reality too, but both can be hopelessly incorrect. I am looking for validation, but you have nothing to offer that validates the model. I have previously explained why models do not substitute for evidence. No, but your presupposition may be coloring your judgement. Yes, and it demonstrates that a model can be constructed to mimic a just-so story. It does not demonstrate that the narrative is correct. I do not suspect or allege that hidden code was inserted. I do know that the model's underlying design guaranteed success by its very nature. Genetic algorithms are an implementation of how the designer believes evolution proceeds. It is unknown, and unlikely, that the processes that resulted in observed biological diversity proceed according to the genetic algorithms that have been designed so far. If they do, then we should conclude that the diversity was designed, since all algorithms offered thus far rely on information inserted into the systems by their designers. The theory of evolution posits that the process is not teleological, that it has no purpose, and that it did not have a designer.
Your experiment did have a designer, there was a purpose for it, and most critically, it can only succeed because of information inserted into the algorithm by the designer. In addition, my issue with evolution is that the processes cited, natural selection acting on genetic errors, do not produce the results that the theory predicts, and thus it is reasonable to conclude that some other process or processes were involved. Genetic algorithms, by virtue of their design and their use of design to inject information into the algorithm, suggest that the other process could be design. I think there may be other processes in play, but I note that self-replicating systems require large amounts of functional information as a blueprint for components and controls, and information-entropy considerations also demand a source for high-order feeds of information. Probability alone is not enough. Entropy considerations argue against this. To claim otherwise is similar to the snake-oil merchant claiming to have a stove that can extract energy from the air around it to heat your coffee. Random events are not enough; one must have a source of equal or higher order. Entropy considerations applied to information indicate that random processes acting on large sets can only lead to equal or lower states of order. Design processes are capable of generating new information. This is well known. Computers work because they are designed; algorithms work because they are designed to work. I suppose evolution may have included design processes in the past, and that past design explains observed diversity, but now the design element seems absent, which could explain why genetic error and natural selection are observed only to produce adaptive variations of existing components. Is this what you are suggesting? You did not. Showing involves producing an actual working system that was not designed but occurred without any aid. Nonsense.
A handful of the many required building blocks, yes; a natural self-replicating system, inclusive of a blueprint or plan to replicate from, not even close.
  3. Ok, on the second page you state that there is no shaft work, but is it true that there is no work done on the environment surrounding your system? In your setup you defined the system boundary as the volume of the tank, and by that definition no work is performed within the tank, but there is mass and energy leaving the tank. This definition makes the problem harder because you have variable mass and energy fluxes, which make the equations difficult to solve. You are missing the mass and energy balance equations, which is why you describe the problem as indefinite. I was trying to get you to redefine the system to make mass constant. Try defining your system as the gas, regardless of where it resides. When you do this, you have the gas expanding into and displacing volume in the space surrounding the tank, and thus there is work being done.
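The boundary-work point above can be sketched in a few lines; with the system defined as "the gas, wherever it resides", the escaping gas does work displacing the constant-pressure surroundings. The numbers here are illustrative assumptions, not values from the original problem.

```python
# Sketch: work done by a gas expanding against constant-pressure surroundings.
# All numeric values are illustrative assumptions, not the original problem's data.

P_surr = 101_325.0   # Pa, ambient pressure the escaping gas pushes against
V_initial = 0.0      # m^3 occupied outside the tank before the valve opens
V_final = 0.5        # m^3 occupied outside the tank after expansion

# Boundary work done on the surroundings: W = P_surr * (V_final - V_initial).
# This is the term that vanishes if the system boundary is drawn at the tank wall.
W_by_gas = P_surr * (V_final - V_initial)

print(f"Work done on the surroundings: {W_by_gas:.1f} J")
```

Redrawing the boundary this way trades variable-mass flux terms for a simple P·dV work term, which is the simplification suggested above.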
  4. It is a conservation of energy problem, but it is about recognizing the conditions under which the system is changed. I reviewed your work, and it seems the error you made was in properly characterizing the system. I was trying to get you to make distinctions about the system based on the various scenarios I suggested, but perhaps we should return to the initial problem and review the conditions specified by the problem statement. You have already noted that dU = dQ + dW, and I suspect you know or have derived that dW = -PdV. Finally, you are aware that enthalpy, the total energy in a thermodynamic system, is H = U + PV, which represents U, the energy required to create the system, plus PV, the energy required to make room for the system in its surroundings. Thus dH = dU + PdV + VdP, which for an ideal gas equals the constant-pressure heat capacity of the system times the temperature change, nCpdT. Let's review each of these components in light of the problem statement again. First, carefully define your system and identify its boundary. What is the system? Next, the change in heat energy dQ: from the problem statement, is heat energy being added or removed? Third, the work energy dW: is the volume of the system changing? Does this help?
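The bookkeeping above (dU, dH, and H = U + PV) can be checked numerically for an ideal gas; this is a minimal sketch with assumed values (one mole of a monatomic gas, a 10 K change), not the original problem's data.

```python
# Sketch: first-law bookkeeping for n moles of an ideal gas (assumed values).
R = 8.314            # J/(mol*K), gas constant
n = 1.0              # mol (assumed)
Cp = 5.0 / 2.0 * R   # J/(mol*K), constant-pressure heat capacity, monatomic gas
Cv = Cp - R          # J/(mol*K), since Cp - Cv = R for an ideal gas

dT = 10.0            # K, an assumed temperature change

dU = n * Cv * dT     # change in internal energy
dH = n * Cp * dT     # change in enthalpy, dH = dU + d(PV) = dU + n*R*dT

# H = U + PV is consistent term by term: dH - dU should equal n*R*dT
assert abs(dH - (dU + n * R * dT)) < 1e-9

print(dU, dH)
```

The assertion makes the point in the post explicit: the PV term is exactly the extra "room-making" energy separating dH from dU.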
  5. I agree use of the tables would not be useful for understanding in the long run. Your approximation is not a good approach because it will give you only an approximate answer, and it does not contribute to understanding the concept being explored by the problem. I would mark you down for it. Let's explore the third approach, because I believe it does illustrate the concept. Let's replace the relief valve with a turboexpander, which extracts work from the expanding gas. How would that change the final temperature, pressure, and volume of the gas in the tank? Based on the answer to that question, tell me how the system is similar to and different from the chamber as described, only with a movable piston held by a pin that is then allowed to move from one point to another, with no heat transfer, until the final pressure is 350 kPa. What about a container that is separated by a membrane from an evacuated chamber, where the membrane is then removed such that at the end the pressure is 350 kPa? Does this give you any ideas of how you might set up the formulas?
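The piston and membrane scenarios above differ in exactly one respect: whether the expanding gas does work. A minimal sketch of the two limiting cases, with assumed initial conditions (only the 350 kPa final pressure comes from the problem statement):

```python
# Sketch: two adiabatic paths to the same lower pressure (ideal gas).
# Initial state and gamma are illustrative assumptions; only P2 is from the post.
gamma = 1.4          # ratio of heat capacities, assumed diatomic gas
T1 = 300.0           # K, assumed initial temperature
P1 = 1000e3          # Pa, assumed initial pressure
P2 = 350e3           # Pa, final pressure from the problem statement

# Case A: membrane removed, free expansion into vacuum.
# No work, no heat, so dU = 0 and an ideal gas ends at the same temperature.
T2_free = T1

# Case B: quasi-static adiabatic expansion against a piston (work extracted):
# T2 / T1 = (P2 / P1) ** ((gamma - 1) / gamma)
T2_piston = T1 * (P2 / P1) ** ((gamma - 1.0) / gamma)

print(T2_free, T2_piston)
```

The gap between the two final temperatures is the work term; deciding which case the relief valve resembles is the heart of the exercise.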
  6. I believe the answer to your question lies in an understanding of sampling theory. In the DFT setting, x(t) can be reconstructed by summing shifted and scaled copies of the sinc function; this should provide some understanding of the time tn for a given sample value x[n]. Here is a link that may help you understand what is being asked of you.
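The shifted-and-scaled-sinc reconstruction mentioned above can be sketched directly; the tone, sampling rate, and evaluation time below are assumptions for illustration.

```python
import math

# Sketch of Shannon sinc interpolation: x(t) = sum_n x[n] * sinc((t - n*T) / T).
# Signal, rate, and evaluation point are illustrative assumptions.

def sinc(u):
    """Normalized sinc: sin(pi*u)/(pi*u), with the removable singularity at 0."""
    return 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)

f = 5.0                       # Hz, a tone well below the Nyquist limit
fs = 100.0                    # Hz, sampling rate
T = 1.0 / fs                  # s, sample spacing

samples = [math.sin(2 * math.pi * f * n * T) for n in range(200)]

def reconstruct(t):
    # One shifted, scaled sinc kernel per sample, summed
    return sum(x_n * sinc((t - n * T) / T) for n, x_n in enumerate(samples))

t = 0.5137                    # an off-grid time inside the sampled interval
print(reconstruct(t), math.sin(2 * math.pi * f * t))
```

With enough samples, the reconstruction at an off-grid time matches the underlying signal closely, which is the sampling-theorem point the question is driving at.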
  7. This can be solved two ways. You can continue to use the ideal gas law, and as you note you will have to integrate, since temperature is not constant. Or you can use the thermodynamic tables for air, which are empirical but do the work of integrating for you. Are you familiar with the tables, and are you allowed to use them for this problem, or must you do the math by hand? If you must do it by hand, I can help you find the error in your attempt. Just let me know. A third and easier solution would be to note that the system can be modeled as a simpler case. Since the system is to be treated as adiabatic, how is use of the relief valve different thermodynamically from expansion by, say, a piston? After noting the difference (if any), can you model this as simple adiabatic expansion? Where is the work performed in the relief valve case?
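The "you will have to integrate" route above can be sketched numerically: integrate the boundary work P dV along an adiabat and compare it against the closed form. All state values below are illustrative assumptions.

```python
# Sketch: integrating W = integral of P dV along an adiabat P * V**gamma = const,
# compared with the closed-form result. Values are illustrative assumptions.
gamma = 1.4
P1, V1 = 1000e3, 0.1          # Pa, m^3, assumed initial state
V2 = 0.3                      # m^3, assumed final volume
const = P1 * V1 ** gamma      # adiabat invariant

# Numeric integral of P dV with the trapezoid rule
N = 100_000
dV = (V2 - V1) / N
W_numeric = 0.0
for i in range(N):
    Va = V1 + i * dV
    Vb = Va + dV
    W_numeric += 0.5 * (const / Va ** gamma + const / Vb ** gamma) * dV

# Closed form for adiabatic work: W = (P1*V1 - P2*V2) / (gamma - 1)
P2 = const / V2 ** gamma
W_closed = (P1 * V1 - P2 * V2) / (gamma - 1.0)

print(W_numeric, W_closed)
```

The two results agree closely, which is why the tables (or the closed form) can stand in for doing the integral by hand.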
  8. You seem to have some basic idea of the answer, but I tend to agree that you should take a step back to the fundamentals of thermodynamics, as alpyurtsever implied. These definitions are not enough, as you suspected, and they are incomplete or inaccurate for this purpose because they don't adequately address the context of your instructor's question. Entropy laws are exact; furthermore, they are based on probability theory. The intent is to provide a model for how ordered systems behave when they are driven by random processes. Since interactions of molecular systems are governed by Brownian motion, which is a random process, the distribution and direction of heat energy flux during these interactions is prescribed by the behavior of random processes and the outcomes predicted by probability. In a sense it is, so let's explore this further, since it provides a hint at the question being posed. Heat and work are both measures of energy, but think about the distinction between work processes and heat transfer processes in the context of randomness. Do you detect any distinction? If you can, do you see why it is not that work was purposefully left out, but that work "falls" out due to this distinction? You are correct that the second law, involving entropy, was originated in part to model the direction of flow of heat (or of any ordered system under influence from random processes), apparently always in one direction at the macro level (when dealing with sets of particles large enough that probabilistic norms hold). Given that this is the purpose, have a look at the links offered, find the equation for the relationship between heat energy and entropy, and use it to add context to the questions I posed in the previous paragraph. The answer lies within these considerations. It is my hope that this will help you answer the question posed by your instructor.
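The heat-entropy equation hinted at above (for a reservoir at temperature T, transferring heat Q changes its entropy by Q/T) makes the one-way direction explicit. A minimal sketch with assumed reservoir temperatures:

```python
# Sketch: why heat flows hot -> cold under the second law (assumed values).
# For a reservoir at temperature T, receiving heat Q changes its entropy by Q/T.

Q = 100.0        # J of heat transferred (assumed)
T_hot = 400.0    # K (assumed)
T_cold = 300.0   # K (assumed)

dS_hot = -Q / T_hot          # hot reservoir loses heat, entropy falls
dS_cold = +Q / T_cold        # cold reservoir gains heat, entropy rises more
dS_total = dS_hot + dS_cold

# dS_total > 0: the hot-to-cold direction is the one overwhelmingly favored
# by probability; the reverse transfer would make dS_total negative.
print(dS_total)
```

Note that work has no Q/T term at all, which is one way to see how work "falls out" of the entropy bookkeeping rather than being purposefully left out.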
  9. I don't see any way to objectively establish your statement as any more factual than pioneer's. Can you describe a way to show that these animals definitely have the same characteristics of mind as humans, including what is widely understood as the ability to exercise choice and make contingent plans? Victims of brain damage do act in ways that are more consistent with this description of duality; Alzheimer's patients in particular do. Furthermore, there is abundant evidence that the mind is able to influence physiological changes to the brain. The placebo effect is also an indication that the mind is able to influence the body independently. These are more than whispers. For those who are interested, there is a fair amount of literature in favor of duality, but since this topic is not primary to this thread I will leave it at that. Your bias is showing; many, many people strongly disagree that some new half-baked neurological speculations paint an accurate picture of the mind. You have constructed a straw man from my response to a poster other than yourself, and now you are simply attempting to knock it down. My example demonstrated that Edtharan's claim that the mind can only be a physiological product of the brain, because there are no alternatives, is incorrect. I offered one to demonstrate that Edtharan was incorrect. As I have previously explained, I don't need to show that it is correct to achieve the purpose for which the alternative was offered; you have admitted that as well. I now add that further discussion takes us off the primary point of this thread. If you go looking through the literature you will find it is far better supported than your bias will admit. No, it was introduced to show that Edtharan's truth statement was incorrect. The literature is so full of these discussions, particularly in the past 100 years, that it is shocking you would deny it.
Again, though, traipsing further into the idea of duality is off topic; the purpose of introducing this idea was as described, now thrice. Something else. Since morality is the sense of right and wrong, it is a belief or thought, while moral behavior is an act. The point was that we don't know if animal behavior that is labeled as "moral" is a result of morality. It may well be a result of programming.
  10. In some ways they are improvements to Edtharan's example, but as you confirm, they also have the same primary issue I called out. They begin with a configuration that meets the criteria defined by the algorithm's designer as "functional", and thus they fail at that level. Secondarily, they fail because, as Marks and Dembski point out, the information they import more than adequately accounts for the growth in functional low-entropy information as the algorithm proceeds. It is because it demonstrates my point that minds can import low-entropy information into systems so that these systems output functional low-entropy information, but without a mind, physical systems do not output functional low-entropy information. Entropy is based on probability theory, which does not preclude modest-probability events such as you describe from occurring, beyond what is expected from the resources available. You read the articles, so I am surprised you overlooked this. Entropy law sets the overall direction for a large system of events, but it does not prohibit a portion of the system from becoming more ordered at the expense of another part of the system. Your example fits these observations. Can you point to observations that what you described occurs regularly and continually across the entire system? Alternatively, the system could be receiving an infusion of low-entropy information that allows for the system-level increases in order. Can you explain the source of this low-entropy information? If you can't, your example seems irrelevant.
  11. Alternatively, mind may not be a product of physical law, nor constrained by it.
  12. I addressed this issue of definition previously, and most recently in the post you are replying to. This is not the sense in which the OP asked the question. I don't find it a very useful definition for addressing the question asked. Objective evidence, please. I have discussed the problem of applying math and other models invented by humans: a model of reality is not itself reality. I don't need to objectively determine if you are self-aware. It is self-evident that I am self-aware. I can safely state self-awareness by noting: I think, therefore I am. Morality and moral behaviors are two different things. Again, you are confusing morality with behavior. Irrelevant; I don't make this claim. I conclude that you have little interest in the context of the original post. If you choose to wax on about classifying humans according to a definition not intended in the original post, don't let me stop you. I am pretty sure I understand set theory. I acknowledge that you can define a set called "animals" that includes humans, while humans remain in other sets that don't include other "animals". Set theory is not the question posed in the original post. I'm not sure how this demonstrates bias on my part. I have not claimed mind is better than a long tail. I do claim it is different. There you go talking about behavior again. Let's talk about the cause of that behavior instead. It was offered as an alternative, as a way to demonstrate that your truth claim was not truth but instead just one possibility. I have no need or desire to demonstrate that my suggested alternative is correct, as the mere possibility is sufficient for the purpose for which it was offered. I would be surprised to learn that you don't understand the distinction between soul and body. Though I am quite aware that some wish to claim the distinction is an illusion, the more honest of these people seem at least to admit the dichotomy.
  13. You are moving the goalpost. Your example of evolution presupposed a functional system that met a specific criterion. By linking the example to evolution you implied this criterion was reproductive success. This requires a functional system as an input. Your next just-so story, that chemicals can and do form into a functional self-replicating system, is not much different from the just-so story of the backyard engineer. I understand exactly your just-so story. I am asking you to objectively demonstrate it. Metaphysical beliefs rely on logic at a minimum, and it is logical devices that you employ in your speculation. Science, however, requires objective, repeatable demonstration by citation of observable processes presently in operation. You have not made a scientific argument, because your narrative is metaphysical. I'm sorry, mathematics is a tool invented by the human mind, a model that attempts to describe the universe. Math is from the mind; it is not reality, it models reality. The model you call the "Ultimatum Game" is an invention of the designer of the model, and it shows only what the designer intended it to show. Models can often mimic reality, but they can't demonstrate reality. "Experiments" run on models that are designed to produce a particular outcome, irrespective of whether the process is correct, most often lead one to a conclusion intended by the designer. This is a form of confirmation bias. Can you offer an objective test, relying on known and observed processes, that confirms an evolutionary source for the studied trait? I am aware of one attempted test for evolution, whereby Siberian silver foxes were bred in captivity to see how long it would take to domesticate them. It turns out that they were able to produce domesticated foxes in just three generations, indicating that the behavioral traits already existed in the wild foxes and required only genetic inheritance, not evolution, to produce.
Even if your game model is correct, it does not tell us the process by which the behaviors are produced. They could be learned, or inherited, or evolved. When one uses models (not evidence) as the best "evidence" to support an idea, it seems like a sure sign the idea is weak. The model demonstrates that behaviors that exist in a designer can be inserted into a model by careful design of a game. Relevance? In light of the above discussion of models, how does a model show that something physical, something other than a model, does generate a sense of right and wrong? One can develop a model that demonstrates how a creator instills morality into the created, and it would reproduce observed behavior precisely, but it would be rejected by opponents despite its predictive capability. The fact that the designer of this model would claim to use the same physical tools your model uses, and to call on processes with sources no worse explained than those your model used to define its behavior, indicates the weakness of using models such as these. Your model is no better. Nonsense. I only ask for a scientific answer to the question: one with a causally adequate explanation. I do not say it is impossible or even unlikely; I simply note that in attempts to observe the claimed processes in action, they don't result in the claimed outcome. There are many ways to compress data sets. Often the most efficient is to devise an algorithm that, when executed, produces back a data set many orders of magnitude larger than the number of bits in the algorithm. This is one form of compression algorithm. Your challenge is to describe how this algorithm was designed in the first place. Even if it is true (and this is not established), the branch of computing does not tell us how these instructions came to exist in the first place. If we take the computer system as the analog, we would conclude it was designed.
Since models do not establish causal explanations (unless you also accept a model that includes a creator, in which case we have two explanations), we don't have an adequate explanation. Have you established that food is a source of low-entropy information? Have you established that thermodynamic entropy substitutes for information entropy? Have you established that the sun is a source of low-entropy information? If not, then it is you who is having difficulty staying on point. Yes, I have, and I find zero examples of any known evolutionary pathway with greater than three selectable steps. Perhaps you can offer one.
  14. Not all distinctions between humans and other biological organisms are arbitrary. Many are objective, measurable differences. Introspection/self-awareness, morality, and altruism are examples of characteristics that are objectively known only to exist in humans. Assigning differences as arbitrary is a common tool of confirmation bias. If one chooses to downplay differences, the human mind is quite capable of deluding itself into seeing it that way. We can't be sure the animals are deluded, as other explanations are possible. We can only observe behavior and deduce cause based on our knowledge of human behavior. You are projecting. Since you admit that making such distinctions can involve delusions on either side of the argument, how can you be sure that your argument is not a result of delusion? Even one example of an objective difference is enough to show you are incorrect. Physiologically we are the same as some organisms and different from others. If you choose to define "animal" as those organisms that are physiologically the same as humans, then we are indeed animals by that definition, but this approach fails to address the nature of the question posed by the OP. Your bias seems to limit your ability to consider alternatives. Here is one alternative that explains what we observe as well as or better than your preferred explanation. If the brain is the medium by which mind interfaces with the environment, and the brain is damaged, then the brain will be unable to project the mind as it is. With a damaged brain, the mind also will be unable to perceive and respond as it previously did. It will be impaired by false and incomplete signals and data. Consider this analogy as a test of the idea: the television is not the program, but if the television is damaged, the program will be obscured or not come through at all.
  15. The question is poorly framed because there are multiple and inconsistent definitions of information. Some definitions allow random noise to be counted as information, and there are many ways incoherent, unstructured, high-entropy noise can be generated, since one can tap into a near-endless source of high-entropy signals as a source for generating noise. I answered that functional information is not spontaneously generated by processes that are known to reduce to material and physical laws alone, and I note that it makes sense that it is not spontaneously generated, likely due to entropy/probability considerations. Do you have an observed example of spontaneous generation of coherent, highly ordered, specified functional information except by a mind? Edtharan's simplistic examples of what he describes as evolution fail on several levels, most notably the fact that they require functional information to begin with, so at best they slightly reconfigure existing functional information. His just-so story about how functional information might first have formed is not much different from the backyard engineer peddling a half-baked design for a machine that supposedly taps into high-entropy systems to produce low-entropy output and useful work. Edtharan's claim, first promoted by immortal several posts prior, that the mind takes in high-entropy information from the surroundings and high-grades it, is a variation on the backyard engineer and the half-baked design. To review where we are with this argument: swansont agreed that Hitler and Stalin acted immorally but also claimed that physical processes and laws are amoral. I responded that morality has no explanation in evolutionary theory and that one should conclude that evolution as framed is incorrect. He offered no causal explanation of how morals could have emerged from physical processes and laws but instead asked for an example of an apparent violation of physical laws.
I offered the mind as an example, since it seems capable of generating large quantities of low-entropy information without a known source of the same. This discussion of morals is not intended to imply that evolutionary thinking leads one to behave immorally; instead it is to point out another of many significant inconsistencies in the theory that makes the grand claim that all diversity, including behavior and mind, is accounted for by known and observed evolutionary processes. This claim is simply false, as demonstrated by the several related topics discussed in this thread. If one wishes to claim that evolution accounts for adaptations that allow populations of similar organisms to continue in the face of habitat changes, I agree, but at this time the grand claim is not supported by the evidence normally required of scientific arguments. Previously the discussion centered around the fact that known and observed evolutionary processes have not produced any examples of the continuous evolutionary pathways that the theory requires, despite the 80 years biological researchers have been attempting to identify them. Despite the trillions of trillions of observation opportunities and the tens of thousands of generations represented by the organisms produced in labs, we still have no confirmed example of even a four-step pathway. This is in stark contrast to the fact that there are millions of substantive differences among the set of all mammals, which geologic time and population genetics studies indicate represent only a very small fraction of the number of organisms observed in the lab, suggesting that short pathways should emerge after just ten or so generations. Those who say severe environment change drives evolutionary change
  16. I am sorry you and others consistently find my posts difficult to interpret; I try to be clear and precise, though I am aware I don't always succeed. In this portion of the thread, I make the point that known physical-only processes do not reduce information entropy, inclusive of inputs and outputs. I am not making any statement, positive or negative, about spontaneous generation of unorganized, very high-entropy information. Noise is an example of high disorder, and I am not suggesting that physical systems with access to high-entropy inputs can't output net information configurations that are equal or higher in entropy. I also note that a mind does, however, seem capable of generating low-entropy functional information without an apparent external source of low-entropy information. If swansont believes he can offer examples of physical-only systems that output high-entropy noise that some define and label as a particular kind of information, I am not interested in disputing that point, as my argument does not depend on it or its negation. If he argues that physical-only systems take in high-entropy information and output lower net entropy information, I am interested in an example, because I have not heard of one.
  17. No, I did not describe CO2 and atmospheric mixtures of CO2 as black bodies. I described CO2 in the atmosphere (distinct from individual CO2 molecules) as capturing radiation of particular frequencies and then re-emitting particular frequencies of radiation, whereby the heat flux is governed by formulas involving temperature to the fourth power. Here is a link that confirms the behavior for black bodies, grey bodies, and also for pure and mixed molecular gases. You are incorrect to suggest atmospheric layers containing various concentrations of GHGs do not behave as I described. As I previously clarified, layers of atmosphere are not black-body radiators. But the behavior is as I described, and your argument is inconclusive, just as I previously claimed, not necessarily because what you describe is inaccurate but because it is too simplistic. The earth's energy budget and energy balance are not nearly as simple as you would have us believe. The system is not understood to the level that we can even balance inputs and outputs to generate an energy budget. Since we can't do an energy balance, we can't treat the system as a black box and reach meaningful conclusions. We do not know or understand all of the significant follow-on, feed-forward, and feedback effects. It doesn't, because it employs confirmation bias by myopically ignoring the reality that we don't understand the full system and cannot perform a proper energy balance. Tipping points in physics and climate are not described the way you have just done. They are points beyond which an irreversible change occurs. Since historical climate proxies indicate that ice ages are reversible, they do not represent tipping points. Sorry, you are in error; ice ages do not represent tipping points. History indicates global climate can change by 15-20 or so degrees, and CO2 concentrations can swing from near zero to over 2000 ppm, without encountering any tipping point. Then you are speculating.
You don't know what magnitude of effect, or whether any effect, would occur. You are sounding the alarm without facts. You don't even know if there is a problem. This is alarmism. No, we can't say this. What we can say is that changes in GHG concentrations could have an impact on net energy retention and thus on the surface climate, but we don't know what impact they will have, or its magnitude, because we don't understand the sum of all the factors that influence the earth's energy budget. We are not able to perform an accurate or meaningful energy balance at this time, so we don't know and cannot quantify what contributing factors might be in play to override, and therefore erase, or magnify, and therefore add to, the modest notional impact you describe (but can't quantify) due to increasing CO2. We have observed historical changes in the global climate, driven by natural events, that have resulted in temperatures about 6 C warmer and 12 C cooler than the present. Since the industrial revolution in the early 1800s, global temperatures have risen about 0.8 C, and thus far natural influences unrelated to GHGs have been identified that account for between 75 and 100% of this temperature rise, leaving at most between 0 and 0.2 C unaccounted for. This is hardly the problem some alarmists attempt to make it out to be.
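The fourth-power temperature dependence referenced above is the Stefan-Boltzmann law. A minimal sketch, assuming a gray-body emissivity (the emissivity value and temperatures are illustrative, not measured atmospheric figures):

```python
# Sketch: radiative flux rising with the fourth power of temperature
# (Stefan-Boltzmann law). Emissivity and temperatures are assumed values.

SIGMA = 5.670374419e-8   # W/(m^2*K^4), Stefan-Boltzmann constant

emissivity = 0.9         # assumed gray-body emissivity (a black body has 1.0)

def radiant_flux(T):
    """Emitted flux in W/m^2 for a gray body at absolute temperature T."""
    return emissivity * SIGMA * T ** 4

# Because of the T**4 dependence, a small temperature change produces a
# disproportionately large change in emitted flux:
print(radiant_flux(288.0), radiant_flux(289.0))
```

This is only the emission side of the ledger; the post's point is that the full budget involves many more terms than this single formula.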
  18. Once again I repeat: my argument does not rely on conservation of information. I have not argued that conservation of information is a true concept, nor have I argued it is false. Probability theory and information-entropy arguments do not require conservation of information. Functional information can convey meaning, but I agree it is not meaning unto itself. They are not the same, and I don't intend to indicate they are the same. Furthermore, a language lost to you is not necessarily devoid of meaning simply because you or I do not understand it. Anyway, conveyance of meaning is just one example of functional information. There are several other connotations. I repeat: information content in the general form is not measured by the number of bits required to produce the minimal compression algorithm that will regenerate the data set. Yes, this is the compression algorithm required to produce the Mandelbrot set. The degree of compression is the difference in bits between the set and the algorithm. No, it is not what I am saying. Information content is measured, in the general case, by the degree to which alternatives are reduced, not by the number of bits required to generate a compression algorithm or the probability of generating the algorithm by a random process. These are two different constructs. One is the measure of information contained in the algorithm or data set, and the other is the basis for the measure of compressibility/complexity of the data set (the bits of the algorithm compared to the bits of the output). A dog by any other name is still a dog. Your examples are indeed forms of compression algorithms. Often the most successful compression methods are to produce an algorithm that, when executed, approximately or precisely reproduces the data set. These are compression algorithms. No, it does not. You are stuck in a do-loop that seems to work only with compression algorithms. Interesting speculation; how can you demonstrate this is scientifically valid?
How do you substantiate this? How do you test it? More interesting speculation. What natural process that does not involve the participation of a mind can you point to? The instruction set is written by a mind (I would be interested to see a random process generate the instruction set, though given sufficient resources it may well be possible), and the machine the instruction set runs on must be designed and constructed by something that also involved a mind. Do you have a natural example? Nonsense. The practical output of the algorithm when executed on a physical system is quite finite. 1. Reproducing the same pattern, or multiples of a pattern, over and over does not generate new information. 2. The information represented by the output of an instruction set is created once, when the instruction set and the machine used to process that set are created. Execution does not generate any new information, no matter how many loops through the instructions. 3. The fact that a mind seems capable of quickly generating significant quantities of new functional information does seem to be unique and stands in stark contrast to what natural systems alone can accomplish, as entropy seems to represent a constraint except in the case of functional information output by a mind.
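The distinction I keep drawing between information content (a probability measure) and compressibility (a property of a compression algorithm) can be sketched in a few lines of Python. This is only an illustrative sketch of my own: the data sets are arbitrary choices, `zlib` stands in as a generic compressor, and per-symbol Shannon entropy stands in for the probability-based measure.

```python
import math
import random
import zlib
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Per-symbol Shannon entropy in bits: H = -sum(p_i * log2(p_i))."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

random.seed(0)
structured = b"0123456789" * 1000                           # highly ordered, very compressible
noise = bytes(random.randrange(256) for _ in range(10000))  # essentially incompressible

# Compressed size (an algorithmic measure of redundancy) and per-symbol
# entropy (a measure over the symbol probabilities) are different constructs,
# computed from different things.
print(len(zlib.compress(structured)), shannon_entropy(structured))
print(len(zlib.compress(noise)), shannon_entropy(noise))
```

The repetitive string compresses to a tiny fraction of its size while the noise barely compresses at all, even though both entropy values are simply functions of the symbol frequencies.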
19. It's not anywhere near this simple. Greenhouse gases do not scatter radiation; they absorb and re-emit particular frequencies of radiation in proportion to temperature to the fourth power. Because radiation is a function of temperature, the temperature profile of the atmosphere above the earth where the GHGs reside will influence the earth's surface temperature and vice versa, and this interaction will not be simple to deconstruct, but it will generally cause surface temperatures to rise to overcome the absorbed and back-emitted energy. When the concentration of these GHGs changes, this interaction becomes much more difficult to describe accurately, but one can at the very least predict that changing concentrations could change the temperature profile of the atmosphere. If the change in temperature profile alters other factors that change the absorption and reflection of incoming solar energy (cloud cover is one possible feedback effect) or induces other feedback effects, the situation turns into one that is not understood. The reality is that the actual effects are not understood at the present time. Not all complex systems have this property. Some are inherently stable and self-damping. Historical proxies indicate the climate is such a system, inherently stable and self-damping within a broad range of temperatures as much as 12 C colder and 6 C warmer than today, irrespective of significant changes in CO2 concentration. In light of the historical record, the null hypothesis should be that the earth's climate is stable with respect to CO2. Perhaps I am wrong, but I'm not sure we understand enough about ice ages to know that they represent tipping points. How can you substantiate this? Can you identify the tipping point and the causal factor? What caused the system to tip back? What volume of methane is generated by permafrost areas, and at what rates will it be released as a function of global temperature anomaly?
What evidence do you have that these rates and volumes will change methane concentration, and what effect will that have on surface temperature? Is this demonstrated, or speculation?
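The fourth-power dependence mentioned above is the Stefan–Boltzmann law, j = σT⁴. A minimal sketch of the sensitivity it implies (the 288 K figure is the commonly cited rough mean surface temperature; this is an idealized blackbody calculation, not a climate model):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_flux(temp_kelvin: float) -> float:
    """Radiant flux emitted by an ideal blackbody: j = sigma * T**4."""
    return SIGMA * temp_kelvin ** 4

# Near 288 K, a 1 K rise increases emitted flux by roughly 1.4 percent,
# which is the fourth-power sensitivity (about 4 * dT / T).
flux_288 = blackbody_flux(288.0)  # roughly 390 W/m^2
flux_289 = blackbody_flux(289.0)
print(flux_288, flux_289 / flux_288 - 1)
```

The point of the sketch is only the functional form: because emission scales as T⁴, small temperature changes produce proportionally larger flux changes, which is why the coupled surface–atmosphere interaction is hard to deconstruct by inspection.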
20. You might have a look through the old homework help; this question with different constants was answered here a short while ago. If you can't find the thread, suggest how you might solve it and someone will help you from there.
21. I don't make a claim that there is such a law; to suggest I have is a straw man. My argument relies on probability and entropy only. The link provides context for entropy, and for measurement of information content as a function of probability as well. Shannon was most interested in changes in information content as a result of transformation, transposition and transmission. His purpose did not require him to address initialization of information or determination of absolute values of information entropy; his primary concern was the change in information content and entropy during these operations. The theory did require him to address the impact of deterministic processes on information, and that is covered in the wiki article, but it did not require him to address differences in the distribution of information bits. In this limited treatment, any configurations of bits of a string that have equal probability, given that a random process generated the pattern, are equivalent. However, while this assumption may be valid for his purpose, it seems not to be valid in the general case. Consider the text of an instruction manual as compared to random letters and punctuation. Clearly the instruction manual contains far more information by virtue of the astronomical number of possible configurations eliminated because they are meaningless or convey far different meaning than the text contained in the manual. The analogy to thermodynamic entropy is the initial configuration of energy states in a system of particles. Consider a machine that randomly distributes energy states among the macro particles in this system; the derived configuration is then described by a stream of information bits. By the theory Shannon promoted, all the possible configurations of bits represent equivalent amounts of information and entropy; however, the corresponding thermodynamic system configurations do have different amounts of energy or entropy.
This difference in initializing the information is not an issue for Shannon's purpose, since he was concerned with relative differences after initialization as his information system undergoes changes. Random noise as input was as useful for his theory as encoded conversation, despite the dramatic difference between the two data sets, but it represents incompleteness in the general case, when generation of the information is of interest, as in the instruction manual example. This general case must consider that information strings have a degree of functional order unto themselves, that this degree of order is represented by a probability that differentiates between functional order and random distributions, and that information theory must measure the amount of functional information. When I offered functional information as an example of something the human mind is apparently capable of generating, despite the apparent inability of physical systems, and the laws by which these physical systems arise, to do the same, I was making a distinction between this functional or specific information (including the words I typed here) and the random noise you have attempted to describe as being the same. Perhaps one could show that imported energy is transformed by the mind into information, with thermodynamic entropy substituted for information entropy, but even this is insufficient until one can show that the mind makes use of known physical processes and laws alone to perform this transformation; and if that is the case, then one could use these same processes to construct a machine that performs this same function. Think through what I said again. Compression is just one aspect of information theory, and measuring the information content of an uncompressed data set by the resulting bit size of the compression algorithm is not the general method of measuring information content. The general method is as I previously described and is discussed in the wiki article as well.
When measured as described, it is consistent with your example; it does not contradict it. The first group of numbers belongs to a set with far fewer equivalent permutations than the second, and thus has lower probability, and yet, as you described, is more informative than the random noise. Let me remind you that you also said you have never heard of "information" entropy. I have been developing, designing and implementing computer control systems and data acquisition and analysis systems for over 25 years, and from your initial comments it seems I may know a thing or two more about this subject. No, sorry, it is not. How can you be sure that entropy only applies to compression algorithms and uncertainty measurements? You were unaware of information entropy just two posts ago. Information entropy has been applied to cosmology as well; google for it and you will find a number of fascinating articles. I think you will find those applications mirror my use of information entropy. Two posts ago you admitted ignorance of "information" entropy, but now you are conversant? Does the human mind receive low entropy information from the sun that allows Shakespeare to produce his manuscripts and Mozart to compose his scores? I realize the Earth is open with respect to thermal energy. Is it open with respect to functional information? If so, describe in precise terms what form this low entropy information from the sun takes and how it is tapped to produce the words I typed.
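The point about equal-probability configurations being treated as equivalent can be seen directly: Shannon's per-symbol entropy depends only on symbol frequencies, not on their arrangement, so a functionally ordered string and a scrambled one with the same character counts score identically. A small sketch of my own (the example strings are arbitrary illustrative choices):

```python
import math
import random
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Per-symbol Shannon entropy in bits: H = -sum(p_i * log2(p_i))."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

ordered = "to be or not to be that is the question"
chars = list(ordered)
random.seed(1)
random.shuffle(chars)
scrambled = "".join(chars)

# Identical character counts give identical entropy, even though only
# one of the two strings is functional English.
print(shannon_entropy(ordered), shannon_entropy(scrambled))
```

Both values come out the same, because the measure is blind to ordering; that blindness is exactly the gap between Shannon's treatment and any attempted measure of functional information.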
22. YodaPs, you asked me to improve on the skeptic's argument and I responded with this post. Do you ever intend to respond?
  23. I appreciate your opening statements. I suspect I am equally frustrated that even you seem to miss the clear statements I have made about what I believe. I don't believe the observed evolutionary processes sufficiently explain biological diversity. Instead I suspect other more capable processes are involved. I don't know what they are but I believe we should be looking for them. I make reference to design because design is an example of a more capable process that is in operation today that may or may not be this elusive process, however if evolutionary biologists don't get busy looking for a more capable process, designers may soon develop a new life form from scratch and when they do, they may hand design advocates a near insurmountable lead in explaining life and diversity of life.
24. There are several definitions of information; the one you chose is not the same sense in which I am speaking. Information content in terms of a probability distribution refers to the degree to which alternatives are eliminated, or alternatively the degree to which alternative results remain possible. Large amounts of information correspond to low probability states. Here is a wiki article describing "Information Entropy" and the "Measure of Information"; I urge you to look at both sections. Sorry, no, it is your definition and concept of information that is the outlier. Please review the article linked. Please review the wiki page and help me understand where I am misapplying this concept. I may come back to other points in your post once we are on the same page. I don't mind that you incorrectly label me and imply I am ignorant, but you should first at least google for and perhaps consider reading the wiki page to be sure you know what you are talking about. I was not citing Dembski in this thread. Entropy does apply to information theory and was applied some time ago. Have a look at the wiki page for more "information" (specifics, that is, reduction in alternatives). I am very aware that entropy can be reduced locally, which is why I specifically identified the closed system of interest and included the caveat "without an external source of information". Entropy is derived from probability theory, so it is difficult to understand how referring to probability is a dodge.
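The claim that large amounts of information correspond to low probability states is the standard self-information (surprisal) measure, I(x) = -log2 p(x). A quick sketch with hand-picked probabilities:

```python
import math

def self_information(p: float) -> float:
    """Self-information (surprisal) in bits: I = -log2(p)."""
    return -math.log2(p)

# Lower-probability outcomes carry more bits of information,
# i.e. they eliminate more alternatives.
print(self_information(0.5))       # 1.0 bit  (a fair coin flip)
print(self_information(1 / 8))     # 3.0 bits
print(self_information(1 / 1024))  # 10.0 bits
```

Averaging this quantity over a distribution is exactly what yields the Shannon entropy discussed in the wiki article.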