Everything posted by cypress

  1. This discussion is about what processes are capable of generating new coherent information, since it is a necessary component of the observed diversity. I do not dispute that diversification happened, but I do note that random mutation and selection does not generate the requisite precursor components in a timeframe sufficient to account for the diversity recorded in the fossil record. Yes, I am aware of genetic algorithms. I have studied them in detail, and I note that in every one I have looked at, the designers have imported active information sufficient to allow the system to function. In other words, they work because the designer designed them to work. If you doubt this, let's deconstruct one and I will show you where the designer has imported information. Should we conclude from this that evolution was designed? Unfortunately your evidence points to design by virtue of smuggled-in design, as I explained above. It can only be certain if your prior commitment has eliminated all other possibilities, because unfortunately for you, your strong evidence was designed. Your other evidence of retroviruses and supposed suboptimal design is off topic and unconvincing, because both could be exactly as you imply and yet neither offers any method for natural processes to generate coherent information. Edit: It is also somewhat useless to presuppose what might or might not be a "good" design, as in the case of the laryngeal nerve. If the nerve is suboptimal as you suppose, and if natural selection selects advantageous systems, then you would not expect to see it if selection had the capability the narrative supposes it does. One can play this game endlessly, so I suggest we focus on the arguments at hand rather than raising red herrings. I suppose I bolded the wrong word. I took exception to characterizing it as "observing new," since metabolization of citrate is not new.
It has been observed in the past and is even metabolized by the bacteria under other environmental circumstances, so it is more properly characterized as modification of existing function. It is an adaptation. Furthermore, that particular example does not fall outside the boundaries of what should be expected of a blind search, given the very modest increase in coherent information and thus the small decrease in entropy, more than covered by the resources brought to bear on the problem. This thread is primarily about evidence for a creator of the universe and life, and the evidence is generation of coherent information. Lenski's work with metabolism of citric acid in an oxygenated environment is one such example of the limits of mutation and selection. Axe's work outlining the rarity of functional protein systems is another that provides evidence of severe limitations on stepwise searches through protein sequence space. There are many others, but I think they are more appropriate for another thread. Start one on experimental evidence suggesting limits on random mutation and selection and I will be happy to contribute.
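The "imported active information" claim above can be made concrete with a toy genetic algorithm in the style of Dawkins' weasel program. This is a minimal illustrative sketch, not any specific algorithm discussed in the thread; the target string, alphabet, population size, and mutation rate are all assumptions chosen for the demonstration. The point it illustrates is that the fitness function compares candidates against a target the programmer supplied, so the search is guided by designer-provided information:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"  # supplied by the programmer, not evolved
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # The fitness function compares against TARGET, so information
    # about the solution is built in by the algorithm's designer.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Each character has a small chance of being replaced at random.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def evolve(pop_size=100, generations=1000):
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    for gen in range(generations):
        children = [mutate(parent) for _ in range(pop_size)]
        parent = max(children, key=fitness)  # selection step
        if parent == TARGET:
            return gen
    return generations

random.seed(0)
print(evolve())  # typically converges in well under 1000 generations
```

Whether this observation generalizes to all evolutionary algorithms is, of course, the substance of the debate in the thread.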
  2. Organisms indeed have changed and do change over time, and they have diversified over time. I am not certain that we have observed new behaviors and abilities. Experimental results indicate that random mutation and natural selection do not explain how diversification of life occurred.
  3. I see no reason to agree to disagree. We are not at a stalemate where neither argument provides a more parsimonious explanation. Your argument is based on a logical fallacy held together by carefully redefining contingency and choice and by improperly assigning the contingency to the wrong causal agent. Perhaps I did not describe your error accurately enough. In your final post you repeat the error when you claim a chess program makes choices when in reality its moves are fixed: they are predetermined by the game engine, the software designer, and your past moves. The computer software functions entirely according to deterministic laws, whereas the contingent portion of the computer's moves (if that is the way you choose to describe it) is a product of the software designer and the human competitor. So if you want to continue to assign the choices that the software designer and the human player made to the computer, as you have done, then you will need to regress their choices as well. Either way you are in need of a new example.
  4. I have read the Talk Origins articles and I find they fall short of addressing the actual issue square on. They are fine for what they do explain; they just don't explain the substance of the issue. It seems to be a common approach for that site. I also noted the topic change. Previously I attempted to stay with pure random processes, but it is clear that most here accept the limits of random chance, so if you wish to raise it again that's fine. Edit: If you take a close look at Lenski's E. coli you will find that it involves no more than three mutations, one of which is not advantageous. It does fit in the class of events that random chance can accomplish given the resources available. It does not violate the rules of information entropy. Since it does not fit the request I am making, I properly dismiss it for that reason. It is interesting in that it actually confirms my claims regarding what natural systems are and are not capable of accomplishing. The examples are incomplete and contain large gaps in the steps. It is like coming to a wide river and noting three exposed rocks near the center, but then claiming that since there are three rocks one can cross using only the rocks without getting wet. The examples are nothing more than the usual observation that similarities exist in biological components. They do not explain how the differences occurred, and it is the how that we are discussing. I do not dispute that changes happened, but I note that the how remains unaccounted for. Random error and selection seems to lack the ability to generate the changes detailed in the articles. I read it a while ago. Shannon information theory deals with preservation/degradation of data sequences through communication networks. It is interesting, but it is not relevant to our discussion. The discussion here is the source of meaningful, functional, coherent information used to generate and control functional systems. A very different situation.
In the article the authors address this difference and make clear that they avoid this topic intentionally. Here is what they say. The paper once again is interesting for what it does address; it just does not address the topic here. Actually there are several good reasons to doubt the neo-Darwinian narrative. The fact that mind is capable of generating coherent, encoded, digital information, that all biological systems require coherent, digitally encoded information to generate and manage their processes, and that we do not observe any natural processes currently in operation generating this kind of information is a strong indicator that there was a mind behind the cause of life. The same is true about systems that are fine tuned. An earlier post claimed that life is fine tuned, and thus we are discussing whether or not it has been established that biological systems were derived from natural processes. I contend the answer is that we don't know, but we do know that thus far only mind is known to be capable of generating the fundamental aspects of 1) fine tuned systems and 2) biological systems. Now there are several other indicators of mind behind both this universe and life and we may yet come to those, but currently we are working these issues. It is only useful if it is functional. Biological systems must be functional or it is a waste of critical resources expressing junk. Ten years ago we did not know how many mutations generate functional components. The theory predicted that a reasonable percentage of mutations should be functional and at least some should be advantageous. However, research has continued on this front and has answered this question [1]. It turns out that in a protein of 150 amino acids (small for most proteins) the ratio of functional to non-functional sequences is 1 in 10^74. While any sequence may be expressed, it is far from true that any sequence has the potential to be biologically active.
It is an understatement to say that most aren't, and it makes your argument vacuous. The ratio continues to drop as sequence size increases, as Axe demonstrates in his work. [1] Douglas Axe, "Estimating the Prevalence of Protein Sequences Adopting Functional Enzyme Folds," Journal of Molecular Biology, 2004, plus several others from Axe in earlier issues of this journal and also in Proceedings of the National Academy of Sciences between 1996 and 2004. I find this argument completely ridiculous. Never mind the reality that biological systems quickly trim away junk to conserve energy; to equate junk with functional systems stretches credulity to the breaking point. I understand why Talk Origins would do this. After all, they are advocates of the neo-Darwinian evolutionary theory and present not only points that build the case but also do their best to obfuscate any argument against their presuppositions. They are hardly the fair-minded, unbiased site they purport to be. It's ok to be an advocate, but when they try to shift the substance of the argument as in this example, it comes across as slimy. Let's stay focused on encoded information that drives functional systems please, since coherent information is a necessary requirement while noise (junk) is not only not required, it is detrimental.
  5. It's not clear we are "going" anywhere or not. Your words are easy to write, but without clear demonstration that this is true they are empty. I have looked very hard for an example of a known (documented as opposed to presumed) contiguous evolutionary pathway of greater than three stepwise mutations where each step has selectable advantage. Can you offer an example to demonstrate that your narrative is correct? Yes, and I offer the same challenge to you as I did to Cap'n. Yes, I specified the kind of information (coherent, useful) earlier in the thread. I don't see anything in the post worthy of a long reply, as I have already clarified the kind of information we are discussing. Information entropy is an area of acute attention these days. It is now generally agreed that the search for the explanation of life from non-life is essentially a search to explain the information content of biological systems. As an additional note, we've quickly moved past fine tuning and the inability to demonstrate that anything other than mind can and does fine tune. We discussed many reasons why random processes don't cause fine tuned systems, and moved into the inability of random processes to generate coherent encoded information. Random mutation was suggested but then rejected in favor of mutation (a random process) along with selection (a deterministic process). I will assume that most here accept that physical constants are fine tuned, though some dispute how they came to be that way. I will assume that most here agree random processes alone don't generate fine tuned systems and don't generate coherent information. If these assumptions are correct we can examine deterministic mechanisms. I have asked Cap'n and Skeptic for a known, observed case of natural selection driving an evolutionary pathway greater than three contiguous steps (no steps skipped, each documented), because while I accept the narrative, I question whether it actually occurred the way the narrative describes.
I suspect there are other processes involved.
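The distinction being drawn above, between a pathway where every step is individually selectable and a blind search for the whole target at once, can be sketched with a toy simulation. The 16-site binary model below is an illustrative assumption, not a biological claim:

```python
import random

L = 16  # number of binary "sites" that must all flip to 1

def cumulative_steps(rng):
    """Trials needed when each single-site change is kept only if it
    does not reduce the number of correct sites (stepwise selection)."""
    state = [0] * L
    steps = 0
    while sum(state) < L:
        i = rng.randrange(L)
        trial = state[:]
        trial[i] ^= 1  # flip one site
        steps += 1
        if sum(trial) >= sum(state):
            state = trial
    return steps

def blind_steps(rng, cap=10**6):
    """Trials needed when the whole string is resampled blindly each time."""
    for steps in range(1, cap + 1):
        if all(rng.randrange(2) for _ in range(L)):
            return steps
    return cap

rng = random.Random(0)
print(cumulative_steps(rng))  # typically under a few hundred trials
print(blind_steps(rng))       # on the order of 2**16 = 65,536 trials
```

Stepwise selection finds the target in roughly L*ln(L) trials, while blind sampling needs on the order of 2^L; whether real mutational pathways offer a selectable step at every point is exactly the question the post raises.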
  6. To say "a choice is made" in your sense is to use "choice" to mean "fixed selection, based on deterministic factors where no other outcome is possible." However, in the context of this thread, when we say there is a choice to be made we mean that the outcome is not fixed until after the contingency is executed. When only one outcome is possible, it does not involve choice, because no choice can be made when only one possibility exists. In addition, I have provided a positive definition. Choice is deliberate (planned/intentional) contingency. Design is an excellent example of planned contingency. In contrast, random events are unplanned. Nonsense again. I note that a positive definition was offered from near the beginning of this discussion and has been repeated. What is logically incoherent is for someone to redefine terms as an attempt to make points, as you have done. Deterministic causation is clear and so is random causation, so you have no need to redefine choice to fit these causes except to serve your ill-fated argument. I have asked you numerous times to demonstrate that planned contingency is reducible to determinism and chance, but you are not able to do so. You cannot because it is not reducible, and that is the trouble you are having. The balance of your argument seems nothing more than an attempt to shift the burden away from you. It is self-evident that humans choose and execute planned contingent outcomes. Even you know this. If this were not factually accurate, then we would have to believe that computers and airplanes and every other system humans have ever planned, developed, constructed, and employed are reducible to, and an inevitable outcome of, physical laws of nature and dumb luck. But this is not the only support. In addition, uniform experience and repeated observation inform us that intelligent agents with the ability to make planned choices can and do design information-rich systems (computers for example).
We also do not observe natural physical laws and chance conspiring to do anything of the sort. In contrast, I remind you that you have offered nothing to support your position, and I will state again that you have not because you cannot.
  7. Information theory measures information entropy based on the degree of order, and thus on the number of possibilities ruled out by the information provided. Thus the amount of information generated by random processes, such as a blind search, would be based on the resources available to it, and that in turn would indicate what target sizes are within reach. Given the amount of information required, even billions of billions of years do not come close to the timescales one would need for random processes. Information generation requires a much more efficient process.
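The measure gestured at above can be stated concretely: under Shannon's definition, a result that narrows a space of N equally likely possibilities down to M survivors carries log2(N/M) bits. A minimal sketch; the function name is ours:

```python
import math

def bits(survivors, total):
    """Information, in bits, gained by narrowing `total` equally
    likely possibilities down to `survivors`."""
    return math.log2(total / survivors)

# Identifying one specific outcome among 2**20 possibilities:
print(bits(1, 2**20))        # 20.0 bits
# Halving the space yields exactly one bit:
print(bits(2**19, 2**20))    # 1.0 bit
```

On this accounting, a blind search with R trials can credibly find targets of only up to about log2(R) bits, which is the resource-counting argument the post appeals to.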
  8. You are conflating choice, where more than one outcome is possible, with deterministic selection, where the outcome is fixed. Computers make fixed selections. Again you are redefining choice to favor your presuppositions. It is clear that you want to reduce free will to an illusion, a trick of the mind whereby we think we are making choices but in reality we are making fixed selections based on past conditions. It is a metaphysical belief you hold with no objective support. This was the original claim I made, and after several replies it is now clear that I was correct in that claim.
  9. It is somewhat arbitrary. Still a small number by our standards, but also something completely out of reach for random processes. Generally I would make the distinction at active use of the existing information, not simply the existence of it. I did mean to exclude processes that import information directly as part of the process of generating new coherent information. Most evolutionary algorithms sneak in new active information by virtue of the design and execution of the code. The challenge for biological systems is to explain how the information got there in the first place, but even existing biological systems with pre-existing information don't seem to generate new information beyond the rate of a blind search. Perhaps you know of a process that does generate new information efficiently.
  10. Possibilities or choices by intentional contingent actions are vital, as you say, to a fine tuning argument; however, while these possibilities may be in play, one cannot assume the same is true for random natural causes, and this is the distinction. I am questioning the presumption that there exists a natural mechanism capable of generating the appearance of fine tuning. I do so because thus far, in the thousands of years of documented history, we have not yet seen any observable demonstration where known natural processes generate a fine tuned system. I have provided references and writings that I can only assume you have no intention of reviewing. The subject is complex and drawn out, and does not lend itself to this format. I urge the interested person to review Penrose's and Hoyle's writings. Your description may work very well for random processes, but we don't know if random processes are capable of generating universes or varying the constants. The fine tuning argument notes that designers are often capable of setting parameters over a large range of values. I have no problem with the general analysis, but I note that we have no basis for speculating on the distribution between the Y's, and therefore I don't find the model very revealing or particularly useful. Fine tuning is a reality regardless of the process by which it occurred. It makes sense to discuss how and why fine tuning occurred, but the fact of fine tuning is so well established that I find it uninteresting to discuss whether it is fine tuned. I provided two good examples in the initial entropy and gravitational flatness of the universe and saw nobody objecting to them. Your example of snowflakes is hopelessly flawed: snowflakes form deterministically from physical processes. They are not improbable (since they are destined to form), and they form in an environment that guarantees they will form, not an environment hostile to formation. I find that when people run out of substance they resort to personal attacks.
This is a special case of negation because it states that P is necessary and sufficient to produce Q. In the general case it is not true that all conditions are known. I don't dispute either for what they do explain, but I question both for what they don't explain. Surely you are not implying I think they are flat out wrong. It is an elegant narrative, but evolution does not "explain" fine tuning because the theory even now lacks causal adequacy. There is as yet no identified process that is known and observed to generate the attributes that the narrative assigns to evolution. The evidence is based on observed similarities, but these similarities do not explain how the similarities occurred and, more importantly, how the differences came to be. If you have an issue with that thread you should raise it there. Individual binding sites and controls can in theory be generated by random error and selection, but the time required for this process far exceeds (by a factor of ten) the amount of time available by geological measures. When you consider that thousands of these small changes are required to generate novel form and function, the argument falls apart completely. Nice try. Sorry, no it is not. You are changing the definition. A snowflake split in half is now two snowflakes and performs the same primary function as before. Take this example to the other thread if you wish to discuss IC more. This feature also does not meet the definition of IC. A copy machine does not generate new information, it only duplicates it. No net increase. Mutations generate very small amounts of new information occasionally, but only within the bounds of the probabilistic resources available. There exist no examples of large amounts (>500 bits) of coherent information generated by random processes. Perhaps you can show where I am wrong. Citation for this please.
I will state now that you are wrong and you cannot offer any example of any system that does not use imported information or design to demonstrate your claim.
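To put the >500-bit threshold mentioned above into perspective, here is a back-of-envelope sketch. The universe-scale figures are common rough estimates introduced for illustration, not numbers from the post:

```python
import math

BITS = 500
target_space = 2 ** BITS  # one target among 2**500 equally likely sequences

# A blind search expects on the order of target_space trials:
print(f"{target_space:.2e} expected trials")  # about 3.27e+150

# Rough upper bound on trials ever available (assumed figures):
particles = 1e80   # particles in the observable universe (rough estimate)
seconds = 4.35e17  # seconds since the Big Bang (rough estimate)
max_trials = particles * seconds
print(target_space > max_trials)  # True: the target space dwarfs the resources
```

Whether biological search is in fact blind in this sense is, again, the disputed point.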
  11. If there is more than one outcome possible then it is not deterministic, by definition. Deterministic outcomes are "determined"; they are fixed with no other outcome possible. Sometimes, when causal modes are not understood, it may seem as if the outcome is not fixed, but that is due to lack of data. The chess program was designed by an intelligent agent who inserted information into the program. It is a product of past design. This is why we must be able to trace out past causes, so we can understand what modes are actually in play. As you indicate, the chess program makes fixed moves based on evaluation of the information and the rules provided by the designer, and since it is fixed, there are no choices being made by the program. The computer and program are not capable of making any choices at all; rather, they are constrained to the fixed moves determined by the rules, the information, and the choices of the human competitor. No, that is incorrect. Deterministic outcomes do not have multiple possible choices available. Only one outcome is possible from deterministic causes; there are no choices and no other possibilities. The difference is that you seem to be mixing terms. Introspection, on the other hand, provides the means and the data to allow us to know that intelligent agents with free will are capable of making, and do make, purposed choices. Introspection informs us that there are at least three modes of causes, namely necessity, chance, and design.
  12. Rock salt is needed to maintain porosity so the air can flow around the salt. You could use a lift and drop system to increase exposure to the air or a fan blowing over the salt on a tarp or very large shallow pan.
  13. No, that is incorrect. Think about the question they are asking: if you pay 590 per month for 134 months, what is the total paid?
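The arithmetic behind the question is a single multiplication. The 590 and 134 figures come from the post; the loan framing and currency units are assumptions:

```python
monthly_payment = 590   # payment per month (units assumed)
months = 134            # number of payments

total_paid = monthly_payment * months
print(total_paid)  # 79060
```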
  14. I provided references to two extensive writings on the subject from Penrose and also from Hawking and Page. I also offer Fred Hoyle and his writings as well. In contrast, you have no way to substantiate your presumption that there are any "possible universes," beyond the one we exist in, produced through random processes, and no way to demonstrate your claim of near certainty regarding where ours might fall among these imagined random universes. Imagining the existence of alternate universes is not a demonstration that there is any possibility that other imagined configurations might be capable of harboring life. Imagination is nice, but it does not demonstrate any real possibility. Your speculations cannot substitute for evidence. You have suggested that there are an infinite number of configurations of universes possible by random processes. If they are not actualized then the suggestion is without substance. False; most of our world remains unexplained. Predictions are not observations. At this time there is no known process other than design capable of generating systems that meet the strict definition of irreducible complexity, just as there is no known process other than design capable of fine tuning systems. Both of these systems require a substantial amount of coherent information in order to derive them in the first place, and likewise, only intelligence is known to generate information. I look forward to a demonstration that other processes are able to generate these systems.
  15. You are simply denying what is obvious. Your analogy is false because intentional choice is not deterministic, since choice requires that multiple outcomes are possible, and deterministic outcomes are fixed regardless of whether the outcome is known. Contingency means more than one outcome is possible prior to the causal event. We are back to the original point: you are indeed claiming free will is an illusion. Just because a causal agent has a rationale for a choice does not mean that there was no choice (more than one possible outcome) to be made and that in reality the outcome was fixed. Most often, each possible choice has competing rationales, and the intelligent agent guides the outcome based on the favored choice. Once the choice is made, many of the follow-on events may be deterministic, but I know by introspection that the outcomes of my choices are not fixed prior to making the choice. Since they are not fixed, and more than one outcome is possible prior to making the choice, it is simply false to claim the outcome is deterministic. You have failed to show that all contingent events are random. Uniform experience and introspection demonstrate they are not. The burden is on you to show that experience and introspection are wrong on this point.
  16. I provided an example: the observed distances within galaxies, for instance, do not seem to be expanding, as should be the case under an inflationary expansion model. I find the other issues more significant.
  17. Occam's razor trims away these superfluous additional constants you wish to add. I would propose you stick to sound scientific principles in your arguments so as to maintain some credibility. You continue to argue that the physical constants are not fine tuned, and I have avoided a detailed demonstration mostly because, in the world of cosmology, fine tuning is a foregone conclusion. Instead the concentration is on answering why and how the universe was fine tuned. Let's start with an easy example to limit the controversy: the fact that the observable universe is homogeneous and that the gravitational field is very nearly flat. The initial low entropy state of the universe (according to Roger Penrose in "The Road to Reality") is fine tuned to 1 part in 10^(10^123). This value is necessary to account for the two conditions I mentioned, and these two conditions are in turn required to have allowed for a universe with a lifespan long enough to allow for three states of matter (solid, liquid, gas) and also to provide the ability for matter to form stars and planets. Can we all agree these are basic conditions required for life? Again I should advise you to use sound principles in your questions. The existence of our universe is causally adequate. It is self-evident that we exist. The existence of any universe other than ours is pure speculation. There is no scientific reason to postulate it, only metaphysical reasons. Furthermore, there is a huge epistemological cost in proposing infinite universes. Once we permit inflationary cosmology as a possible explanation for anything (as you are doing), it destroys scientific reasoning about everything. It does so because now you can explain the origin of all events, no matter how improbable, by reference to chance, because of the infinite probabilistic resources it purports to generate.
With your model in place, we must allow that an exquisitely designed machine could just as likely have been produced by quantum fluctuations as by a human. But it is worse: it implies that we should regard extremely improbable explanations as more likely than explanations we normally accept, because it treats the wildly improbable events behind those explanations as being as likely to occur as ordinary events. This is simply nonsensical.
  18. Ok good points. I'll add some detail in the future. Perhaps you might indicate where you would like some detail.
  19. If this analogy is to work then we must note that these two people have a major problem in that they don't know which way to jump to get out of the way. It may be best simply to stand there.
  20. Actually, the fact that short distances (for example, distances within our solar system and galaxy) do not seem to expand is a problem for inflationary cosmology. It is an inconsistency that advocates attempt to answer by citing gravity, but the inconsistency remains. This current expansion model, and these peculiar inconsistencies, arise from the assumptions that go into the model in the first place, which is meant to account for the observation that our universe seems to have been almost perfectly uniform very near the beginning. In addition, our universe is very nearly flat with respect to gravitational collapse and eternal expansion, due to the apparent near-perfect balance between actual and critical mass density. To explain the fine tuning of these two characteristics, modern-day cosmologists have proposed inflationary models that do not require the initial configuration of the universe in its singularity to be so precisely assembled. However, this model also requires causal systems, and one of them is the need for inflationary fields. Inflationary fields are proposed to explain the homogeneity and flatness problems, but we don't know if they exist, and there is reason to be suspicious because they don't seem to explain the features of the universe very well at all. This is because one must make elaborate and gratuitous assumptions about the initial configuration of the singularity as well. Roger Penrose, in his book "The Road to Reality: A Complete Guide to the Laws of the Universe," has discussed these issues and has noted that inflation alone does not solve these problems but relies on many additional assumptions. In addition, and here is the part that is relevant to this discussion, Stephen Hawking and Don Page have both noted that it is not currently possible to explain why inflation fields and gravitational fields should work together to produce the homogeneity of the background radiation and the flatness of our observable universe.
When the fields are linked there is nothing to guarantee that inflation will take place, as they point out in "How Probable is Inflation?" There is also a problem with causal adequacy. Inflationary cosmology relies on entirely unknown entities for its causal powers, and it does so to explain mysterious effects for metaphysical reasons. This model also has a major problem in explaining the origin of the information necessary to produce the hypothetical inflationary fields and the numerous fields to which they must be coupled in order to produce a universe such as ours. The mechanism required to shut off inflation at just the right time must itself have been fine tuned to between one part in 10^53 and one part in 10^123, and this is just one example. Inflation makes the already acute fine tuning problem with entropy exponentially worse, according to Penrose. Thus it is hardly any better to propose inflationary cosmology than to accept that the universe was precisely configured to begin with.
  21. In addition to the water concentration or density differential, there is also typically a pressure differential providing the driver for preferential flow of water out of the side where water is more concentrated (the dilute solution) and into the side where the solute is more concentrated.
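The concentration-driven pressure described above is often quantified with the van 't Hoff relation, Pi = i*M*R*T. A small sketch; the 0.1 M NaCl solution, full dissociation (i = 2), and 25 C temperature below are illustrative assumptions, not values from the post:

```python
R_L_ATM = 0.08206  # ideal gas constant in L*atm/(mol*K)

def osmotic_pressure_atm(i, molarity, temp_k):
    """van 't Hoff estimate of osmotic pressure in atm:
    Pi = i * M * R * T (dilute, ideal solution assumed)."""
    return i * molarity * R_L_ATM * temp_k

# Illustrative: 0.1 M NaCl, assumed fully dissociated (i = 2), at 298.15 K
print(round(osmotic_pressure_atm(2, 0.1, 298.15), 2))  # 4.89 atm
```

This pressure difference across the membrane is the driving term the post describes.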
  22. Of course it is necessary. It is part of the process of validating scientific theories, and in some cases it is more serious than simply that there are some things that have not been figured out. Whether Behe is correct or not about why diversification of life occurred, and regardless of how you care to describe his metaphysical beliefs or how you characterize his motivation, there is a serious issue with the present description of how diversification might have occurred. With irreducible complexity, Behe provides a simple illustration of the problem that modern evolutionary theory faces, namely that it seems to have gotten wrong at least some of the processes under which significant diversification occurred. A good scientific explanation of past events requires a causally adequate mechanism, which requires a process observable today that is known to have accomplished similar events. As Behe notes, the current processes do not explain how these molecular systems came to be. Homology and/or similarity does not explain how; and while Behe agrees that these similar systems provide evidence of the feedstock for the systems, he correctly points out that these similarities are silent about the process by which the hypothetical feedstock was employed. I agree that genetic error and natural selection is a good mechanism to generate adaptive systems, but it seems unable to derive the types of precursors required for novel form and function. Behe makes this case. There must be a different, more causally adequate process that has not been uncovered. ID proponents note that genetic engineers are getting very close to demonstrating that design is a causally adequate explanation, but I think there may well be a natural mechanism that is able to derive the kinds of systems Behe describes.
  23. I presume that in your scenario these two species are substantially different in form. That being the case, it is not possible, primarily because functional systems require multitudes of coordinated subsystems to perform their functions. The gene expression, regulation, and developmental controls must be specifically matched to the systems being constructed and operated. The cell infrastructure must also match, so when you throw two radically different systems together without redesigning the infrastructure and control systems, it will fail. The same issues would apply to supposedly dormant components, though in life there are very few such systems, despite what you might hear. Your novel is fiction, so have fun with it and don't worry about whether or not it is actually realistic.
  24. I agree. It provides an excellent opportunity to discuss many of the difficulties with the current models of both common ancestry and descent with modification as solutions to the diversity we observe in biology. The argument that homology, or similarity of proteins in current biological organisms, demonstrates that Behe is wrong about the nature of complex biomolecular systems is very incomplete, and it exposes a much bigger weakness in arguments that selection acting on genetic errors accounts for all of life based on homology of organisms and sequence similarity in genes. The question of how biological diversity occurred cannot be answered unless there is an identified process, or set of processes, known to have accomplished all of the modes of change involved. Behe agrees that there is good evidence for common descent, but he argues that the mechanisms or processes cited are not capable of generating many of the structures we observe at the molecular level. His argument is based on the molecular processes occurring within the cell. These processes were unknown back in 1920-1930, so it is silly to argue that his case was dismissed 80 years ago. What he is talking about are systems of multiple molecular components, with specific well-fitted shapes and multiple specific, coherent, properly placed binding sites, attached together to form complex molecular machines that perform cell-level functions. When one applies the strict definition of what Behe calls irreducible complexity, namely that if certain (but not just any) components are removed, the system can no longer perform at least one of its primary functions, then citing sequence similarity and homology rather than processes shows that the argument is far from over.
Because it is far from over, and because the current models do not provide a mechanism to accomplish these molecular-level changes, we can't say the mechanism for diversity is known. In fact, the mechanisms currently cited are being tested experimentally, and they do not generate the kinds of molecular-level alterations required to derive new tertiary structures, protein-protein binding sites, gene expression, regulatory and developmental controls, and a number of other low-level changes required to produce a novel functioning system of interacting protein components within the time period cited by the geologic sequence, whether or not one has a ready supply of components with similar primary amino acid sequences. The first argument is that different flagellum systems are observed with different components, and that therefore one can remove some components and the system will still function. Clearly that is true, but that is not Behe's argument. Functioning systems can and do contain parts that improve function or reliability but are not critical to primary function. An auto engine has many such parts; however, if you remove the crankshaft, you will no longer have an engine. The same is true of the flagellum. Some components are extra, but there exists a subsystem of components that are critical. Focusing on extra components and noting that extra components are extra fails to address Behe's argument. The second argument makes an appeal to homologies. But noting that particular proteins display sequence similarities does not provide any insight into the process by which novel protein tertiary structures and binding sites are formed, and it is the processes that Behe argues against. The third argument returns to the definition of critical subcomponents, thus demonstrating that the author of the post recognizes that the first argument is logically incorrect.
It notes that the axle should be considered a critical subcomponent and then provides an example where an ATP-based hydrogen pump system lacking an axle still rotates, whereby the turbine assembly serves one of the functions of an axle, but only when there is no connected load. This is not new information; engineers know of this limited and special case. However, when the axle is removed, another critical function is eliminated, namely the ability to attach a connected load, and therefore the system still does not meet Behe's definition. The fourth argument returns to homologies, again straying very far from the actual argument, which is about the process of developing and assembling the required components, as opposed to locating possible sources of similar gene sequences. And then the post ends, never having addressed Behe's argument that no process capable of generating such systems has been identified.
  25. Indeed. There are a number of serious problems with the expansion model, and you have identified one of them. Here is a summary of the issues with various models of Big Bang cosmology, including the inflationary model you mention: Big Bang Cosmology Summary of Issues