
Atheistic intelligent design


cabinintheforest


However, evolution starts off with the premise of a living cell, with cellular processes already in place such that an evolutionary algorithm (mutation and selection) can operate.

 

The debate in this portion of the thread is whether or not mutation and selection constitute an adequate evolutionary algorithm for the purpose of driving all observed biological diversity. I argue they do not, and I have provided many lines of evidence to support this argument. I have stipulated that they do produce limited adaptation of existing function, allowing some flexibility in fluctuating environments, but what evidence exists to confirm that mutation and selection can break free of the barriers imposed by physical laws, including entropy, and the barriers imposed by information theory that constrain physical-only processes from access to active information?

 

The information in the cell must be considered functional, prescriptive, and whatever other attributes you want such that it produces a living cell. Therefore, even if you could show that these things are necessary to have, evolution already has them.

 

Mutation and selection can work with existing information and even reconfigure it to produce variations of existing function so long as information order is not permitted to degrade by any significant amount. New functional information and new order is required for new form and function. What is the source of new functional information in evolutionary processes?

 

 

That your arguments are limited to the creation of the processes that evolution presupposes leads me to believe that you have given up arguing evolution and instead are arguing against abiogenesis only. But perhaps I missed something.

 

Indeed you have, if you can't see it in the descriptions I have provided. Please review my posts and let me know what it is that you can't see regarding the necessity of new functional information and new order to produce the new form and function posited by evolutionary theory, when that theory is described as accounting for all observed diversity and that diversity is described as a progression from a single living organism branching out to the ones observed today. To accept your argument, one must argue that the first life form contained as much or more functional information and order as that represented by all life forms that have ever existed. Is this your claim?


The debate in this portion of the thread is whether or not mutation and selection constitute an adequate evolutionary algorithm for the purpose of driving all observed biological diversity. I argue they do not, and I have provided many lines of evidence to support this argument. I have stipulated that they do produce limited adaptation of existing function, allowing some flexibility in fluctuating environments, but what evidence exists to confirm that mutation and selection can break free of the barriers imposed by physical laws, including entropy, and the barriers imposed by information theory that constrain physical-only processes from access to active information?

 

Perhaps it would be most productive at this point to just look at the historical evidence rather than for you to try to figure it out theoretically: either fossil evidence, or bacterial evolution experiments if you prefer digital DNA evidence. After all, the very best theoretical arguments simply can't override what has been observed.

 

Indeed you have, if you can't see it in the descriptions I have provided. Please review my posts and let me know what it is that you can't see regarding the necessity of new functional information and new order to produce the new form and function posited by evolutionary theory, when that theory is described as accounting for all observed diversity and that diversity is described as a progression from a single living organism branching out to the ones observed today. To accept your argument, one must argue that the first life form contained as much or more functional information and order as that represented by all life forms that have ever existed. Is this your claim?

 

No. Depending on the definition of information, perhaps it could be argued that all the information that has ever existed already existed in the various particles making up the universe; some physicists argue that information cannot be destroyed. If what we mean is the information contained in DNA, then no, the first organism did not contain all the current DNA information (what a silly idea; let's not play the strawman game, please). If we go with the definition you quoted, I = -log(P), then the new information was created by

 

Mutation and selection can work with existing information and even reconfigure it to produce variations of existing function so long as information order is not permitted to degrade by any significant amount. New functional information and new order is required for new form and function. What is the source of new functional information in evolutionary processes?

 

energy acting on a living system. Energy is the source for increasing the information contained in DNA. Do you doubt this? If you like I can design an experiment so you can verify it for yourself.
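The quoted definition, I = -log(P), can be made concrete in a few lines. This is only an illustrative sketch of self-information (surprisal) in base 2, so the units are bits; the probabilities below are made-up examples, not data from this thread.

```python
import math

# Sketch of the information measure quoted above, I = -log2(P): the
# less probable an outcome, the more information (in bits) its
# occurrence carries.  The probabilities are illustrative only.

def surprisal_bits(p):
    """Self-information of an event with probability p, in bits."""
    return -math.log2(p)

print(surprisal_bits(0.5))        # 1.0 -- a fair coin flip carries one bit
print(surprisal_bits(0.25))       # 2.0
print(surprisal_bits(0.25 ** 10)) # 20.0 -- one specific 10-base DNA string under a uniform model
```

Under this definition, any event that was improbable before it occurred "creates" information when it occurs, which is the sense in which energy-driven changes to DNA can add information.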


You have made this claim on several occasions and once offered an article raising informal objections, but you have not offered formal proof.

Please see D. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Trans. Evolutionary Computation, Vol. 1, No. 1, pp. 67-82, 1997, as cited by Dembski. Specifically, page 68, which states that the work is limited to a "cost function" or "objective function" to be optimized (also known as a "fitness function" in our application) that is either static or time-dependent. Functions that vary depending on the current location in the search space are not included.
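The core NFL intuition can be checked by brute force on a toy search space. The sketch below is a made-up miniature, not a reproduction of the paper's proof: it enumerates every cost function on a three-point domain and shows that any fixed, non-repeating search order achieves the same total performance once you average over all functions.

```python
from itertools import product

# Brute-force check of the No Free Lunch intuition: summed over ALL
# cost functions f: {0,1,2} -> {0,1}, every non-repeating search order
# finds equally good values after the same number of evaluations.

domain = [0, 1, 2]
functions = list(product([0, 1], repeat=len(domain)))  # all 8 cost functions

def best_after(order, f, m):
    """Lowest cost seen after evaluating the first m points of `order`."""
    return min(f[x] for x in order[:m])

for order in [(0, 1, 2), (2, 0, 1), (1, 2, 0)]:
    total = sum(best_after(order, f, 2) for f in functions)
    print(order, total)   # the total is identical for every order
```

Note that this averaging is exactly what breaks down if the function being optimized can itself change with the searcher's location, which is the restriction quoted above.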

 

How, though, did natural selection obtain the information you claim is endogenous? This was the question I asked; could you answer it, please? Your hypothetical example addresses how adaptation locates local optima, but to show that natural selection acting on a random walk (random genetic errors) can account for all diversity, you must demonstrate that natural selection contains information on how to guide the random walk over or around vast crevasses of low fitness via long evolutionary pathways involving countless steps. You must also show that such pathways actually exist.

There are many open hypotheses for this question. One potential answer is that time and an abrupt change in the fitness landscape (such as a change in the environment or in competitors) forced adaptation to proceed in a certain direction.

 

Annu. Rev. Earth Planet. Sci. 2006.34:355-384 provides an interesting overview of the possibilities in fitness landscape change in one particular event, but it is by no means an exhaustive report on this subject.

 

 

I think we are saying the same thing, and that is why I previously said I am consistent with this article. The peer-reviewed articles from the Evolutionary Informatics Lab, including the one you previously cited, do demonstrate a limit on the net total amount of information relative to the probabilistic resources (and information) brought to bear, plus the quantity of active information imported.

I said "not bounded."

 

As I previously said, the designers admit they designed the algorithm, and the design documents and links you provided make it clear that the evolutionary algorithm was designed. The human designers designed a system that designed an antenna. Without the humans, the antenna design would not have come into existence, so the humans were a necessary component of the design. They designed the design. They caused the design.

 

cite [sahyt]

  1. to quote (a passage, book, author, etc.), esp. as an authority: He cited the Constitution in his defense.

The list of papers is a specific list, and these papers collectively, and a couple individually, demonstrate my claim. I am at a loss to understand how I can be more specific in terms of what list of papers was intended by my statement other than the list I provided. If your personal opinion contradicts the conclusions of these papers, so be it; I am not likely to change your opinion with a different list. On the other hand, if you find the conclusions to be factually in error, I would be interested in the factual errors.

By specific, I meant "list the actual peer-reviewed papers that support this particular conclusion, rather than a hodgepodge list of book chapters, conference proceedings and actual papers that cover a variety of topics, many of which don't cover the exact statement I asked you to cite."

 

As a bonus, many of the 335 citations in this paper also serve to support the claims I have made in this thread.

Fair enough. We'll resume this discussion once I've read them all.


I don't really understand what the posters above are saying, but as I see it:

 

There is no doubt that, given an infinite world and an infinite number of generations, anything is possible.

 

Bacteria exist in incredible numbers and can reproduce every 20 minutes.

But even after billions of years the most advanced organism on earth was little more than a bag of chemicals.

 

Then the Cambrian explosion took place.

 

Despite existing in far fewer numbers and reproducing at a much slower rate, multicellular organisms have managed to evolve many very complex structures in only a few hundred million years.

 

I can't help but feel that there is something we are missing.


Please see D. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Trans. Evolutionary Computation, Vol. 1, No. 1, pp. 67-82, 1997, as cited by Dembski. Specifically, page 68, which states that the work is limited to a "cost function" or "objective function" to be optimized (also known as a "fitness function" in our application) that is either static or time-dependent. Functions that vary depending on the current location in the search space are not included.

 

I requested a formal mathematical proof that the NFL theorem does not and cannot apply to the case described by Dembski and Marks. Their peer-reviewed and published article seems to indicate that it does apply. You have once again provided the same informal response to which I previously objected.

 

cite [sahyt]

  1. to quote (a passage, book, author, etc.), esp. as an authority: He cited the Constitution in his defense.

 

The use of evolutionary programming techniques to automate the design of antennas has recently garnered much attention. Considerable research has been focused on determining whether evolutionary techniques can be used to automatically design and optimize antennas so that they outperform those designed by expert antenna designers, and even whether evolutionary techniques can be used to design antennas in cases where humans are simply unable to.

 

They make it clear that humans are the primary cause of these antennas.

 

By specific, I meant "list the actual peer-reviewed papers that support this particular conclusion, rather than a list of hodgepodge book chapters, conference proceedings and actual papers that cover a variety of topics, many of which don't cover the exact statement I asked you to cite."

 

My statement regarding references was not about a specific conclusion. It was about the general argument as a whole regarding the inability of genetic algorithms based solely on physical systems, with no design or designer involved, to self-generate prescriptive information. This was the general argument being made in this portion of the thread, and I provided this list of supportive references.

 

I don't really understand what the posters above are saying, but as I see it:

 

There is no doubt that, given an infinite world and an infinite number of generations, anything is possible.

 

The physically impossible is still impossible no matter how many random attempts are made to overcome it.

 

Bacteria exist in incredible numbers and can reproduce every 20 minutes.

But even after billions of years the most advanced organism on earth was little more than a bag of chemicals.

 

Bacteria are far, far, far, far more than a bag of chemicals.

Edited by cypress

I requested a formal mathematical proof that the NFL theorem does not and cannot apply to the case described by Dembski and Marks. Their peer-reviewed and published article seems to indicate that it does apply. You have once again provided the same informal response to which I previously objected.

I thought I said that the NFL theorems don't apply to natural selection as it occurs in the wild. Since you agree that the paper I cited agrees with Dembski and Marks, you'll also have to agree that Dembski and Marks' work can't apply to a fitness function that can change independently of time.

 

There's a difference between "primary cause" and "supplying information for the design".

 

My statement regarding references was not about a specific conclusion. It was about the general argument as a whole regarding the inability of genetic algorithms based solely on physical systems, with no design or designer involved, to self-generate prescriptive information. This was the general argument being made in this portion of the thread, and I provided this list of supportive references.

I asked you to cite specific conclusions.

 

But never mind, because I'll be rather busy reading for a while and may not be able to respond. I'll get back to you once I've read all those papers.


There's a difference between "primary cause" and "supplying information for the design".

 

They supply information for the design as well.

 

"we mapped the structure of the antenna into a 14-element byte encoded representation scheme. Each element contained two floating point values, a length and a spacing value. Each floating point value was encoded as three bytes, yielding a resolution of (1/2)^24 for each value. The first pair of values encoded the reflector unit, the second pair of values encoded the driven element, and the remaining 12 pairs encoded the directors. Wire radius values were constrained to 2, 3, 4, 5, or 6 mm. Mutation was applied to individual bytes, and one point crossover was used."
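The quoted encoding can be sketched roughly as follows. This is a hedged reconstruction: the genome size and byte layout follow the quote (14 elements × 2 values × 3 bytes, byte-level mutation, one-point crossover), but the decode scaling, mutation rate, and function names are placeholders of mine, not details from the antenna paper.

```python
import random

# Sketch of the byte-encoded genome described in the quote:
# 14 elements x 2 values x 3 bytes = 84 bytes per individual.
GENOME_BYTES = 14 * 2 * 3

def random_genome():
    return bytearray(random.randrange(256) for _ in range(GENOME_BYTES))

def decode_value(genome, i):
    """Decode the i-th 3-byte field to a float in [0, 1) at 2**-24 resolution."""
    raw = int.from_bytes(genome[3 * i: 3 * i + 3], "big")
    return raw / 2 ** 24

def mutate(genome, rate=0.01):
    """Mutation applied to individual bytes, as in the quote."""
    for j in range(len(genome)):
        if random.random() < rate:
            genome[j] = random.randrange(256)
    return genome

def crossover(a, b):
    """One-point crossover of two parent genomes."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

child = mutate(crossover(random_genome(), random_genome()))
print(len(child))  # 84
```

The point of spelling it out is that every one of these choices (representation, resolution, operators) is a design decision made by the humans before the algorithm runs.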

 

I thought I said that the NFL theorems don't apply to natural selection as it occurs in the wild. Since you agree that the paper I cited agrees with Dembski and Marks, you'll also have to agree that Dembski and Marks' work can't apply to a fitness function that can change independently of time.

 

What you said was this: "Furthermore, I will note yet again that the NFL restrictions do not apply here." At the time you and I were both discussing the evolutionary algorithms referenced in Marks and Dembski's papers so I hope you can see why we might have a different understanding.

 

Dembski and Marks' primary point is that evolutionary algorithms are poor simulations or analogs for the natural process of evolution. If you are correct that NFL does not apply to natural evolution but also agree that NFL does apply to evolutionary algorithms then it would seem that Dembski and Marks' claim is supported either way.


Perhaps it would be most productive at this point to just look at the historical evidence rather than for you to try to figure it out theoretically. Either fossil evidence or bacterial evolution experiments if you prefer digital DNA evidence. After all, the very best theoretical arguments just simply can't override what has been observed.

 

Are you now acknowledging that the answer to the question, "what evidence exists to confirm that mutation and selection can break free of the barriers imposed by physical laws, including entropy, and the barriers imposed by information theory that constrain physical-only processes from access to active information?" is that there is no evidence? Is this why you prefer to move the goalposts?

 

The reason why this is a shift of topic is that the historical evidence tells us what we already know, namely that life is diverse, but it does not tell us anything about the processes by which that diversity arose. Fossil and DNA evidence simply inform us that life forms are different in some ways and similar in others.

 

Bacterial evolution experiments demonstrate the ability to adapt, and to damage or destroy existing function, to defeat changing environmental threats (antibiotic and pesticide resistance are examples, as is sickle cell trait). They also demonstrate limited adaptations of existing functions to leverage a niche in environmental conditions (nylonase and other single- and double-step enzyme and enzyme-expression modifications are examples), but there are no indications of the posited stepwise evolutionary pathways greater than 3 steps that would eventually lead to the novel functional prescriptive information gains required for new form and function.

 

energy acting on a living system. Energy is the source for increasing the information contained in DNA. Do you doubt this? If you like I can design an experiment so you can verify it for yourself.

 

Even random mutation processes bring a source of information to alter the information content of DNA. While energy may well be the medium that transmits or imports information, as discussed, entropy considerations render random processes incapable on their own of increasing the order of a macro system over a long progression of discrete steps. Natural selection is posited to provide a mechanism to discern and differentiate between noise that degrades order and stepwise alterations that could increase order. Within the range of discrete differences that represent functional alternative configurations, as defined by the applicable fitness function, in the vicinity of a pre-existing functional configuration, experimentation confirms that genetic error and selection can account for limited adaptation of existing function. However, in order to derive new form and function, significant cumulative change and new functional information are required. For this, the fitness function must contain continuous, smooth pathways from one functional system to another, such that the pathway is not breached by fitness gaps wider than the step distance; otherwise the pathway is cut off.

 

Here is an example of a landscape that includes smooth passable pathways:

 

[image: a fitness landscape with smooth, passable pathways between peaks]

 

Here is one that does not:

 

[image: a fitness landscape whose peaks are separated by impassable gaps]

 

Molecular biology experimentation, as described above, tends to indicate that the fitness landscape mutation and selection operate on is more similar to the one with impassable pathways. In either situation, a successful search requires information about the search space so as to match the steps and process with the landscape. Natural selection and mutation seem to be designed (and likely contain active information) to find localized, shifting maxima in response to changing environmental conditions, but do not appear to be designed to migrate from one local maximum to another in situations where stepwise pathways between local maxima are breached.
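The gap-versus-step-distance argument can be illustrated with a toy simulation. Everything below (the landscape, peak heights, step size, iteration count) is invented for illustration only: a mutation-plus-selection walk with unit steps climbs to the nearest local peak but never crosses a zero-fitness valley wider than its step distance.

```python
import random

# Toy mutation-plus-selection walk on a 1-D fitness landscape with two
# peaks separated by a zero-fitness valley wider than the mutation step.

def fitness(x):
    if 0 <= x <= 14:
        return 10 - abs(x - 10)        # local peak at x = 10 (height 10)
    if 26 <= x <= 40:
        return 20 - abs(x - 30)        # higher peak at x = 30 (height 20)
    return 0                           # valley 15..25: the fitness gap

def climb(x, steps=10_000, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        candidate = x + rng.choice([-1, 1])   # single-step "mutation"
        if fitness(candidate) >= fitness(x):  # "selection": keep non-worse steps
            x = candidate
    return x

print(climb(5))  # settles on the local peak; it never crosses the valley
```

Whether real fitness landscapes look like this, or instead offer neutral or indirect routes around such gaps, is of course the very point in dispute in this thread.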

 

You will have explained nothing new by setting up some uninteresting experiment whereby information is transmitted by energy and imported into DNA. The current context of this thread is the nature of the information available to natural selection and random gene mutation to navigate the fitness landscape and thereby increase the quantity and order of the functional prescriptive information contained in DNA by physical processes alone without violating entropy laws.


The debate in this portion of the thread is whether or not mutation and selection constitute an adequate evolutionary algorithm for the purpose of driving all observed biological diversity. I argue they do not, and I have provided many lines of evidence to support this argument. I have stipulated that they do produce limited adaptation of existing function, allowing some flexibility in fluctuating environments, but what evidence exists to confirm that mutation and selection can break free of the barriers imposed by physical laws, including entropy, and the barriers imposed by information theory that constrain physical-only processes from access to active information?

No one is claiming that evolution allows systems to break free of physical laws (including entropy) except for you; thus this argument of yours is an obvious strawman.

 

In fact, we have repeatedly shown that evolution fits with all physical laws (including entropy).

 

Mutation and selection can work with existing information and even reconfigure it to produce variations of existing function so long as information order is not permitted to degrade by any significant amount. New functional information and new order is required for new form and function. What is the source of new functional information in evolutionary processes?

Selection prevents the information from degrading by any significant amount (because if it did degrade, the organism would not be as fit, would not reproduce, and would be selected out of the gene pool).

 

Actually, if you are willing to accept that information created by evolutionary algorithms is at least "functional", and are willing to accept that bad information is discarded, then you have to accept that any change to the information that is not bad constitutes new information and thus would fulfil your criteria of "New Functional Information".

 

As I have explained (in this thread and in others), entropy does not need a source of low entropy to create a local decrease in entropy. It can do so with a sink of high entropy.

 

This was explained long ago in these threads, right when you first mentioned entropy. I know, as I was one of the ones who mentioned it. So your argument here ignores this fact.

 

Indeed you have, if you can't see it in the descriptions I have provided. Please review my posts and let me know what it is that you can't see regarding the necessity of new functional information and new order to produce the new form and function posited by evolutionary theory, when that theory is described as accounting for all observed diversity and that diversity is described as a progression from a single living organism branching out to the ones observed today. To accept your argument, one must argue that the first life form contained as much or more functional information and order as that represented by all life forms that have ever existed. Is this your claim?

There are chemicals we use to kill bacteria (Antibiotics), some of which have never before existed in nature. When these chemicals were introduced, bacteria had no defense against them and they were extremely effective. However, bacteria now have developed resistance to these chemicals.

 

This means that the bacteria had to evolve new, functional information in order to resist these chemicals. Since these chemicals never existed in nature before, there was no need for the information needed to confer resistance to them. Once the chemicals were introduced, any bacteria with a mutation that conferred even a small amount of resistance would have had an advantage, in that more of them would have survived. As the others would have been killed, there was more space and more resources for the surviving bacteria to use to reproduce, so the mutation would be passed on to new generations. Then any further mutations in the offspring of the survivors that conferred even more resistance would mean that these doubly resistant bacteria would survive in greater numbers and dominate the bacterial population.

 

In other words, evolution is a ratchet for information. It doesn't need to start off with more functional information, because random processes applied to a ratchet-type system will drive the system in a particular direction (in the case of living systems: towards low information entropy, in the context of increasing reproductive success rates).
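The ratchet claim can be sketched in a few lines. Genome length, target, and step count below are arbitrary toy choices of mine, but the mechanism is the one described: random single-bit mutations plus keep-if-not-worse selection drive fitness upward with no foresight.

```python
import random

# Minimal sketch of the "ratchet": random mutation proposes, selection
# discards anything worse, so fitness can only stay level or rise.

rng = random.Random(42)
GENOME_LEN = 32
target = [1] * GENOME_LEN                  # stands in for "resistant" alleles

def fitness(genome):
    return sum(a == b for a, b in zip(genome, target))

genome = [rng.randrange(2) for _ in range(GENOME_LEN)]
history = [fitness(genome)]
for _ in range(2000):
    mutant = genome[:]
    mutant[rng.randrange(GENOME_LEN)] ^= 1   # one random bit-flip mutation
    if fitness(mutant) >= fitness(genome):   # selection discards worse mutants
        genome = mutant                      # the ratchet clicks forward
    history.append(fitness(genome))

print(history[0], "->", history[-1])  # fitness never decreases along the way
```

This sketch assumes a smooth landscape (every single flip changes fitness by exactly one), which is precisely the assumption the other side of this debate contests.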


No one is claiming that evolution allows systems to break free of physical laws (including entropy) except for you; thus this argument of yours is an obvious strawman.

 

In fact, we have repeatedly shown that evolution fits with all physical laws (including entropy).

 

The claim that evolution accounts for all observed diversity and proceeds by physical processes alone, without ever having been provided any active guidance by an intelligent agent, yet generates information order, is effectively a claim that evolutionary processes are not constrained by the laws of entropy.

 

You have not factually established the grand claim that known evolutionary processes account for all observed diversity. It is little more than a prior commitment you hold.

 

As I have explained (in this thread and in others), entropy does not need a source of low entropy to create a local decrease in entropy. It can do so with a sink of high entropy.

 

I previously addressed this. Novel form and function requires new order. Please provide a specific known biological example of new function arising from the removal of non-functional noise in DNA sequences.

 

There are chemicals we use to kill bacteria (Antibiotics), some of which have never before existed in nature. When these chemicals were introduced, bacteria had no defense against them and they were extremely effective. However, bacteria now have developed resistance to these chemicals.

 

Previously addressed. They do so by damaging or otherwise modifying specific components that these antibiotics exploit. When the function is damaged, the chemical is no longer effective, since that avenue is no longer available.

 

In other words, evolution is a ratchet for information. It doesn't need to start off with more functional information, because random processes applied to a ratchet-type system will drive the system in a particular direction (in the case of living systems: towards low information entropy, in the context of increasing reproductive success rates).

 

Addressed by reference to the fitness functions. What evidence do you have that the fitness function in play for random genetic error and selection is smooth and continuous?


They supply information for the design as well.

 

"we mapped the structure of the antenna into a 14-element byte encoded representation scheme. Each element contained two floating point values, a length and a spacing value. Each floating point value was encoded as three bytes, yielding a resolution of (1/2)^24 for each value. The first pair of values encoded the reflector unit, the second pair of values encoded the driven element, and the remaining 12 pairs encoded the directors. Wire radius values were constrained to 2, 3, 4, 5, or 6 mm. Mutation was applied to individual bytes, and one point crossover was used."

 

If that's the case, then I think it would be fair to say that any one cell contains all the information needed for any other cell of any species. After all, each contains a map from information to physical system, in the form of three-nucleotide sequences mapping to amino acids, and a mapping of the order of those codons to the order of the amino acids. So if that is all you meant was necessary, then you have nothing left to say about life needing this information, since all life has this sort of mapping and it applies to all the functions life uses. And here I thought that by "information" we were talking about the actual design of something, rather than the mapping of the design to the physical world. It's so much easier to prove now that you've made it clear you only meant this. And if that's not what you meant, why bring it up?

 

 

Are you now acknowledging that the answer to the question, "what evidence exists to confirm that mutation and selection can break free of the barriers imposed by physical laws, including entropy, and the barriers imposed by information theory that constrain physical-only processes from access to active information?" is that there is no evidence? Is this why you prefer to move the goalposts?

 

The reason why this is a shift of topic is that the historical evidence tells us what we already know, namely that life is diverse, but it does not tell us anything about the processes by which that diversity arose. Fossil and DNA evidence simply inform us that life forms are different in some ways and similar in others.

 

Since when does "just look at reality rather than trying to figure it out in your head" equate to "there's no evidence"? Here's an experiment you can do: make an agar plate which includes random DNA sequences and food (i.e., energy). Put a bacterium on it. After a while, the information entropy of the sequences of DNA will decrease. What have you to say about that?

 

Bacterial evolution experiments demonstrate the ability to adapt, and to damage or destroy existing function, to defeat changing environmental threats (antibiotic and pesticide resistance are examples, as is sickle cell trait). They also demonstrate limited adaptations of existing functions to leverage a niche in environmental conditions (nylonase and other single- and double-step enzyme and enzyme-expression modifications are examples), but there are no indications of the posited stepwise evolutionary pathways greater than 3 steps that would eventually lead to the novel functional prescriptive information gains required for new form and function.

 

Nevertheless, all are examples of new and improved function, and also of new information. All you are doing is arbitrarily deciding that one is function and one is not, that one is worse and the other better, without any valid reason and against all sense. The function of an organism is to survive and reproduce in an environment, and these traits are clearly an improvement in function by that objective measure in a given environment. Using intelligent-design ideas (an intent-oriented definition of function) to judge functionality where there is no intent would be like saying that an example of design couldn't have evolved and therefore was not designed. It would make no sense. You cannot arbitrarily redefine functionality to suit your argument and expect people to take you seriously.

 

Even random mutation processes bring a source of information to alter the information content of DNA. While energy may well be the medium that transmits or imports information, as discussed, entropy considerations render random processes incapable on their own of increasing the order of a macro system over a long progression of discrete steps. Natural selection is posited to provide a mechanism to discern and differentiate between noise that degrades order and stepwise alterations that could increase order. Within the range of discrete differences that represent functional alternative configurations, as defined by the applicable fitness function, in the vicinity of a pre-existing functional configuration, experimentation confirms that genetic error and selection can account for limited adaptation of existing function. However, in order to derive new form and function, significant cumulative change and new functional information are required. For this, the fitness function must contain continuous, smooth pathways from one functional system to another, such that the pathway is not breached by fitness gaps wider than the step distance; otherwise the pathway is cut off.

 

Well, if that is true then it is a good thing life is not a random process. If you like I can demonstrate life reducing entropy on a macro system over a long progression of steps. Try looking at your lawn, for example.

 

As for natural selection decreasing information entropy, I think that is neither necessary nor useful, at least not to an extreme. A DNA strand of identical nucleotides only is definitely not what selection is selecting for, but it would be the minimum information entropy and maximum order. I do recall reading about natural selection selecting for certain codons over others coding for the same amino acid, which would indeed be a specific example of reducing information entropy. The key here is that some codons are more resistant to a mutation changing them into a codon coding for a different amino acid (either just different or different having different attributes like polarity or charge). Selecting the codons less likely to undergo these changes makes the organism more resistant to mutation, and so this example of reducing information entropy increases fitness. But that doesn't mean that because natural selection can reduce information entropy in a particular example that it would continuously do so leading to a DNA strand of only one letter type. Natural selection is about fitness, whatever that might mean at the time and whatever the other consequences may be.

 

Here is an example of a landscape that includes smooth passable pathways:

 

[image: fitness landscape with smooth, passable pathways between peaks]

 

Here is one that does not:

 

[image: rugged fitness landscape with impassable gaps between peaks]

 

Molecular biology experimentation, as described above, tends to indicate that the fitness landscape mutation and selection operate on is more similar to the one with impassable pathways. In either situation, a successful search requires information about the search space so as to match the steps and process with the landscape. Natural selection and mutation seem designed (and likely contain active information) to find localized shifting maxima in response to changing environmental conditions, but do not appear designed to migrate from one local maximum to another in situations where stepwise pathways between local maxima are breached.
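The contrast between the two landscape images can be made concrete with a toy greedy hill-climber (a sketch with invented numbers, not a model of any real genome): on a smooth landscape, single steps always find an uphill neighbor; on a landscape with a fitness gap wider than the step size, the climber stalls at the edge of the gap.

```python
import random

def hill_climb(fitness, start, steps=2000, step_size=1):
    """Greedy one-step climber: accept a neighbor only if fitness improves."""
    x = start
    for _ in range(steps):
        candidate = x + random.choice([-step_size, step_size])
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

def smooth(x):
    # Single broad peak at x = 50; every point below the peak has an uphill neighbor.
    return -abs(x - 50)

def rugged(x):
    # Same peak, but fitness collapses in a gap (20..40) wider than the step size.
    return -1000 if 20 <= x <= 40 else -abs(x - 50)

print(hill_climb(smooth, 0))  # reaches the peak at 50
print(hill_climb(rugged, 0))  # stalls at 19, the edge of the gap
```

Whether real fitness landscapes resemble the first case or the second is exactly what is in dispute here; the sketch only illustrates why the question matters.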

 

I don't think you can quite show this as a 3D graph -- there are multiple dimensions of functionality. You'd need millions of dimensions or more, one for each possible function. Your graph cannot account for the possibility of changing from one functionality to one of the millions or more possible. In fact, I'm fairly certain there are billions of dimensions of functionality relating to disease alone (due to the way some diseases exploit a specific protein structure, randomly changing that structure functions in preventing that disease).

 

You will have explained nothing new by setting up some uninteresting experiment whereby information is transmitted by energy and imported into DNA. The current context of this thread is the nature of the information available to natural selection and random gene mutation to navigate the fitness landscape and thereby increase the quantity and order of the functional prescriptive information contained in DNA by physical processes alone without violating entropy laws.

 

Yeah, all I did is demonstrate that information can increase and information entropy can decrease, which I think were kind of important to your argument. For an example of using energy to decrease information entropy in DNA, consider a bacterium on an agar plate with random DNA. For an example of increasing information in DNA, take a bacterium on an agar plate and count the total information of the DNA before and after.


And if that's not what you meant, why bring it up?

 

It was not what was meant. It was a simple illustration and answer to the question asked of me. Nothing more. Your poor attempt to extend it is a logical fallacy.

 

Here's an experiment you can do: make an agar plate which includes random DNA sequences and food (ie, energy). Put a bacterium on it. After a while, the information entropy of the sequences of DNA will decrease. What have you to say about that?

 

I look forward to your formal proof.

 

Nevertheless, all are examples of new and improved function, also of new information. All you are doing is arbitrarily deciding one is function and one is not, that one is worse and the other better, without any valid reason and against any sense. The function of the organism is to survive and reproduce in an environment, and these traits are clearly an improvement in function by that objective measure in a given environment.

 

How is it that you are a better arbiter of what is new and what is an adaptation of an existing function than I? Improved and reduced function both involve adaptations of an existing function and modification of an existing plan. New form and function involves a new plan independent of any existing plan for existing function. This is so obvious I am surprised you seem to repeatedly deny it.

 

Genetic Load is an objective concept that addresses this question.

 

Well, if that is true then it is a good thing life is not a random process. If you like I can demonstrate life reducing entropy on a macro system over a long progression of steps. Try looking at your lawn, for example.

 

When inputs and outputs are considered, entropy is increased. You are changing the question, moving the goal post.

 

 

I don't think you can quite show this as a 3D graph -- there are multiple dimensions of functionality. You'd need millions of dimensions or more, one for each possible function. Your graph cannot account for the possibility of changing from one functionality to one of the millions or more possible. In fact, I'm fairly certain there are billions of dimensions of functionality relating to disease alone (due to the way some diseases exploit a specific protein structure, randomly changing that structure functions in preventing that disease).

 

Can you show that the actual fitness landscape includes traversable pathways from one organism to another? If you can't, you are speculating. For my part, the images served to illustrate the issue.

 

Yeah, all I did is demonstrate that information can increase and information entropy can decrease, which I think were kind of important to your argument. For an example of using energy to decrease information entropy in DNA, consider a bacterium on an agar plate with random DNA. For an example of increasing information in DNA, take a bacterium on an agar plate and count the total information of the DNA before and after.

 

It is not much of an accomplishment when you must move the goal post in order to make your demonstration work. You are so clever to drop consideration of inputs and outputs. Logical fallacies, I am told are a violation of site rules.


It was not what was meant. It was a simple illustration and answer to the question asked of me. Nothing more. Your poor attempt to extend it is a logical fallacy.

 

Do you wish to change your answer then? Or to explain what it is you meant if you think I misunderstood you? Is a mapping from design information to the physical world, the same as the information needed to design something, or is it not?

 

I look forward to your formal proof.

 

I did the experiment in my head, and it worked as I said. You are free to try to find fault with it, or to experimentally test it with a real world experiment. I guarantee you, most of the random strands of DNA will end up as multiple (nearly exact) copies of the bacterium's DNA. Which has less information entropy than random DNA. Tell me, are you incredibly stupid or lying? Do you seriously expect me to believe you don't know what would happen in the above example?

 

Now stop acting like you live in a make-believe world where no one can tell what will happen in the real world without seeing it with their very own eyes. With the above you lost pretty much all the respect I had for you. I'm not here to play your games.

 

How is it that you are a better arbiter of what is new and what is an adaptation of an existing function than I? Improved and reduced function both involve adaptations of an existing function and modification of an existing plan. New form and function involves a new plan independent of any existing plan for existing function. This is so obvious I am surprised you seem to repeatedly deny it.

 

I just told you how to measure the functionality of any biological function -- how well it helps the organism survive/reproduce. That they do this via a specific means is not really relevant, since it is the effects on fitness and not your measure of functionality that is relevant to evolution. That you presuppose intelligent design, and that the intelligent designer is a reductionist, and then find that this conflicts with evolution, is no surprise.

 

 

When inputs and outputs are considered, entropy is increased. You are changing the question, moving the goal post.

 

Your goalposts are a strawman. Quit pretending that anyone but you believes evolution breaks the laws of thermodynamics. Do you or do you not agree that the local entropy of living creatures may decrease given an energy input?

 

 

Can you show that the actual fitness landscape includes traversable pathways from one organism to another? If you can't, you are speculating. For my part, the images served to illustrate the issue.

 

Yes. Mutations can change DNA, and natural selection is simply a general probabilistic trend, not exactly true of every organism (see for example genetic drift). Traversing the fitness landscape is thus necessarily possible. It's just a question of probability.

 

It is not much of an accomplishment when you must move the goal post in order to make your demonstration work. You are so clever to drop consideration of inputs and outputs. Logical fallacies, I am told are a violation of site rules.

 

So stop using logical fallacies then, and quit moving the goal posts, and quit inventing strawmen and pretending those are the goalposts.

 

OK then, I shall ask the questions:

1) Can local entropy decrease given energy inputs, or can it not? If not, how do you explain plants growing?

2) Can information entropy decrease given energy inputs, or can it not? If not, how do you explain the information entropy decrease when a bacterium grown in a sample containing random DNA digests the DNA and reassembles it into copies of its own DNA?

3) Can information be created by living organisms, or can it not? If not, then how do you explain the effects of occasional mutation of a bacterium using your measure of information, I = - log(P) rather than made-up nonsense?

4) Can new function be created by mutation or can it not? If not, how do you explain the new functionality of the hemoglobin gene due to the change of one nucleotide from an A to a T, which has the new function of forming insoluble fibers, but without alluding to some sort of non-existent intentionality? Or, using the definition I used, that it increases the fitness of the person in question within the context of malaria? Or show that this mutation can't happen?
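Question 3 invokes the self-information measure I = -log(P). A minimal sketch of that measure (the probabilities below are made-up illustrations, not measured mutation rates):

```python
import math

def information_bits(p):
    """Self-information (surprisal) of an event with probability p, in bits."""
    return -math.log2(p)

print(information_bits(0.5))   # 1.0 bit: a fair coin flip
print(information_bits(1e-9))  # about 29.9 bits for an assumed probability of 1e-9
```

The rarer the event, the more bits it carries; this is the sense of "information" the questions above rely on.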


The claim that evolution accounts for all observed diversity and proceeds by physical processes alone without ever having been provided any active guidance by any intelligent agent but instead generates information order is effectively a claim that evolutionary processes are not constrained by the laws of entropy.

:doh: Look, let me explain yet again.

 

You don't need a source of low entropy. You only need to increase total entropy. As an evolutionary system decreases entropy locally and increases it globally, it does not violate entropy. This applies to information as well as thermal entropy.

 

This is a fact, and it is a fact that you are repeatedly ignoring. I have shown that organisms that don't breed (that fail) act as an increase of global information entropy, as they contain information and it is degraded from the source. As many more organisms will degrade compared to the ones that give a decrease in entropy, this means that no violation of entropy is caused.

 

It is perfectly consistent with the laws of entropy and does not violate them at all. A local decrease in entropy (information or otherwise) has to be compensated by a global increase in entropy at least equal to the amount of the decrease.

 

You have not factually established this grand claim that known evolutionary processes account for all observed diversity. It is little more than a prior commitment you hold.

If DNA acts like a blueprint for an organism, then changes to that blueprint will change the organism. As you have not shown that there are parts of an organism that are not dependent on inheritable features (i.e. that are external to the organism), you have no proof of your claim.

 

But it has been well established that DNA does provide the blueprint for an organism, so by this fact alone we can conclude that changes to the DNA equal changes to the organism.

 

If the entire organism is dependent on its DNA for its structure, then this proves my argument. If the phenotype of an organism is based on its genotype and not on an external intelligent agent actively sculpting it, then you have to accept that evolution can account for all observed diversity.

 

It is not so much a question about the existence of evolution, but whether you think that for an organism to grow from an egg into its adult form requires an external agent to make it do so. As the growth of an organism can be shown to originate in its DNA, and changes to this DNA cause predictable changes in the structure of the organism, DNA is what governs the diversity of the organism's growth.

 

But if you are willing to accept that, then where can you insert an intelligent agent into that? You seem to want to insert it into the changes to the DNA. Fine, but that still will not change the fact that the DNA directs the growth of the organism, and therefore the variations in DNA can account for all the diversity of organisms.

 

You have gotten confused between the growth of organisms and evolution.

 

If you are willing to accept that DNA directs the growth of organisms, it doesn't matter how the DNA is changed. If the DNA is changed, then the organism is changed and you get diversity. This is the nature of "Functional Prescriptive Information". When it comes to how changes to this information affect the processes that use it, it doesn't matter how it is changed, it only matters that it is changed.

 

So your insistence on Functional Prescriptive Information has worked against you, because it proves that you don't need to know how information is changed, only that it was changed.

 

The second part of your argument, that this change couldn't occur without an outside intelligent agent, has been disproved. You have even posted that random processes can change information, even introduce it to a system. So you are accepting that information does not need an outside intelligent agent to change it, which counters the second part of your argument. You are doing our work for us; you just refuse to accept that you have disproven your own arguments.

 

I previously addressed this. Novel form and function requires new order. Please provide a specific known biological example of new function due to removal of non-functional noise in DNA sequences.

The noise is not specific to a particular DNA sequence. Evolution works on a species level and it is the collective DNA sequences of the species that we should be talking about. And as such you have accepted one such example. See the next quote from you:

 

Previously addressed. They do so by damaging or otherwise modifying specific components that these antibiotics exploit. When the function is damaged, the chemical is no longer effective since that avenue is no longer available.

And you just answered your own question.

 

However, think about this. If that bacterium could remove part of itself "by damaging or otherwise modifying specific components that these antibiotics exploit" but is still a viable organism, then it will have effectively increased the amount of information entropy. So again you are arguing against your own claims.

 

Addressed by reference to the fitness functions. What evidence do you have that the fitness function in play for random genetic error and selection is smooth and continuous?

We have a function for calculating the gravitational attraction between two bodies. Does this mean that gravity could not exist until we had formally described such a function? Did everything have to hold on to the surface of the Earth to prevent being flung off until Newton wrote down his formula for gravity?

 

No. It would be ridiculous to suggest that.

 

But this is what you are doing. You are saying that because we can create a fitness function for selection, selection cannot occur without it.

 

As I have repeatedly shown, the act of replication can create a fitness function without the need for an external intelligent agent to create it.

 

With replication you get exponential increase in populations. This means that if a particular population has a slight advantage in numbers you can end up with a vast difference in final population numbers.

 

Take this for example:

 

Start with the numbers 100 and 101. Start doubling them, and then after 20 doublings, how much of a difference in the values is there?

 

Let's see:

Starting with 100 we get: 104,857,600

Starting with 101 we get: 105,906,176

 

That is a difference of over 1 million. A single bacterium can have as many as 50 doublings in a single day. What do you think the difference in populations over a single day would be if these were bacteria? What would the difference be if it was over 1,000,000 years (i.e. 365,250,000 days, or 18,262,500,000 doublings)?
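The doubling arithmetic above checks out, and a few lines of code confirm it:

```python
# Double 100 and 101 twenty times each and compare.
a, b = 100, 101
for _ in range(20):
    a *= 2
    b *= 2

print(a)      # 104857600
print(b)      # 105906176
print(b - a)  # 1048576 -- the "over 1 million" gap
```

Note that the ratio b/a is still exactly 1.01 after any number of doublings; it is the absolute gap, not the relative advantage, that grows exponentially.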

 

Because even a small advantage will be increased exponentially this means that only a small difference in survival or replication rates will give the ones with the advantage a massive dominance.

 

So what we have here is selection without recourse to an external intelligent agent. And, specifically, one without a "fitness function". Of course, we can describe a fitness function; that would not be hard. However, this would be the same as the function for gravity, in that the existence of the selection is not dependent on our knowing the function that describes it.

 

 

So, let's look at the criteria you have required:

 

1) Evolution violates entropy.

 

As the laws of entropy don't require a source of low entropy, only a place where entropy can increase globally, a local decrease in entropy due to evolution (or any cause, actually) does not violate entropy.

 

2) Can evolution account for all diversity?

 

Well, as it is the growth of an organism from its DNA that gives the organism its form and makes it different from other organisms, all we need to know is that the DNA has changed. It doesn't matter how the DNA has changed, only that it has.

 

But can evolution change DNA? Mutation can, and selection means that bad changes won't keep getting replicated. So yes, evolution can account for the changes to DNA.

 

3) Novel form and function

 

As any change to a data set constitutes new information, the generation of novel information is simply a fact of change. As this change does not have to be ordered to be produced -- it can be created by a random process -- we don't need order to generate novel information.

 

But what about function? As random change could lead to a section of the data becoming useless (as you pointed out can happen), this can allow changes to that section that have no further impact on the outcome. However, it is possible for that section to produce something that is useful in another way.

 

As a direct example: http://www.youtube.com/user/cdk007#p/c/F626DD5B2C1F0A87/1/SdwTwNPyR9w

 

We know that mutations that occur to DNA can cause a section of that DNA to be duplicated. This frees up that section of duplicated DNA to be changed without causing further impact on the organism. Such a case is with the DNA that allows an organism to make ascorbic acid (vitamin C). In humans, because we can get this vitamin from our diet, these genes can safely be disabled without it impacting us (as has occurred).

 

In humans, this change has not evolved into something else (yet), but it has freed the sequence up to allow changes to occur, and most of these changes will be neutral rather than harmful (whereas if we still needed the ability to produce ascorbic acid they would be harmful mutations). As the experiments I have provided have shown (if you bothered to do them), when mutations are neutral, the sequence can be changed to any other sequence (so long as it does not cause a harmful effect, but then that mutation would be selected out as it would prevent that organism from reproducing).

 

Because of the selective advantage of exponential growth, any mutation that causes a slight positive effect will be exponentially increased. This means that novel form and function can emerge from neutral mutations. This is called genetic drift, and it has been explained to you before this.

 

4) Fitness Functions

 

Although predesigned fitness functions can and do exist, not all selection relies on them. Just as gravity existed before we described a function for it, so too can selection exist before we describe a function for it. One example is the selection caused by exponential population growth through replication. A small advantage in replication (either through rate of replication or starting numbers) can lead to a massive advantage in numbers after even a small number of replications (as I showed above, just 20 replications were enough to create an advantage of over 1 million from a starting difference of 1).

 

All your concerns have been answered, and as such you will have to shift the goal posts to continue your arguments.


Do you wish to change your answer then? Or to explain what it is you meant if you think I misunderstood you? Is a mapping from design information to the physical world, the same as the information needed to design something, or is it not?

 

No change of answer is required. You seem to have missed the context and purpose of the answer I gave to Cap'n's question.

 

 

I did the experiment in my head, and it worked as I said. You are free to try to find fault with it, or to experimentally test it with a real world experiment. I guarantee you, most of the random strands of DNA will end up as multiple (nearly exact) copies of the bacterium's DNA. Which has less information entropy than random DNA. Tell me, are you incredibly stupid or lying? Do you seriously expect me to believe you don't know what would happen in the above example?

 

I seriously believe that you prefer to change the question to something you can answer and then answer it instead. It is clear that you are not able to provide serious answers to the questions posed, and it indicates that the points I make are valid.

 

Your goalposts are a strawman. Quit pretending that anyone but you believes evolution breaks the laws of thermodynamics. Do you or do you not agree that the local entropy of living creatures may decrease given an energy input?

 

I suspect that even the local entropy of living organisms increases over time. Thermodynamic cycles have no net entropy change when they go full circle. Living organisms operate on biological cycles that individually have no net entropy change for each cycle (not including inputs and outputs). Biological processes include irreversible inefficiencies, and thus net entropy should increase. When inputs and outputs are included, net entropy clearly increases. But this is a different question than considering the posited process of evolution. The issue with evolutionary processes is that they are not cyclical processes that come full circle. The posit is that evolution drives fundamental, significant and permanent change. These posited processes of large change are the issue. Wholesale functional biological change requires new functional biological information. Where did the information order come from? That is the question.

 

 

OK then, I shall ask the questions:

1) Can local entropy decrease given energy inputs, or can it not? If not, how do you explain plants growing?

 

In a complete cycle, local entropy is unchanged. It would seem that growth would not involve a decrease in entropy; how could it? With mass increasing, discrete probability states must increase, correct?

 

2) Can information entropy decrease given energy inputs, or can it not? If not, how do you explain the information entropy decrease when a bacterium grown in a sample containing random DNA digests the DNA and reassembles it into copies of its own DNA?

 

Copies do not represent any net change in probabilities of discrete states. It would seem that information entropy is not decreased with offspring.

 

3) Can information be created by living organisms, or can it not? If not, then how do you explain the effects of occasional mutation of a bacterium using your measure of information, I = - log(P) rather than made-up nonsense?

 

I have repeatedly said that random processes import small quantities of information in proportion to the probabilistic resource employed.

 

4) Can new function be created by mutation or can it not? If not, how do you explain the new functionality of the hemoglobin gene due to the change of one nucleotide from an A to a T, which has the new function of forming insoluble fibers, but without alluding to some sort of non-existent intentionality? Or, using the definition I used, that it increases the fitness of the person in question within the context of malaria? Or show that this mutation can't happen?

 

Building on the previous answer, this is an example of modified and degraded previous function due to substitution and replacement with a small amount of imported information by random mutation. The degraded function causes the protein structures to partially collapse into a glob of dysfunctional muck that has no function, thus functional information is degraded. The presence of malaria contributes to this collapse and the spleen destroys and removes these infected and broken dysfunctional blood cells. This is an excellent example of the adaptive limits of evolution. There is no evolutionary path forward, it's a dead end.


In a complete cycle, local entropy is unchanged. It would seem that growth would not involve a decrease in entropy; how could it? With mass increasing, discrete probability states must increase, correct?

 

So you're claiming that local entropy cannot decrease? Perhaps you'd best study the laws of thermodynamics again.

 

Plants growing is not a complete cycle. Yes, if you grow plants and let them decay that would be a different story but that is not what I asked. If chemistry is too complicated for you, how about if we simplify? Does the local entropy of the inside of a refrigerator decrease when the refrigerator is turned on? This is basic thermodynamics.

 

Copies do not represent any net change in probabilities of discrete states. It would seem that information entropy is not decreased with offspring.

 

Really? Can you give an example where replacing random data with repeating data does not decrease the information entropy? Preferably using either formulas or a computer program that measures entropy of strings. Or think it through logically: are there more possible states or less if the data is restricted to repetition?
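A program of the kind requested here is short. The sketch below estimates per-symbol Shannon entropy from symbol frequencies (it treats the string as a memoryless symbol source, the standard simplification):

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Per-symbol Shannon entropy of a string, in bits."""
    n = len(s)
    return sum((c / n) * math.log2(n / c) for c in Counter(s).values())

print(shannon_entropy("ACGT" * 25))  # 2.0 bits/symbol: four equally frequent symbols
print(shannon_entropy("A" * 100))    # 0.0 bits/symbol: pure repetition
```

Restricting data to repetition shrinks the number of possible states, so repeating data always scores at or below random-looking data on this measure.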

 

I have repeatedly said that random processes import small quantities of information in proportion to the probabilistic resource employed.

 

Good, then the increase of information is not a problem.

 

Building on the previous answer, this is an example of modified and degraded previous function due to substitution and replacement with a small amount of imported information by random mutation. The degraded function causes the protein structures to partially collapse into a glob of dysfunctional muck that has no function, thus functional information is degraded. The presence of malaria contributes to this collapse and the spleen destroys and removes these infected and broken dysfunctional blood cells. This is an excellent example of the adaptive limits of evolution. There is no evolutionary path forward, it's a dead end.

 

Except that it is indeed a function, and it works. You arbitrarily chose to call its function something it is not, and it is unsurprising that you find it to be non functional. This is no more than a circular argument -- you ask for new function, and then judge the new function as if it were the original function and act surprised that it isn't. You said you wanted a new function but are instead looking for improvements to the original function. So by new function perhaps you mean nothing more than improving existing function?

 

Also, the above is an example where your assumption of a reductionist intelligent designer fails you. As you point out, when looked at as a whole the mutation provides superior function (the organism lives) via a different method (aka new function, a "scorched earth" defense against malaria). Note however that evolution is not reductionist, the organism as a whole functions better in the environment and this is what is selected for. And not just that, but the original genes are kept too, so that when considered as a whole at the population or species level it is even better, just another tool in the adaptability toolkit, that will be automatically reduced or increased as needed (on average). Your reductionist intelligent designer is not as intelligent as evolution, in the quest for perfect components he fails to see that they must work as a whole and in various environments!


So you're claiming that local entropy cannot decrease? Perhaps you'd best study the laws of thermodynamics again.

 

If a local process goes through a cycle, for each cycle, conditions return to the previous state. My understanding is that biological processes involve cycles. I am more concerned about your ability to apply the entropy law. You know perfectly well I am not making that claim here but you can't resist yet another attempt to discredit even while violating site rules with the logical fallacy of putting words in my mouth.

 

Plants growing is not a complete cycle. Yes, if you grow plants and let them decay that would be a different story but that is not what I asked. If chemistry is too complicated for you, how about if we simplify? Does the local entropy of the inside of a refrigerator decrease when the refrigerator is turned on? This is basic thermodynamics.

 

Growth involves additional mass and higher net entropy, correct? Your claim was that grass growing is an example of a net reduction in entropy. I have questioned that statement and you have been unable to demonstrate how it is true.

 

Really? Can you give an example where replacing random data with repeating data does not decrease the information entropy? Preferably using either formulas or a computer program that measures entropy of strings. Or think it through logically: are there more possible states or less if the data is restricted to repetition?

 

Deterministic processes do not change probability and do not change entropy. Deterministic processes have the ability to replace random data with regularly repeating data, and this is an example of what you asked for. Entropy change is a function of the processes involved.

 

Good, then the increase of information is not a problem.

 

You are changing my words again. Random processes import small amounts of information, and only in proportion to the probabilistic resources available. For example, a random function that imports on average 10^-30 bits of information per cycle, with a cycle time of 15 years, cannot be expected to form new function requiring 1000 bits of new information in any reasonable time.
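Taking these illustrative figures at face value (they are hypothetical rates, not measured quantities), the implied waiting time follows directly:

```python
# Hypothetical figures from the text above.
bits_needed = 1000        # new information required for the new function
bits_per_cycle = 1e-30    # assumed average import per cycle
years_per_cycle = 15      # assumed cycle time

cycles = bits_needed / bits_per_cycle  # 1e33 cycles
years = cycles * years_per_cycle
print(f"{years:.1e} years")            # 1.5e+34 years
```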

 

Except that it is indeed a function, and it works. You arbitrarily chose to call its function something it is not, and it is unsurprising that you find it to be non-functional. This is no more than a circular argument -- you ask for new function, then judge the new function as if it were the original function and act surprised that it isn't. You said you wanted a new function but are instead looking for improvements to the original function. So by "new function" perhaps you mean nothing more than improving existing function?

 

When one asks for new function, it is not unreasonable that the new function actually involve a functional system, as opposed to breaking an existing functional system and then allowing the body's trash compost system to dispose of the broken component. It is not functional and it is not new. It is a striking example of adaptation through component damage. I am not being arbitrary in insisting that a function be functional in the common sense of the word. It is a testament to the weakness of your argument that, despite the diversity observed in the biological world, involving countless trillions of exquisitely functional systems, your examples of evolutionary adaptation involve damage to one of these fabulously functional systems, to the point of death for those unlucky enough to inherit two broken genes, as a mechanism to stave off a scourge that evolutionary processes are unable to defeat through development of new functional processes despite the long years this parasite has been ravaging the human population.

 

Also, the above is an example where your assumption of a reductionist intelligent designer fails you. As you point out, when looked at as a whole the mutation provides superior function (the organism lives) via a different method (aka new function, a "scorched earth" defense against malaria). Note however that evolution is not reductionist, the organism as a whole functions better in the environment and this is what is selected for. And not just that, but the original genes are kept too, so that when considered as a whole at the population or species level it is even better, just another tool in the adaptability toolkit, that will be automatically reduced or increased as needed (on average). Your reductionist intelligent designer is not as intelligent as evolution, in the quest for perfect components he fails to see that they must work as a whole and in various environments!

 

I have repeatedly acknowledged the capability of evolutionary processes to allow for adaptation of existing suites of function by damage of redundant and semi-redundant functions in order to defeat biological and chemical threats. It is a fabulous example of adaptive advantage, but there is no evidence that this process is a step or two in a longer stepwise evolutionary pathway to novel form and function. Your example is yet another case of moving the goalposts and answering a question different from what was asked of you.

 

Let me make my argument clearer: when I said Intelligent processes, I meant those processes that account for the already existing functional forms, not a design process that accounts for newly created novel forms. I said those Intelligent processes that account for the already existing forms must be outside of science.

 

I don't see how this is a problem. Advocates of design predict that one day soon, human genetic engineers will design and construct novel life forms from scratch. If and when this prediction is confirmed, design will account for pre-existing function as well as novel form and function.

 

I also want a clear picture of what the Intelligent Design idea claims to explain. Does it claim to account for how the functional life forms arose with the diversity we see today, or is it just a new field of science that deals with how genetic engineers generate novel functional forms?

 

Design advocates claim that life and biological processes were designed including any and all processes that allow for, enabled, or caused diversification. Design advocates say life appears designed because it was designed, and alien seeding of life on earth is but one mechanism by which it can be explained.


 

I don't see how this is a problem. Advocates of design predict that one day soon, human genetic engineers will design and construct novel life forms from scratch. If and when this prediction is confirmed, design will account for pre-existing function as well as novel form and function.

 

Design advocates claim that life and biological processes were designed including any and all processes that allow for, enabled, or caused diversification. Design advocates say life appears designed because it was designed, and alien seeding of life on earth is but one mechanism by which it can be explained.

 

The main problem is where this low-entropy functional information is coming from, i.e. what is the source of this low-entropy information. As you said, the low-entropy information may be coming from the human mind, but how do we test that? Also, irrespective of whether it comes from human genetic engineers, aliens, or something else, there may be a universal physical low-entropy information source. So the claim of design advocates can only survive and make sense if one provides a way to test that there is indeed a universal physical source of low-entropy information that helps in the process of designing novel functional forms. This is the way I see it.

 

Otherwise it just doesn't make any sense to say that predictions of human engineers capable of designing novel forms from scratch also account for the origin of prior existing functional designs.


If a local process goes through a cycle, for each cycle, conditions return to the previous state. My understanding is that biological processes involve cycles. I am more concerned about your ability to apply the entropy law. You know perfectly well I am not making that claim here but you can't resist yet another attempt to discredit even while violating site rules with the logical fallacy of putting words in my mouth.

 

I asked a yes/no question, and the answer you gave seemed to be "no". Care to answer again, then?

1) Can local entropy decrease given energy inputs, or can it not? If not, how do you explain plants growing?

Please note that growing is not a complete cycle.

 

It seems to me that you'd rather not answer clearly on this one, because "yes" would undermine your arguments but "no" would profess ignorance of basic thermodynamics. So you're trying to change the question, as you have done so many times before. This time you're not getting away from it. There's no cycle in the question. Answer my question, not your own, please.

 

Growth involves additional mass and higher net entropy, correct? Your claim was that grass growing is an example of a net reduction in entropy. I have questioned that statement and you have been unable to demonstrate how it is true.

 

No, growth involves taking mass from outside sources and incorporating it into itself. The plant is not making its own atoms; it is taking them from its environment. If you don't think that this is reducing entropy, that is equivalent to claiming that the entropy of these atoms in plant form is the same as or greater than that of the same atoms in the environment (e.g. after the plant decays). Is this what you are saying? If not, then you agree with me that it is less.

 

Deterministic processes do not change probability and do not change entropy. Deterministic processes have the ability to replace random data with regularly repeating data, which is an example of what you asked for. Entropy change is a function of the processes involved.

 

It doesn't quite answer the question. I'd like to see specifically how the entropy is measured.

 

But taking your claim to its logical conclusion, you are also claiming that the entropy in DNA does not increase when it is randomly changed, because when changed back its entropy is the same as it was before and therefore the strings are of equivalent entropy. While I don't accept that, it still negates your claim of information entropy being a problem, since all the strings have equivalent entropy per your claim.

 

You are changing my words again. Random processes import small amounts of information, and only in proportion to the probabilistic resources available. For example, a random function that imports on average 10^-30 bits of information per cycle, with a cycle time of 15 years, cannot be expected to form new function requiring 1000 bits of new information in any reasonable time.

 

So information can be created/imported, as you say, so it is not a problem that information increases, qualitatively. Note that I let you get away with answering my question in a quantitative manner rather than a qualitative one. If you wish to turn around and use numbers, then please answer my question again, qualitatively this time.

3) Can information be created by living organisms, or can it not? If not, then how do you explain the effects of occasional mutation of a bacterium using your measure of information, I = - log(P) rather than made-up nonsense?

I think you'll find that your idea of "10^-30 bits of information per cycle with a cycle time of 15 years" is completely baseless in the context of my question.
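To make the I = -log(P) measure concrete, here is one illustrative calculation. The probability used is an assumed round figure for one specific single-nucleotide substitution (a ~1e-9 per-site rate and a 1/3 chance of the particular base), not an established value:

```python
import math

# Hypothetical: probability of one specific substitution at one specific site.
p = 1e-9 * (1 / 3)

# Surprisal, I = -log2(P), in bits.
info_bits = -math.log2(p)
print(round(info_bits, 1))  # 31.5
```

The point of the exercise is only that a single observed low-probability event already corresponds to tens of bits under this measure.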

 

When one asks for new function, it is not unreasonable that the new function actually involve a functional system, as opposed to breaking an existing functional system and then allowing the body's trash compost system to dispose of the broken component. It is not functional and it is not new. It is a striking example of adaptation through component damage. I am not being arbitrary in insisting that a function be functional in the common sense of the word. It is a testament to the weakness of your argument that, despite the diversity observed in the biological world, involving countless trillions of exquisitely functional systems, your examples of evolutionary adaptation involve damage to one of these fabulously functional systems, to the point of death for those unlucky enough to inherit two broken genes, as a mechanism to stave off a scourge that evolutionary processes are unable to defeat through development of new functional processes despite the long years this parasite has been ravaging the human population.

 

There's plenty of examples of new function. Your eyes, for example. However, I was under the impression that you wanted new function that could very clearly be seen to be the result of evolutionary processes, because you don't seem to accept the ones that would have taken longer to evolve than can be directly observed. In addition, you have previously complained about other new functions I mentioned as being similar to the original function. You're trying to demand something impossible and use the failure to meet that demand as if it were evidence of something other than that you can make impossible demands. Whenever I offer an example you change your demand. Let me guess: when I offer examples with slight changes, as necessitated by your need to be able to see that the results are possible by evolution, you'll complain that the changes are too slight, and you'll want an example with changes sufficient that you can complain that you can't see how it would be a product of evolution. Nevertheless, small changes still give new function. But to make new function with very few changes quite frequently involves "breaking" things, and only occasionally provides new function in a way that doesn't seem broken, which I'm sure is no surprise if you think it through.

 

Now I've also given examples of riboswitches which can detect different chemicals with just one notable change. http://www.pdb.org/pdb/static.do?p=education_discussion/molecule_of_the_month/pdb130_1.html Is the detection of one chemical (adenine) over another (guanine) not new function?

 

Another example, it takes only a few mutations to convert soluble guanylate cyclase (sGC) from binding to NO to binding to O2. http://www.jbc.org/content/early/2010/03/15/jbc.M109.098269.abstract

 

I have repeatedly acknowledged the capability of evolutionary processes to allow for adaptation of existing suites of function by damage of redundant and semi-redundant functions in order to defeat biological and chemical threats. It is a fabulous example of adaptive advantage, but there is no evidence that this process is a step or two in a longer stepwise evolutionary pathway to novel form and function. Your example is yet another case of moving the goalposts and answering a question different from what was asked of you.

 

I noticed you specifically go out of your way to avoid calling such new function "new function", despite acknowledging that it is new, beneficial (in a given context), and not the original function. What else can it be but new function? It's not just that the hemoglobin is broken, but that it is broken in such a way that it marks infected RBCs for destruction, effectively switching part of its function from that of oxygen supply (which it still slightly performs) to that of disease defense (which it didn't do before). You've yet to make a case as to why something with a new function must keep the original function at the same efficiency as before to be able to be called new function.

 

And as for the accusation of moving goalposts and answering different questions, this is in reply to your answer to my question #4, in which you did not ask any questions. While I do intend to use these questions/answers later in the debate, I will of course give you a fair chance to point out if it doesn't apply to what you are saying, although perhaps I will just ignore that and build a case for evolution. But for now, these are independent of that and simply an attempt to pin down exactly what you believe can be done or not, so that we don't talk past each other (as much). For example, pretty much everyone but you thinks you are the one moving goalposts, but my guess is no one is quite sure where anyone's goalposts are. With all the accusations of moving goalposts, I'd really like to know where everyone thinks they are.


The main problem is where this low-entropy functional information is coming from, i.e. what is the source of this low-entropy information. As you said, the low-entropy information may be coming from the human mind, but how do we test that? Also, irrespective of whether it comes from human genetic engineers, aliens, or something else, there may be a universal physical low-entropy information source. So the claim of design advocates can only survive and make sense if one provides a way to test that there is indeed a universal physical source of low-entropy information that helps in the process of designing novel functional forms. This is the way I see it.

 

Otherwise it just doesn't make any sense to say that predictions of human engineers capable of designing novel forms from scratch also account for the origin of prior existing functional designs.

 

Demonstration that human designers are capable of designing life would confirm that design, as a process, does account for life, irrespective of the designer.

 

Human designers are quite capable of organizing input information into low-entropy functional prescriptive information. It is demonstrated regularly, and it is not in contradiction to physical laws. The laws of probability, on which entropy is based, are predicated on random and deterministic processes alone. The design process is contingent but is not random or deterministic, and therefore does not seem to be subject to the constraints imposed on physical processes by the laws of probability.


There's plenty of examples of new function. Your eyes, for example. However, I was under the impression that you wanted new function that could very clearly be seen to be the result of evolutionary processes, because you don't seem to accept the ones that would have taken longer to evolve than can be directly observed. In addition, you have previously complained about other new functions I mentioned as being similar to the original function. You're trying to demand something impossible and use the failure to meet that demand as if it were evidence of something other than that you can make impossible demands. Whenever I offer an example you change your demand. Let me guess: when I offer examples with slight changes, as necessitated by your need to be able to see that the results are possible by evolution, you'll complain that the changes are too slight, and you'll want an example with changes sufficient that you can complain that you can't see how it would be a product of evolution. Nevertheless, small changes still give new function. But to make new function with very few changes quite frequently involves "breaking" things, and only occasionally provides new function in a way that doesn't seem broken, which I'm sure is no surprise if you think it through.

 

Now I've also given examples of riboswitches which can detect different chemicals with just one notable change. http://www.pdb.org/p...h/pdb130_1.html Is the detection of one chemical (adenine) over another (guanine) not new function?

 

Another example, it takes only a few mutations to convert soluble guanylate cyclase (sGC) from binding to NO to binding to O2. http://www.jbc.org/c...098269.abstract

 

Another relevant example is that a one-nucleotide mutation, [ce]GAG -> GTG[/ce], is the cause of sickle-cell anemia. The codon mutation causes all the Glu residues at a certain position to be subbed out for Vals, which totally changes the quaternary structure of hemoglobin. It turns out sickle cell was a favorable trait in west Africa at one time because it conferred resistance to malaria, which would otherwise result in a shorter lifespan than sickle cell does. That's just a one-nucleotide mutation.


I asked a yes/no question, and the answer you gave seemed to be "no". Care to answer again, then?

1) Can local entropy decrease given energy inputs, or can it not? If not, how do you explain plants growing?

Please note that growing is not a complete cycle.

 

It seems to me that you'd rather not answer clearly on this one, because "yes" would undermine your arguments but "no" would profess ignorance of basic thermodynamics. So you're trying to change the question, as you have done so many times before. This time you're not getting away from it. There's no cycle in the question. Answer my question, not your own, please.

 

Originally you asked this question in the context of biological processes. Now you have changed it to be generic to any single physical process step of any kind. Thermodynamic cycles include steps in which the entropy of one component drops while another's rises by an equal or greater amount. The net change is greater than or equal to zero. I don't answer the question "yes" or "no" because both answers are misleading in one way or another.

 

No, growth involves taking mass from outside sources and incorporating it into itself. The plant is not making its own atoms; it is taking them from its environment. If you don't think that this is reducing entropy, that is equivalent to claiming that the entropy of these atoms in plant form is the same as or greater than that of the same atoms in the environment (e.g. after the plant decays). Is this what you are saying? If not, then you agree with me that it is less.

 

Originally your question seemed to involve thermal entropy, but now you have changed the context to molecular entropy. However, since biological processes follow a blueprint infused into them from their causal agent, the growth and developmental processes are now largely deterministic and do not seem to involve any significant change in probability states.

 

When you originally asked about this case I said I would love to see you prove your claim. That challenge still stands.

 

 

It doesn't quite answer the question. I'd like to see specifically how the entropy is measured.

 

Since the probability of a deterministic process is 1.0, the entropy change is by definition zero, based on the formulation.
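The zero-for-probability-one claim follows directly from the surprisal formula I = -log2(P). A minimal sketch (the helper name is mine):

```python
import math

def surprisal_bits(p: float) -> float:
    """I = -log2(p): information in bits carried by an outcome of probability p."""
    return math.log2(1 / p)  # algebraically identical to -log2(p)

print(surprisal_bits(1.0))   # 0.0 -- a deterministic outcome carries no surprisal
print(surprisal_bits(0.25))  # 2.0 -- a 1-in-4 outcome carries 2 bits
```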

 

But taking your claim to its logical conclusion, you are also claiming that the entropy in DNA does not increase when it is randomly changed, because when changed back its entropy is the same as it was before and therefore the strings are of equivalent entropy. While I don't accept that, it still negates your claim of information entropy being a problem, since all the strings have equivalent entropy per your claim.

 

Outcomes of random processes have probabilities less than one. At a macro scale, for random processes, over large numbers of discrete events, entropy laws identify the direction of energy flow, molecular flow, and information flow: from lower probability states to higher probability states. Deterministic processes have the highest probability.

 

So information can be created/imported, as you say, so it is not a problem that information increases, qualitatively. Note that I let you get away with answering my question in a quantitative manner rather than a qualitative one. If you wish to turn around and use numbers, then please answer my question again, qualitatively this time.

3) Can information be created by living organisms, or can it not? If not, then how do you explain the effects of occasional mutation of a bacterium using your measure of information, I = - log(P) rather than made-up nonsense?

I think you'll find that your idea of "10^-30 bits of information per cycle with a cycle time of 15 years" is completely baseless in the context of my question.

 

I don't know that information can be created. I do know it can be imported. It has not been established that living organisms create information. Random processes such as mutations can import small amounts of information into sub systems, though it most often takes the form of noise that damages functional systems that are derived from this prescriptive information. This reality does not need a quantitative hypothetical example to see that what I say is correct. If you disagree and wish to demonstrate your disagreement with a real quantitative example, please do so. I note that you have yet to respond to any of my requests for quantitative examples even when quantifying it is required to support your points.

 

 

There's plenty of examples of new function. Your eyes for example. However, I was under the impression that you wanted new function that could very clearly be seen to be the result of evolutionary processes, because you don't seem to accept the ones that would have taken longer to evolve than can be directly observed.

 

I am aware that at one time in the past vertebrate eyes did not exist and now they do. The fact of this diversification is accepted, but it is not known how it occurred. You do not improve your case by stating the obvious and then speculating that a particular assumed process caused it. Instead, show that your speculations have causal power. Show that they can derive the required precursors. Offer a four-step evolutionary pathway.

 

In addition, you have previously complained about other new functions I mentioned as being similar to the original function. You're trying to demand something impossible

 

I am only asking for what the theory posits. Does the theory posit the impossible?

 

Now I've also given examples of riboswitches which can detect different chemicals with just one notable change. http://www.pdb.org/pdb/static.do?p=education_discussion/molecule_of_the_month/pdb130_1.html Is the detection of one chemical (adenine) over another (guanine) not new function?

 

What is the significance of this observation? Is it known that one was derived from the other? Are both functional in the same organism? Do they both serve a function? Is there an extended pathway involving this step?

 

Another example, it takes only a few mutations to convert soluble guanylate cyclase (sGC) from binding to NO to binding to O2. http://www.jbc.org/content/early/2010/03/15/jbc.M109.098269.abstract

 

Were these mutations directed by the researchers or did they occur by natural selection?


Originally you asked this question in the context of biological processes. Now you have changed it to be generic to any single physical process step of any kind. Thermodynamic cycles include steps in which the entropy of one component drops while another's rises by an equal or greater amount. The net change is greater than or equal to zero. I don't answer the question "yes" or "no" because both answers are misleading in one way or another.

 

Re-read the question -- it does not mention anything biological. That restriction is only in your imagination. The question immediately following it only applies if you answered no to the previous. Also, no cycles are mentioned. Would you prefer if I asked it slightly differently?

1) Can local entropy of a system at least temporarily decrease given energy inputs, or can it not? If not, how do you explain the entropy inside a refrigerator box when it is switched on?

 

HINT: The answer is "yes", and there is nothing misleading about it other than that you don't like the answer.
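The refrigerator case can be put into numbers. A back-of-envelope sketch with assumed values for heat, work, and temperatures, showing a local entropy decrease alongside a net increase:

```python
# Assumed round numbers for illustration only.
Q = 1000.0      # J of heat pumped out of the cold box
W = 300.0       # J of compressor work
T_cold = 275.0  # K, inside the box
T_hot = 295.0   # K, the surrounding room

dS_box = -Q / T_cold       # local entropy change inside the box (negative)
dS_room = (Q + W) / T_hot  # entropy dumped into the room (positive)
dS_total = dS_box + dS_room

print(dS_box < 0)    # True: local entropy decreases
print(dS_total > 0)  # True: total entropy still increases (second law holds)
```

The room always gains more entropy than the box loses, because heat plus compressor work is rejected at the higher temperature.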

 

Originally your question seemed to involve thermal entropy, but now you have changed the context to molecular entropy. However, since biological processes follow a blueprint infused into them from their causal agent, the growth and developmental processes are now largely deterministic and do not seem to involve any significant change in probability states.

 

I neither said molecular nor thermal entropy. I said entropy. Don't you know what entropy is? Here, please learn and then come back: http://en.wikipedia.org/wiki/Entropy

 

When you originally asked about this case I said I would love to see you prove your claim. That challenge still stands.

 

Are you just wasting my time? Alright, but first tell me: do you think the entropy of a plant is greater or lesser than that of its material components (air/water/land)? I noticed you avoid answering clear simple questions as much as possible. Is that because if you answer either way your answer will harm your argument? I think that just shows you have a weak argument.

 

Since the probability of a deterministic process is 1.0, the entropy change is by definition zero, based on the formulation.

 

If the entropy is zero, then surely the entropy must have changed to a lower value from that of random information? Also, what exactly is this formulation you speak of? You aren't confusing the formula for total entropy and the one for change in entropy, are you?

 

Outcomes of random processes have probabilities less than one. At a macro scale, for random processes, over large numbers of discrete events, entropy laws identify the direction of energy flow, molecular flow, and information flow: from lower probability states to higher probability states. Deterministic processes have the highest probability.

 

So you say the randomly created string has a high entropy but elsewhere you say changing it to a deterministically created string doesn't change the entropy? I don't think that makes sense.

 

I don't know that information can be created. I do know it can be imported. It has not been established that living organisms create information. Random processes such as mutations can import small amounts of information into sub systems, though it most often takes the form of noise that damages functional systems that are derived from this prescriptive information. This reality does not need a quantitative hypothetical example to see that what I say is correct. If you disagree and wish to demonstrate your disagreement with a real quantitative example, please do so. I note that you have yet to respond to any of my requests for quantitative examples even when quantifying it is required to support your points.

 

OK, created or imported into the system from the environment works equally well for my purposes. I'm not asking about useful information; I'm asking about information in this question. OK, let me demonstrate with an example:

String Set 1: ABCDE ABCDE ABCDE

String Set 2: ABCDE ABCDE ABCDF

#1 is an example of a repeated string, analogous to copies correctly made by a deterministic process. #2 is similar, but contains one different string, analogous to deterministic copies with occasional mutation. Using your measure of information, the second set has more information than the first. Therefore information was created (imported, if you prefer).

 

It can be seen that the second set has more information than the first. As for quantitative values, your measure of information does not seem to provide me with enough data to calculate the value (I'd have to know the probability of the data, and if I knew that, calculating the information content would be rather pointless); however, if we use the mutual information of the information with itself, that allows calculation of the information content based on the string itself.
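One illustrative way to put numbers on the comparison is a frequency-based Shannon-entropy estimate over the concatenated sets (the estimator choice is mine, not a claim about the one "right" measure):

```python
import math
from collections import Counter

def entropy_bits(s: str) -> float:
    """Shannon entropy in bits per symbol, from observed character frequencies."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

set1 = "ABCDEABCDEABCDE"  # Set 1 with spaces removed: three exact copies
set2 = "ABCDEABCDEABCDF"  # Set 2: same, with one "mutated" symbol

print(entropy_bits(set1))  # lower: five equally frequent symbols
print(entropy_bits(set2))  # slightly higher: six symbols, uneven frequencies
```

On this estimate the mutated set measures slightly higher, consistent with the claim that the occasional mutation adds information under such a measure.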

 

I am aware that at one time in the past vertebrate eyes did not exist and now they do. The fact of this diversification is accepted, but it is not known how it occurred. You do not improve your case by stating the obvious and then speculating that a particular assumed process caused it. Instead, show that your speculations have causal power. Show that they can derive the required precursors. Offer a four-step evolutionary pathway.

 

 

 

I am only asking for what the theory posits. Does the theory posit the impossible?

 

No. I gave the example of eyes, and mentioned that you would complain that you cannot see how evolution could be responsible, which I was correct about. There's plenty of good literature about the evolution of eyes, but it is too complicated for you. But I also included much simpler examples of new function, and mentioned that you'd find the opposite reason to be a fault, i.e. that it is too simple a change. You're welcome to prove me right on that aspect as well.

 

What is the significance of this observation? Is it known that one was derived from the other? Are both functional in the same organism? Do they both serve a function? Is there an extended pathway involving this step?

 

The significance is new function. Detecting adenine is a different function than detecting guanine, both of which are vital cell components. If you're curious about the other aspects, feel free to read up to your heart's content.

 

As for whether it is known to have been derived from the other, I don't see how that is relevant. It seems like just a ploy to limit our options to include only function that happened in the last ~50 years, and then only while a researcher was looking at it under a microscope, or something like that. What matters to me is that since the changes are slight, it is extremely likely to have happened via known processes of mutation, and so it should be considered to have happened like all the others we've seen, absent any evidence to the contrary (which you haven't offered and almost certainly can't offer). Again, if a tree fell in a forest and no one was there to see it, do we assume the fallen tree and damaged vegetation were poofed into existence by some god, or do we say the tree fell just like any other tree we've seen fall? Or are you saying that unless someone saw the tree fall, they can't say it fell via deduction from other things they know, and should instead assume some intelligent agent planted a fallen tree there?

 

Were these mutations directed by the researchers or did they occur by natural selection?

 

Does it matter? Surely you know how to calculate the odds of randomly creating specific mutations. Simply from a statistical point of view, if something must have happened then you should say it happened, even if it was not observed. Alternatively, there's no reason to believe anything you say, because either you haven't observed it or it's subjective. You do realize we have a pretty good understanding of reality and of maths, don't you? Enough so that someone observing something is frequently less credible than a theoretical calculation of what must have happened? Or are you saying that because researchers have made 2 amino acid substitutions, it is therefore impossible or unlikely for a random process to change those same 2 amino acids? If not, then how is your question relevant? Anyhow, the answer to your question is "yes". The mutations happened in nature in one species, and the researchers also added the mutations to another to verify that those specific mutations were responsible.


Demonstration that human designers are capable of designing life would confirm that design, as a process, does account for life, irrespective of the designer.

 

Are you claiming that those processes which are used by human designers to produce novel forms are the same processes that generated prior existing functional forms?

 

Human designers are quite capable of organizing input information into low entropy functional prescriptive information. It is demonstrated regularly and it is not in contradiction to physical laws.

 

Now I have to assume that this is organized by what we call the "mind", because earlier you argued that the human brain can only store existing information but cannot generate new functional information. So can you please demonstrate how the human mind organizes this?

 

The design process is contingent but is neither random nor deterministic, and therefore does not seem to be subject to the constraints imposed on physical processes by the laws of probability.

 

If a process is neither random nor deterministic, then it might be non-computable (Penrose's non-computability). So are you claiming that design processes are non-computable? Is there a non-computable physical process? I would be very interested if you can show one.


Re-read the question -- it does not mention anything biological. That restriction is only in your imagination. The question immediately following it only applies if you answered no to the previous. Also, no cycles are mentioned. Would you prefer if I asked it slightly differently?

1) Can the local entropy of a system at least temporarily decrease given energy inputs, or can it not? If not, how do you explain the entropy decrease inside a refrigerator when it is switched on?

 

HINT: The answer is "yes", and there is nothing misleading about it other than that you don't like the answer.

 

The answer depends on how one defines "a system", and that is why a "yes or no" answer would be misleading. Traditionally the system includes inputs and outputs, as you have implied, and thus the answer is as I have now given it twice. This is the third time: the entropy change of a system is zero or greater when only physical processes are in play, for the situations you describe, when all parts of the system and its inputs and outputs are included.
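Both sides of this exchange can be reconciled with textbook Clausius bookkeeping: a refrigerator pumps heat Q out of a cold interior at T_cold and dumps Q plus the work W into the room at T_hot, so the local entropy inside the box drops while the total still rises. The numbers below are assumed for illustration:

```python
# Clausius-style entropy bookkeeping for an idealized refrigerator.
# All numeric values are assumptions chosen for illustration.
Q_cold = 100.0   # J of heat removed from the interior
W      = 30.0    # J of electrical work driving the heat pump
T_cold = 275.0   # K, interior temperature
T_hot  = 300.0   # K, room temperature

dS_inside = -Q_cold / T_cold       # local entropy DECREASES inside the box
dS_room   = (Q_cold + W) / T_hot   # entropy dumped into the surroundings
dS_total  = dS_inside + dS_room    # second law: total must be >= 0

print(f"inside: {dS_inside:+.4f} J/K, room: {dS_room:+.4f} J/K, "
      f"total: {dS_total:+.4f} J/K")
```

With these values the interior term is negative while the total is positive, which is exactly the "yes, locally, given energy inputs" answer and the "zero or greater for the whole system" answer at once.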

 

Are you just wasting my time? All right, but first tell me: do you think the entropy of a plant is greater or less than that of its material components (air/water/land)? I noticed you avoid answering clear, simple questions as much as possible. Is that because answering either way would harm your argument? I think that just shows you have a weak argument.

 

Equal or greater. Plant growth is largely a deterministic process, carefully managed and controlled by processes that follow prescriptive instruction sets.

 

 

If the entropy is zero, then surely the entropy must have changed to a lower value from that of random information? Also, what exactly is this formulation you speak of? You aren't confusing the formula for total entropy and the one for change in entropy, are you?

 

The change in entropy is zero for deterministic processes; absolute entropy remains at its previous value. I'm not confused, but you seem to be. Reread my previous statement.

 

So you say the randomly created string has a high entropy but elsewhere you say changing it to a deterministically created string doesn't change the entropy? I don't think that makes sense.

 

Reread my statement. It seems that you continually attempt to change what I am saying. Entropy law defines the direction of flow for system events that are guided by physical processes. Random processes will drive ordered systems to the state of highest probability over time. A single macro event driven by a random process involving large numbers of discrete micro events will result in a net entropy change of zero or greater. A deterministic process has only one outcome and a probability of 1 so no change in entropy occurs. Here again I speak of change in entropy while you wish to imply I speak of absolute entropy values.
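The "probability of 1" point above maps directly onto Shannon's formula, H = -Σ p·log2(p), which is zero for a single certain outcome and maximal for a uniform distribution. A minimal sketch (this illustrates the informational notion of entropy being argued over, not thermodynamic entropy itself):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2 p), skipping zero terms."""
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    return h if h > 0 else 0.0

# A deterministic process: one outcome with probability 1, so H = 0 bits.
deterministic = shannon_entropy([1.0])

# A uniform random process over 4 outcomes: maximal H = 2 bits.
random_uniform = shannon_entropy([0.25] * 4)

print(deterministic, random_uniform)
```

Whether this informational quantity can be identified with thermodynamic entropy is itself part of the dispute in this thread; the sketch only pins down the formula both posters are gesturing at.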

 

OK, created or imported into the system from the environment work equally well for my purposes. I'm not asking about useful information, I'm asking about information in this question. Let me demonstrate with an example:

String Set 1: ABCDE ABCDE ABCDE

String Set 2: ABCDE ABCDE ABCDF

#1 is an example of a repeated string, analogous to copies correctly made by a deterministic process. #2 is similar, but contains one different string, analogous to deterministic copies with occasional mutation. Using your measure of information, the second set has more information than the first. Therefore information was created (imported, if you prefer).
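One way to make the comparison above concrete is to measure the per-symbol Shannon entropy of each set's characters (ignoring the separating spaces). Under that particular measure the set containing the mutated copy does score higher; note this is only one of several possible information measures, chosen here for illustration:

```python
import math
from collections import Counter

def per_symbol_entropy(s):
    """Shannon entropy in bits per symbol of a string's character frequencies."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

set1 = "ABCDE ABCDE ABCDE".replace(" ", "")   # exact repeats
set2 = "ABCDE ABCDE ABCDF".replace(" ", "")   # one "mutated" copy

print(per_symbol_entropy(set1))   # uniform over A-E: log2(5), about 2.32 bits
print(per_symbol_entropy(set2))   # the mutation raises it to about 2.51 bits
```

So on a frequency-based measure, the single-character change strictly increases the entropy (information content) of the set, which is the claim the example is making.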

 

Random processes can import information; I stipulated this months ago and repeated it in the previous post. You have added nothing to this understanding. I do not see where information was created.

 

No. I gave the example of eyes, and mentioned that you would complain that you cannot see how evolution could be responsible, which I was correct about. There's plenty of good literature about the evolution of eyes, but it is too complicated for you. But I also included much simpler examples of new function, and mentioned that you'd find the opposite reason to be a fault, i.e. that it is too simple a change. You're welcome to prove me right on that aspect as well.

 

I am gratified that even you can see the weakness in your arguments, to the point that you are able to predict the response. The scientific process requires testable and repeatable demonstrations of what is posited, not watered-down versions that demonstrate something else yet are blindly accepted by those who have drunk the Kool-Aid.

 

As for whether it is known to have been derived from the other, I don't see how that is relevant. It seems just a ploy to limit our options to include only function that happened in the last ~50 years and then only while a researcher was looking at it under a microscope, or something like that. What matters to me is that since the changes are slight then it is extremely likely to have happened via known processes of mutation and so should be considered to have happened like all the others we've seen absent any evidence to the contrary (which you haven't and almost certainly can't offer).

 

If it is "extremely likely to have happened," then the precursors will have happened millions of times over in experimental biology during the past 50 years. Probability theory and entropy considerations, on the other hand, inform us that it does not happen by the posited processes. Other processes must be involved.

 

Again, if a tree fell in a forest and no one was there to see it do we assume the fallen tree and damaged vegetation were poofed into existence by some god or do we say the tree fell just like any other tree we've seen fall? Or are you saying that unless someone saw the tree fall they can't say it fell via deduction from other things they know and should instead assume some intelligent agent planted a fallen tree there?

 

When one sees a fallen tree in a forest, one develops a hypothesis for how it may have gotten that way and then sets up a repeatable experiment to test the hypothesis. When the experiments confirm the hypothesis, one declares with a high degree of confidence that one understands how fallen trees come to be that way.

 

Does it matter?

 

Of course it matters.

 

Surely you know how to calculate the odds of randomly creating specific mutations. Simply from a statistical point of view, if something must have happened then you should say it happened, even if it was not observed. Alternatively, there's no reason to believe anything you say, because either you haven't observed it or it's subjective. You do realize we have a pretty good understanding of reality and of maths, don't you? Enough so that someone observing something is frequently less credible than a theoretical calculation of what must have happened.

 

We are debating how it happened, not that it happened. Once again you commit a logical fallacy. Are there different rules for staff members on this site?

 

Or are you saying that because researchers have made 2 amino acid substitutions, it is therefore impossible or unlikely for a random process to change those same 2 amino acids? If not, then how is your question relevant?

 

It is relevant because the debate centers on the process, not the fact of its existence. Since the researchers directed the mutations, and since design-directed mutations are specifically excluded from your position, one cannot conclude that random mutation and natural selection are able to accomplish what this experiment accomplished. It is apples and oranges to make this claim, and yet another logical fallacy.

 

Anyhow, the answer to your question is "yes". The mutations happened in nature in one species, and the researchers also added the mutation to another to verify that those specific mutations were responsible.

 

But you don't know how the changes occurred in nature... I see.

 

Another relevant example: a one-nucleotide mutation, GAG → GTG, is the cause of sickle-cell anemia. The codon mutation causes the Glu residue at position 6 of the β-globin chain to be substituted with Val, which totally changes the quaternary structure of hemoglobin. It turns out the sickle-cell trait was favorable in West Africa at one time because it conferred resistance to malaria, which would otherwise result in a shorter lifespan than the trait itself. That's just a one-nucleotide mutation.
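The single-nucleotide change described above can be illustrated with a tiny codon lookup. Note the table below is an assumption for brevity: it contains only the two codons at issue (GAG encodes glutamate, GTG encodes valine) rather than the full standard genetic code:

```python
# Minimal codon table: truncated to the two codons in question for brevity.
CODON_TABLE = {"GAG": "Glu", "GTG": "Val"}

normal_codon = "GAG"
# The sickle-cell mutation: A -> T at the codon's second position.
mutant_codon = normal_codon[:1] + "T" + normal_codon[2:]

print(f"{normal_codon} -> {CODON_TABLE[normal_codon]}")   # GAG -> Glu
print(f"{mutant_codon} -> {CODON_TABLE[mutant_codon]}")   # GTG -> Val
```

One flipped base changes the encoded amino acid from a charged residue (Glu) to a hydrophobic one (Val), which is what drives the altered hemoglobin structure described above.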

 

This example was discussed in the immediately preceding posts. The sickle-cell trait provides a slight advantage to those who contract malaria because malaria further weakens the red blood cell, to the point that the spleen detects and destroys the infected cell prior to the catastrophic and often fatal anemia and seizure-induced oxygen deprivation precipitated by the immune system's final attempt to eradicate the uncontrollably replicating parasite. However, full sickle-cell anemia is more detrimental to reproduction than even malaria. Regardless of the path one takes, the sickle-cell trait seems to be yet another evolutionary dead end that provides some modest adaptive advantage while severely degrading a slightly redundant functional process. The theory of evolution posits extended evolutionary pathways made up of these observed one- and two-step adaptations of functional systems leading to novel form and function. While there are a modest number of examples of one- and two-step adaptations, there are zero examples of even four-step extensions leading to the events posited by the theory. The predictions made by this theory are failing in biological experimentation.

