What is information?


Mr Skeptic


So, cypress: do you have any evidence that natural selection functions differently, and cannot generate new information where a genetic algorithm can? Or that natural selection's fitness functions are somehow invalid, or incorrect, or unusable? Or that the maximum rate of evolutionary change, as calculated using information theory, fitness functions, and accommodations for millions of simultaneously reproducing and breeding organisms, cannot match the rate expected?

 

I struggle to see how these questions matter to the discussion of genetic algorithms unless it is shown that natural selection does consistently generate functional (or, as Marks and Dembski describe it, "specified") information faster and more efficiently than a blind search.

 

I would think that one taking the position that evolutionary processes do generate large quantities of novel functional information would want to demonstrate that known evolutionary processes do generate functional information at the rates expected.


I did not ask for the mathematical probability of generating a particular string of digital values. I asked for an actual case of a natural random process generating, for example, the construction plans for a functional system. Do you have a process in operation today that is known to have accomplished such a task?

 

 

 

Is a mathematical model reality or does it sometimes model reality? Are all mathematical models realistic and accurate?

 

Math is more certain than reality. That you insist on something less reliable rather than something more reliable shows you are not interested in the truth, but rather in playing games. If you have trouble believing things that are more certain than your own existence, that is your problem, not mine. As I said, a random number generator can generate any information you specify, and I can give you the odds that it will generate it.

 

Or are you questioning the existence of random number generators?
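As a rough illustration of the odds claim above (my own sketch, not anything posted in this thread; the 4-bit target is arbitrary): the chance of a uniform random bit generator producing one specified n-bit string in a single draw is 2^-n, and a simple loop will eventually produce it.

```python
import random

def odds_of_specified_string(n_bits):
    """Probability that a uniform random bit generator emits one specified n-bit string in a single draw."""
    return 2.0 ** -n_bits

def draws_until_match(target, rng):
    """Draw random bit strings until the specified one appears; return how many draws it took."""
    draws = 0
    while True:
        draws += 1
        candidate = "".join(rng.choice("01") for _ in range(len(target)))
        if candidate == target:
            return draws

if __name__ == "__main__":
    target = "1011"                                      # arbitrary 4-bit example
    print(odds_of_specified_string(len(target)))          # 0.0625, i.e. 1 chance in 16 per draw
    print(draws_until_match(target, random.Random(0)))    # finite; about 16 draws on average
```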

 

You have thus far not provided what was requested. Your point is moot.

 

If you cannot accept the truth, then why do you pretend to seek it?

 

What if? Sounds like speculation. I have made a distinction between different kinds of information. The fact that one can tell a construction blueprint apart from a table of data or from lines of computer source code indicates I am correct.

 

What is not speculation is that, despite several requests, you have failed to provide an example of data that is not information. This is evidence that you are purposely avoiding the issue, which is often due to being wrong about it and not wanting to admit it.

 

Wikipedia provides a summary of the branches and different kinds of information in this article.

 

See also the link below.

 

In particular these areas: Computer Science, Source coding, Linguistics, Cryptography, Informatics, Electrical Engineering, and others.

 

I searched these and none of them contain the phrase "digitally encoded coherent functional information". Sorry.

 

Were that true, we could also say there is no point in talking about the information content of digital source code. Source code is processed by the compiler and the results are executed to derive function. You will have a hard time convincing Bill Gates, for example, that source code is not information because it is not addressed to anyone and no one seems particularly surprised by the messages; he finds the digital code contained in biological systems uncannily computer-like, only far more sophisticated than anything human designers have written to date.

 

Your point is false and you have not demonstrated that natural random processes do generate functional digital code of the size observed in computer systems and biological systems.

 

So then you admit that I am right, that information need not be a message?

 

The functional output of 90% of the computer systems I write code for is analog also. This is particularly true for control systems.

 

Your code will work just fine whether connected to the analogue systems or the information is just sent to /dev/null. The output of your code, incidentally, is digital.

 

As near as I can tell, every genetic algorithm offered thus far succeeds because the designer designed them to succeed.

 

Nope, the genetic algorithm does not care about the intentions of the programmer. As you say, though, the results are essentially inevitable, which is excellent evidence for the power of evolution. This is why there can be so many arguments about evolution without even looking at the data -- simply understanding the process itself is powerful evidence in its own right.

 

They are neither random nor natural. Perhaps some day someone will offer one up that succeeds of its own accord and does not import an information source that allows the system to succeed. Robert Marks and William Dembski have published several peer-reviewed papers on this point.

 

In short, your response does not seem to uncover anything to indicate the power of random processes with respect to these unique kinds of information generated by humans and found in biological systems. It would be fascinating to hear of some natural process or system other than mind accomplishing such tasks. It appears that random systems can and do shuffle information around, but thus far only a mind has been shown to be capable of generating this kind of information.

 

Whatever you may think about man-made genetic algorithms, the original one was not designed by humans and follows from the laws of nature. Some people do think the laws of nature were designed, but that's a different argument.


I struggle to see how these questions matter to the discussion of genetic algorithms unless it is shown that natural selection does consistently generate functional (or, as Marks and Dembski describe it, "specified") information faster and more efficiently than a blind search.

 

I would think that one taking the position that evolutionary processes do generate large quantities of novel functional information would want to demonstrate that known evolutionary processes do generate functional information at the rates expected.

 

Dembski's paper you cited states that genetic algorithms work faster than a blind search to achieve optimization against a given fitness function. "When subjected to rounds of selection and variation, the [naturally selected] agents can demonstrate remarkable success at resolving the problem in question."

 

On the other hand, Dembski cites other work as showing that search algorithms cannot generate new information "without problem-specific information about the search." This is unsurprising. The problem-specific information in genetic algorithms is the fitness function, which "knows" about the desired result. With this information, genetic algorithms can create information, according to Dembski's paper. Several examples are given of genetic algorithms that search faster than a "blind" search.
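To make that concrete, here is a minimal sketch of my own, in the spirit of Dawkins' "weasel" program rather than any specific algorithm from Dembski's paper: the fitness function carries the problem-specific information (how close a string is to the target), and with it the search reaches the target in thousands of evaluations instead of the astronomical number a blind search would need.

```python
import random

rng = random.Random(42)
TARGET = "METHINKS IT IS LIKE A WEASEL"        # illustrative target, after Dawkins
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    """The problem-specific information: how many positions already match the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def random_string():
    return "".join(rng.choice(ALPHABET) for _ in TARGET)

def genetic_search(pop_size=100, mut_rate=0.05, max_gens=1000):
    """Rounds of variation (point mutation) and selection (keep the fittest candidate)."""
    parent = random_string()
    evaluations = 0
    for generation in range(1, max_gens + 1):
        children = ["".join(c if rng.random() > mut_rate else rng.choice(ALPHABET)
                            for c in parent)
                    for _ in range(pop_size)]
        children.append(parent)          # keep the parent so fitness never decreases
        evaluations += pop_size
        parent = max(children, key=fitness)
        if parent == TARGET:
            return generation, evaluations
    return None, evaluations

generations, evaluations = genetic_search()
print(f"guided by the fitness function: target reached in {generations} generations "
      f"({evaluations} evaluations)")
# A blind search samples strings at random; with 27 characters and 28 positions it
# needs on the order of 27**28 draws on average to hit the same target.
print(f"blind-search expectation: roughly {float(27 ** 28):.1e} draws")
```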

 

Again, do you have any evidence that natural selection functions differently, and cannot generate new information where a genetic algorithm can? Or that natural selection's fitness functions are somehow invalid, or incorrect, or unusable?


Dembski's paper you cited states that genetic algorithms work faster than a blind search to achieve optimization against a given fitness function. "When subjected to rounds of selection and variation, the [naturally selected] agents can demonstrate remarkable success at resolving the problem in question."

 

On the other hand, Dembski cites other work as showing that search algorithms cannot generate new information "without problem-specific information about the search." This is unsurprising. The problem-specific information in genetic algorithms is the fitness function, which "knows" about the desired result. With this information, genetic algorithms can create information, according to Dembski's paper. Several examples are given of genetic algorithms that search faster than a "blind" search.

 

Of course they do and that is my point. They do so because the designer designed them to. Marks and Dembski show in the series of articles that they accomplish this by making use of imported active information and by use of a single target and a Hamming oracle, whereas natural selection has no unique target.

 

Again, do you have any evidence that natural selection functions differently, and cannot generate new information where a genetic algorithm can? Or that natural selection's fitness functions are somehow invalid, or incorrect, or unusable?

 

Absence of a target seems to be one key difference. Another is the import of active information by the designer. Dembski and Marks point these and other differences out in the series of articles.

 

Recall, though, that my argument remains a critique of genetic algorithms, not of natural selection, so I don't see that I have any obligation to make a case against it.


Of course they do and that is my point. They do so because the designer designed them to. Marks and Dembski show in the series of articles that they accomplish this by making use of imported active information and by use of a single target and a Hamming oracle, whereas natural selection has no unique target.

By your phrasing, you imply that the imported active information from the "designer" contains information on how to solve the problem. This is not so. The imported information is the fitness function, which specifies the optimal goal of the algorithm. How the genetic algorithm reaches this goal, and the mechanisms developed through random mutation, are up to random chance. The solution information, such as the specific arrangement of chemicals to complete a certain reaction, is indeed generated by the genetic algorithm. The problem-specific information is the fitness function that evaluates the arrangements of chemicals as they converge on an ideal.

 

In short, the randomness of the genetic algorithm is responsible for generating the solution, while the fitness function is responsible for defining what solutions are acceptable.

 

But in the end, yes, genetic algorithms require fitness functions to be defined for them to succeed. This is not a surprise. With a defined fitness function, genetic algorithms are indeed examples of what you requested: "random processes generating functional digital code." The implied question is whether completely natural, non-designed processes can replicate this success and generate information.

 

Absence of a target seems to be one key difference. Another is the import of active information by the designer. Dembski and Marks point these and other differences out in the series of articles.

 

Recall, though, that my argument remains a critique of genetic algorithms, not of natural selection, so I don't see that I have any obligation to make a case against it.

Natural selection is a specific case of a genetic algorithm, so it is useful as an example.

 

"Absence of a target" is misleading. Natural selection defines targets -- reproductive success, for a start -- which can be met. These targets can be broken down into simpler, more specific targets, such as "metabolizing sugar" or "breathing oxygen" and so on. Now, natural selection is at a disadvantage compared to most genetic algorithms: its fitness function is comparatively complex, situation-dependent, and unpredictable. However, recall the advantages I also pointed out, such as parallel development and gene transfer through breeding.

 

Dembski and others have not yet demonstrated that a fitness function such as those defined by natural selection cannot succeed in the same way that those in a simpler genetic algorithm can. (The characteristics required of fitness functions, such as problem-specific information about what will or will not succeed, seem to be met. Natural selection kills what will not succeed by definition.) They also have not demonstrated that a natural system sharing the characteristics of an intelligently designed fitness function -- like natural selection -- cannot behave in the same way as a designed function. In other words, is the problem-specific information necessarily defined? Can "knowledge" of the solution arise simply from "whatever isn't a solution dies naturally"?


If you look at natural selection, this is based on lowest potential. For example, if the food source is high in a tree, natural selection will favor those animals which can get the food with the least expenditure of energy. This may mean a longer neck or the ability to climb, both of which allow the result with the least waste of energy. A short neck needs to work harder, while poor climbing skills mean more falling and injury. If we take 1000 random genetic information changes, geared around this food-in-the-tree potential, the most efficient will be favored by natural selection. The misinformation in the DNA will be removed from nature by selective disadvantage.

 

In human culture, this may not always be true, since humans can "lie" and "spin", creating misinformation about the potentials. For example, the smart consumer wishes to spend their money efficiently. The used car salesman will spin the quality of the junk car to make it look like a good deal. The consumer acts on this misinformation as if it were real, creating an efficiency in their mind but an inefficiency in reality. This does not happen in nature, since nature does not spin. Selective advantage uses the reality of the lowest environmental potential to help sort random mutation information.

Edited by pioneer

By your phrasing, you imply that the imported active information from the "designer" contains information on how to solve the problem. This is not so.

 

The imported information is the fitness function, which specifies the optimal goal of the algorithm. How the genetic algorithm reaches this goal, and the mechanisms developed through random mutation, are up to random chance. The solution information, such as the specific arrangement of chemicals to complete a certain reaction, is indeed generated by the genetic algorithm. The problem-specific information is the fitness function that evaluates the arrangements of chemicals as they converge on an ideal.

 

The designers often import information on how to solve the problem. Even if the only active information imported serves to define an artificial or contrived fitness function that guarantees success, this is design. However, in subsequent analyses of popular algorithms, the researchers show that information is also often added in other ways, including efficient search queries and prior knowledge of the search space, to improve the performance of the search routine; for example, by rewarding intermediates that are not immediately useful but are known to be required for the final product, and by eliminating otherwise useful intermediates that are known to be deleterious to the final product.

 

In short, the randomness of the genetic algorithm is responsible for generating the solution, while the fitness function is responsible for defining what solutions are acceptable.

 

Careful selection of the fitness function is critical for success, though. In genetic algorithms it is selected and defined by the programmer deliberately to ensure that there are selectable evolutionary pathways from start to finish, even if the shape of the function has to be contrived. In other words, careful selection of the fitness function makes success a foregone conclusion; all that is left is to ensure that the search routine is efficient enough to find a pathway without running out of computing resources. In the successful examples, it turns out the designer imports prior knowledge to do both.

 

But in the end, yes, genetic algorithms require fitness functions to be defined for them to succeed. This is not a surprise. With a defined fitness function, genetic algorithms are indeed examples of what you requested: "random processes generating functional digital code." The implied question is whether completely natural, non-designed processes can replicate this success and generate information.

 

Except that to date all successful genetic algorithms include design in the process of generating information. It is the design and not the random algorithm that drives success. So we can say that a designing mind can import design into a genetic algorithm that is capable of generating functional information faster than a blind search, though often, but not always, more slowly than if the designer had used a more conventional approach.

 

I am anxious to hear of an algorithm that does not make use of what Marks calls "active information" and successfully generates functional information faster than a blind search.

 

Natural selection is a specific case of a genetic algorithm, so it is useful as an example.

 

"Absence of a target" is misleading. Natural selection defines targets -- reproductive success, for a start -- which can be met. These targets can be broken down into simpler, more specific targets, such as "metabolizing sugar" or "breathing oxygen" and so on. Now, natural selection is at a disadvantage compared to most genetic algorithms: its fitness function is comparatively complex, situation-dependent, and unpredictable. However, recall the advantages I also pointed out, such as parallel development and gene transfer through breeding.

 

To better understand the difference, or absence, of a target, note that natural selection begins with a functional system that is reproducing, and therefore already accomplishing the goal of natural selection, and ends with a functional system that is reproducing, even if it does nothing except meander around the original configuration. Natural selection cannot begin with a non-functional system, whereas all the genetic algorithms I know of except one (Stylus) can and do begin with non-functional configurations and, by virtue of the designer's choice of fitness function and search efficiency, converge on the target defined by the designer. Furthermore, the fitness functions in genetic algorithms reward the algorithm for intermediate solutions that are themselves non-functional, which is not something natural selection does. They (except Stylus) are clearly different in this respect.

 

The Stylus algorithm uses a fitness function and goal more similar to what is understood of natural selection. One can adjust the fitness criteria and mutation step distance to observe the effects of modifying the character of the fitness functions. It is interesting that when one adjusts the model to be similar to what we observe about natural selection, the model makes adaptations but it does not generate any new functional forms.

 

Dembski and others have not yet demonstrated that a fitness function such as those defined by natural selection cannot succeed in the same way that those in a simpler genetic algorithm can. (The characteristics required of fitness functions, such as problem-specific information about what will or will not succeed, seem to be met. Natural selection kills what will not succeed by definition.) They also have not demonstrated that a natural system sharing the characteristics of an intelligently designed fitness function -- like natural selection -- cannot behave in the same way as a designed function.

 

The Stylus program provides a great deal of insight into why no observations have been made of natural selection generating novel form and function, or even their small components. It makes sense: the subcomponents (for example, a new tertiary fold that generates a new protein structure, new binding sites, or new expression control) don't generally increase reproductive advantage until all components are in place, and even these subcomponents generally require four or more specific alterations. Once one realizes that many novel functions require many integrated proteins and multitudes of specific control sequences, the gap between advantageous configurations appears to be too wide. Stylus allows one to see how this works.

 

What is key for this discussion of generating particular kinds of information, namely digitally encoded functional or, as Dembski and Marks describe it, "complex specified" information, is that thus far only mind has been observed to have generated information in quantities greater than what would be predicted by a blind search.


Random information is connected to entropy. Entropy needs energy or it can't increase. Throwing dice involves the addition of some form of energy. If we stop adding energy to the dice, whatever side the dice fell on previously is the final top we would see. To further drive the random dice process, we need an energy source. For a computer, how many watts of energy does it take to run a random number generator, and will it work without energy?

 

If you look at the human mind, there is an energy connection to information processing. The unknown creates a tension or potentiates the mind. The primitive fear of novelty is one irrational way the tension of the unknown is expressed in the short term. The potential within the tension can not only cause the entropy of primitive irrationality, but it can also cause the mind to generate random explanations to help lower the tension. The potential creates the energy needed to increase the entropy. The best explanations lower potential.

 

For example, I walk in the woods and see a new animal I have never seen. There is an immediate excitement due to the potential in the tension. Within this tension, the brain is trying to associate the animal with my memory but can't overlap anything. I might then try to explain this new animal by equating features to other animals that I know. This helps, somewhat. Someone with me says, that is a "so and so". Since the unknown has been resolved, the tension is resolved. The entropy of my mind lowers since there is less energy for entropy.

 

In the chemical world of biology, there are also tensions and potentials. Once a potential is set, we have the energy needed for the entropy within chemical brain storming, such as mutations. The brain works the way it does, because it is an extension of biochemistry. It is not unique to chemistry but simply extrapolates the energy/entropy laws.

 


Edited by pioneer

Let us go back to this claim

As near as I can tell, every genetic algorithm offered thus far succeeds because the designer designed them to succeed. They are neither random nor natural. Perhaps some day someone will offer one up that succeeds of its own accord and does not import an information source that allows the system to succeed. Robert Marks and William Dembski have published several peer-reviewed papers on this point.

 

Note that at this point we are talking about the information content of search algorithms, not about information per se. That being said, it is true that, averaged over all possible landscapes, no algorithm outperforms a random walk. A rather obvious observation. An algorithm only performs well in specific search spaces; no surprise there. Averaging over all search spaces assigns identical outcomes to each node, and thus each choice leads to the same result. So what it basically boils down to is that certain search algorithms perform better in specific search spaces and not at all in others. Surprise! It was postulated much earlier that search algorithms do not generate information but merely transform it. This does not really provide deep insights; the only interesting bits were the attempts to quantify the content, in which a rather abysmal job was done. There must be a place for dull papers, too, I guess.
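A small illustration of that averaging claim (my own sketch, not CharonY's or Dembski's): when fitness values are assigned with no structure at all, a greedy hill-climber and a random sampler do equally well for the same number of distinct evaluations.

```python
import random

rng = random.Random(1)
N_BITS = 10        # search space of 2**10 bit strings
BUDGET = 30        # distinct evaluations allowed per run
TRIALS = 3000

def run(strategy):
    """Average best fitness found by a strategy on freshly drawn unstructured landscapes."""
    total = 0.0
    for _ in range(TRIALS):
        values = {}                          # i.i.d. random fitness, drawn lazily per point
        def f(x):
            if x not in values:
                values[x] = rng.random()
            return values[x]
        total += strategy(f)
    return total / TRIALS

def random_search(f):
    points = rng.sample(range(2 ** N_BITS), BUDGET)
    return max(f(x) for x in points)

def hill_climb(f):
    """Greedy bit-flip walk that never re-evaluates a point."""
    seen = set()
    x = rng.randrange(2 ** N_BITS)
    best = f(x)
    seen.add(x)
    while len(seen) < BUDGET:
        neighbours = [x ^ (1 << i) for i in range(N_BITS) if x ^ (1 << i) not in seen]
        y = rng.choice(neighbours) if neighbours else rng.choice(
            [p for p in range(2 ** N_BITS) if p not in seen])
        seen.add(y)
        if f(y) >= f(x):
            x = y
        best = max(best, f(y))
    return best

print(f"hill-climber : {run(hill_climb):.3f}")
print(f"random search: {run(random_search):.3f}")   # near-identical on unstructured landscapes
```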

 

Now what he tries to do (not in peer-reviewed publications from what I can see) is to apply this to evolution and here is where he fails. Even disregarding the problems of trying to model evolution as a search, Dembski overlooks some very basic facts.

The single most important element is that evolution does not work on all possible search spaces but in specific, ordered ones. Essentially, it works in a landscape in which each search step informs upon the structure of the landscape. Translated into a biological equivalent, this particular search works if the survival rate informs upon the fitness (i.e. offspring with variation and natural selection). Of course, nature has much more up her sleeve, but we shall leave it at that in order to keep things simple. However, another important aspect of evolution is that (again, if we want to discuss it as a form of search) it is not a static search (Dembski mostly discusses static searches). The search is constantly changing by incorporating information about the current landscape. Thus you do not need to put the information into the algorithm beforehand, but can let it learn by reading the landscape it walks over.
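By contrast, here is a sketch (again my own, with an arbitrary "count the 1-bits" landscape standing in for a structured one) of what happens when neighbouring points have correlated fitness, so that each evaluated point informs on its neighbourhood: the same kind of greedy walk now clearly beats random sampling for the same budget.

```python
import random

rng = random.Random(2)
N_BITS = 10
BUDGET = 30
TRIALS = 3000

def smooth_fitness(x):
    """Structured landscape: fitness is the fraction of 1-bits, so neighbours have similar values."""
    return bin(x).count("1") / N_BITS

def hill_climb():
    x = rng.randrange(2 ** N_BITS)
    best = smooth_fitness(x)
    for _ in range(BUDGET - 1):
        y = x ^ (1 << rng.randrange(N_BITS))        # flip one random bit
        if smooth_fitness(y) >= smooth_fitness(x):  # the local comparison guides the next step
            x = y
        best = max(best, smooth_fitness(y))
    return best

def random_search():
    return max(smooth_fitness(rng.randrange(2 ** N_BITS)) for _ in range(BUDGET))

hc = sum(hill_climb() for _ in range(TRIALS)) / TRIALS
rs = sum(random_search() for _ in range(TRIALS)) / TRIALS
print(f"hill-climber : {hc:.3f}")   # typically noticeably higher: the walk climbs the correlated landscape
print(f"random search: {rs:.3f}")   # lower for the same number of evaluations
```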

Edited by CharonY

Cypress, from what I see your objections are basically as follows:

1) The algorithm contains information or "design".

2) Man-made genetic algorithms also contain optimizations.

3) Genetic algorithms use some functioning components as their ingredients.

 

And furthermore, an acknowledgment that the existence of such an algorithm essentially guarantees the results.

4) The algorithm by its nature guarantees success.

 

Please tell me if I missed any.

 

My answers:

1) Yes, genetic algorithms contain information. So does evolution. In any case, it doesn't matter where the information comes from, only that it is there, and once the information is there the algorithm will function regardless of its source.

 

2) Optimizations are not necessary, only useful for reducing the amount of computational power needed. The same or similar results can be gotten without the optimizations by simply taking much longer.

 

3) Evolution presupposes functioning life, and life also has some functioning components it can rearrange or modify.

 

As for 4), I find it intriguing how you can simultaneously hold that the result of genetic algorithms is inevitable and yet deny that the original one can function.


Even disregarding the problems of trying to model evolution as a search, Dembski overlooks some very basic facts.

The single most important element is that evolution does not work on all possible search spaces but in specific, ordered ones. Essentially, it works in a landscape in which each search step informs upon the structure of the landscape. Translated into a biological equivalent, this particular search works if the survival rate informs upon the fitness (i.e. offspring with variation and natural selection).

 

I don't see how this observation is relevant to the discussion. Help me understand. I do see how the authors previously addressed it. In genetic algorithms, the designer exploits this characteristic in specifying the contrived fitness function to guarantee pathways with selectable characteristics for each step. The researchers show how this constitutes imported active information.

 

Of course, nature has much more up her sleeve, but we shall leave it at that in order to keep things simple. However, another important aspect of evolution is that (again, if we want to discuss it as a form of search) it is not a static search (Dembski mostly discusses static searches). The search is constantly changing by incorporating information about the current landscape. Thus you do not need to put the information into the algorithm beforehand, but can let it learn by reading the landscape it walks over.

 

I have no doubt that one can devise a vast narrative of how natural selection might work and what it might accomplish, but can you show that these presumed evolutionary pathways are real? Can you offer even one example of an actual contiguous multiple-step pathway that generates functional information faster than a blind search? Can you even provide an example of more than five steps?


Cypress, from what I see your objections are basically as follows:

1) The algorithm contains information or "design".

2) Man-made genetic algorithms also contain optimizations.

3) Genetic algorithms use some functioning components as their ingredients.

 

And furthermore, an acknowledgment that the existence of such an algorithm essentially guarantees the results.

4) The algorithm by its nature guarantees success.

 

Please tell me if I missed any.

 

I don't think these accurately summarize my observations (not objections) about genetic algorithms. Please re-read what I said and let me know what you wish me to clarify.

 

As for 4), I find it intriguing how you can simultaneously hold that the result of genetic algorithms is inevitable and yet deny that the original one can function.

 

Human designers are pretty good at tweaking their designs to demonstrate the purpose they intend. You have mischaracterized my words. I note that natural selection has not been shown to function as predicted. I suspect there is a different process involved.


1) Yes, genetic algorithms contain information. So does evolution. In any case, it doesn't matter where the information comes from, only that it is there, and once the information is there the algorithm will function regardless of its source.

 

It makes a great deal of difference because the difference is between the product of a mind and a product of material processes. Should we conclude that if evolutionary processes do generate functional information it is because a mind infused information into the process at the beginning?

 

3) Evolution presupposes functioning life, and life also has some functioning components it can rearrange or modify.

 

Perhaps so, but then only Stylus is a valid example of an evolutionary algorithm because only Stylus begins with a functional system. But when you adjust the fitness function and mutation steps to be realistic, it only generates adaptations and never generates a novel system.


It makes a great deal of difference because the difference is between the product of a mind and a product of material processes.

 

As far as I know, the laws of nature and of math function exactly the same on the same item, and do not check what the source of the item was. If a computer identical to the ones humans make were to appear due to some freak accident, it would function identically to the one made by humans. A rock made by humans functions the same as a rock made by natural processes, etc. I see no reason why it should matter where an item came from -- it will function the same.

 

Should we conclude that if evolutionary processes do generate functional information it is because a mind infused information into the process at the beginning?

 

You could instead conclude:

1) Information can come from non-intelligent processes

2) Genetic algorithms are intelligent

 

Perhaps so, but then only Stylus is a valid example of an evolutionary algorithm because only Stylus begins with a functional system. But when you adjust the fitness function and mutation steps to be realistic, it only generates adaptations and never generates a novel system.

 

You treat functional as if it were a binary descriptor. Some things are more functional than others, some so poor that they can barely be called functional.


 

You could instead conclude:

1) Information can come from non-intelligent processes

2) Genetic algorithms are intelligent

 

I don't see how either of these conclusions is justified. Uniform experience argues against both conclusions.

 

You treat functional as if it were a binary descriptor. Some things are more functional than others, some so poor that they can barely be called functional.

 

I don't think I do. Poor function is most often due to a break in or damage to the design. Sometimes it is due to poor design.


I don't see how either of these conclusions is justified. Uniform experience argues against both conclusions.

 

The algorithm of evolution is a type of genetic algorithm, but it is based on the laws of nature. Since there is no evidence of intelligent entities ever making laws of nature, there is no reason to expect the algorithm of evolution to have been designed. There is, however, evidence of algorithms that demonstrate intelligence.

 

I don't think I do. Poor function is most often due to a break in or damage to the design. Sometimes it is due to poor design.

 

In disagreeing with me you proved my point. You cannot use "poor" as a modifier for a binary state. "That's a poor 1" and "That's a very good zero" don't make sense. So if function is binary, how can some things function better than others?


Cypress, from what I see your objections are basically as follows:

1) The algorithm contains information or "design".

2) Man-made genetic algorithms also contain optimizations.

3) Genetic algorithms use some functioning components as their ingredients.

 

And furthermore, an acknowledgment that the existence of such an algorithm essentially guarantees the results.

4) The algorithm by its nature guarantees success.

 

Please tell me if I missed any.

 

I think I'll try to improve my response to your request. It looks as if some people do not think I addressed it adequately previously.

 

1) Genetic algorithms that are successful at increasing functional information faster than a blind search use imported or active information to succeed. This active information is inserted by the designer of the algorithm and without this information, the algorithm would do no better than a blind search.

 

5) Designed algorithms include specifically selected fitness functions and an artificial definition of what it means to be more fit that are coordinated with the mutation step distance to ensure smooth contiguous evolutionary pathways from any beginning point to the predefined target.

 

6) Designed algorithms reward intermediate results that are not necessarily functional and thus create artificial pathways that shortcut or bridge what would otherwise represent gaps in functional pathways. They can do this because the designer has foresight into what direction leads to success. The arbitrary definition of fitness that does not require function ensures that all possible combinations can be considered functional and thus ensures that the fitness function contains a multitude of artificial pathways to success.

 

7) Designed algorithms have a carefully defined goal/target that avoids localized intermediate optimums.

 

Are you suggesting that natural selection or any other natural process shares these attributes? If so, what evidence suggests this is true?
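For what the contrast in points 5-7 amounts to in practice, here is a hypothetical sketch of my own (not from Marks and Dembski): the same mutate-and-select loop run against a fitness function that gives partial credit for non-functional intermediates, and against one that rewards only the finished target; the target, bit-string encoding, and budget are arbitrary illustration choices.

```python
import random

rng = random.Random(7)
N = 24
TARGET = "1" * N                      # hypothetical 24-bit target "function"

def partial_credit(s):
    """Rewards non-functional intermediates: credit for every matching bit."""
    return sum(a == b for a, b in zip(s, TARGET))

def all_or_nothing(s):
    """Rewards only the finished target: no gradient for selection to follow."""
    return 1 if s == TARGET else 0

def evolve(fitness, max_evals=100_000):
    """Single-bit mutation; keep the child whenever it is at least as fit."""
    s = "".join(rng.choice("01") for _ in range(N))
    for evals in range(1, max_evals + 1):
        i = rng.randrange(N)
        child = s[:i] + ("1" if s[i] == "0" else "0") + s[i + 1:]
        if fitness(child) >= fitness(s):
            s = child
        if s == TARGET:
            return evals
    return None

print("partial credit :", evolve(partial_credit))   # typically on the order of a hundred mutations
print("all or nothing :", evolve(all_or_nothing))   # virtually always None within the budget
```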


I think I'll try to improve my response to your request. It looks as if some people do not think I addressed it adequately previously.

 

1) Genetic algorithms that are successful at increasing functional information faster than a blind search use imported or active information to succeed. This active information is inserted by the designer of the algorithm and without this information, the algorithm would do no better than a blind search.

 

Indeed, the fitness function is a necessary component of a genetic algorithm. Evolution has a fitness function.

 

5) Designed algorithms include specifically selected fitness functions and an artificial definition of what it means to be more fit that are coordinated with the mutation step distance to ensure smooth contiguous evolutionary pathways from any beginning point to the predefined target.

 

6) Designed algorithms reward intermediate results that are not necessarily functional and thus create artificial pathways that shortcut or bridge what would otherwise represent gaps in functional pathways. They can do this because the designer has foresight into what direction leads to success. The arbitrary definition of fitness that does not require function ensures that all possible combinations can be considered functional and thus ensures that the fitness function contains a multitude of artificial pathways to success.

 

7) Designed algorithms have a carefully defined goal/target that avoids localized intermediate optimums.

 

Are you suggesting that natural selection or any other natural process shares these attributes? If so, what evidence suggests this is true?

 

These are optimizations, and not strictly necessary. They limit the scope of the search and, if done properly, will produce results faster. On the other hand, it means any solutions that would include different intermediate steps are not included.

 

Evolution doesn't need these intermediary steps in the algorithm because it already starts with all the intermediate steps it needs to have a functional organism. The above criticism would work for abiogenesis theories, but note that these theories have proposed intermediate steps as well.


Indeed, the fitness function is a necessary component of a genetic algorithm. Evolution has a fitness function.

 

Successful genetic algorithms have very specific fitness functions that are required in order for the algorithm to succeed, as I explained. Are you claiming that natural selection includes a very specific fitness function of the same character? If so, do you have evidence that your claim is correct?

 

These are optimizations, and not strictly necessary. They limit the scope of the search and, if done properly, will produce results faster. On the other hand, it means any solutions that would include different intermediate steps are not included.

 

In the case of genetic algorithms they seem to be more than simply optimizations; they seem to be necessary, in that when they are removed the algorithms fail to converge on a solution faster than a blind search. Can you tell us of an example that succeeds without these features?

 

Evolution doesn't need these intermediary steps in the algorithm because it already starts with all the intermediate steps it needs to have a functional organism.

 

Evolutionary algorithms that begin with a functional system and don't contain the attributes described in 5-7, or the optimizers previously described, don't generate novel function faster than a blind search. Perhaps you have an observed, confirmed example of natural selection generating novel function faster than a blind search. If so I would like to hear of it. I know of the generation of nylonase, but both known cases involved a single-step change easily attained by a blind search.

 

The above criticism would work for abiogenesis theories, but note that these theories have proposed intermediate steps as well.

 

Proposals are little more than interesting.


Successful genetic algorithms have very specific fitness functions that are required in order for the algorithm to succeed, as I explained. Are you claiming that natural selection includes a very specific fitness function of the same character? If so, do you have evidence that your claim is correct?

 

Nobody cares whether you think it works; the fact is that evolution can generate information. Just to give you a simple and guaranteed example, make a point mutation such that it disables an important but non-vital gene. Evolution will rather quickly revert it back to the functional state. This can only happen because natural selection can recognize the more functional protein as better than the less functional protein. What this means in your own little world I don't know, but to me it means the fitness function works just fine.
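A toy simulation of that reversion scenario, under assumptions of my own choosing (the population size, back-mutation rate, and 10% fitness advantage are illustrative values, not measurements): selection sweeps the restored allele to fixation within a few hundred generations, instead of the open-ended wait a purely blind search over sequences would require.

```python
import random

rng = random.Random(3)
POP = 1000                       # assumed population size
MU = 1e-3                        # assumed chance per individual per generation of reverting the broken site
FITNESS = {0: 0.9, 1: 1.0}       # 0 = broken gene, 1 = functional (assumed 10% advantage)

pop = [0] * POP                  # everyone starts with the broken allele
for generation in range(1, 10_000):
    # reproduction: parents are chosen in proportion to fitness
    weights = [FITNESS[allele] for allele in pop]
    pop = rng.choices(pop, weights=weights, k=POP)
    # back-mutation occasionally restores the functional allele
    pop = [1 if (allele == 0 and rng.random() < MU) else allele for allele in pop]
    if sum(pop) > 0.99 * POP:
        print(f"functional allele restored and fixed by generation {generation}")
        break
```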

 

In the case of genetic algorithms they seem to be more than simply optimizations; they seem to be necessary, in that when they are removed the algorithms fail to converge on a solution faster than a blind search. Can you tell us of an example that succeeds without these features?

 

 

 

Evolutionary algorithms that begin with a functional system and don't contain the attributes described in 5-7, or the optimizers previously described, don't generate novel function faster than a blind search. Perhaps you have an observed, confirmed example of natural selection generating novel function faster than a blind search. If so I would like to hear of it. I know of the generation of nylonase, but both known cases involved a single-step change easily attained by a blind search.

 

I'm not sure what your obsession is with a blind search. Nevertheless, my above example shows a genetic algorithm (evolution itself) successfully finding an optimization faster than by blind search.

 

Proposals are little more than interesting.

 

Yet more interesting than non-proposals such as Intelligent Design.


Nobody cares whether you think it works; the fact is that evolution can generate information. Just to give you a simple and guaranteed example, make a point mutation such that it disables an important but non-vital gene. Evolution will rather quickly revert it back to the functional state. This can only happen because natural selection can recognize the more functional protein as better than the less functional protein. What this means in your own little world I don't know, but to me it means the fitness function works just fine.

 

You have described what I have already acknowledged. Natural selection is capable of generating adaptations by meandering around a pre-existing function. But rediscovering existing function is like plagiarism. It is not generating novel information. Stylus provides a good illustration of this behavior of localized fitness. Generation of novel information involves movement to new function. Do you have any examples of mutation and natural selection deriving new information faster than a blind search or random walk?

 

I'm not sure what your obsession is with a blind search. Nevertheless, my above example shows a genetic algorithm (evolution itself) successfully finding an optimization faster than by blind search.

 

In considering undirected causal modes, information entropy must be figured in. Using information theory, one can demonstrate that information is generated by random processes, on average, at a rate equal to or less than that of pure random processes such as a blind search or random walk, in proportion to the amount of resources available to perform that blind search or random walk.

 

Yet more interesting than non-proposals such as Intelligent Design.

 

It is interesting, though, that genetic engineers have demonstrated that design does generate novel function in biological systems and are rapidly progressing to the point where they will have demonstrated that design is an observed mechanism, in operation today, that does account for life and the diversity of life. In order for natural processes to compete as an alternate explanation, someone will soon have to demonstrate that some observed natural process accomplishes the same. I think there are such processes, but I doubt genetic error with natural selection is one of them. This conversation regarding generation of information is a good illustration of the limits of mutation and selection, because any process capable of deriving the observed diversity must be observed deriving large quantities of novel functional information. The fact that mutation and selection have never been observed to generate large amounts of new information, relative to the opportunities to do so, is telling. The fact that no natural process has ever been observed to generate large quantities of functional information is a big disadvantage for those who advocate for natural processes.

Edited by cypress

You have described what I have already acknowledged. Natural selection is capable of generating adaptations by meandering around a pre-existing function. But rediscovering existing function is like plagiarism. It is not generating novel information. Stylus provides a good illustration of this behavior of localized fitness. Generation of novel information involves movement to new function. Do you have any examples of mutation and natural selection deriving new information faster than a blind search or random walk?

 

My example is an example of evolution creating a new function. The function was not there, and then evolution created it. The reason I used that particular example is that you want an example that will occur within your lifetime.

 

Compare the speed for evolution to create the new function in this example -- a few generations -- to the speed of a blind search (many times the lifetime of the universe), and you see it is much, much faster than a blind search (but less thorough).
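Rough orders of magnitude behind that comparison (my own illustrative figures and assumed rates, not measurements):

```python
# Back-of-envelope only: illustrative assumptions, not measured values.
gene_length = 1000                               # bases in a modest gene
blind_space = 4 ** gene_length                   # every possible sequence of that length
print(f"possible {gene_length}-base sequences: about 10^{len(str(blind_space)) - 1}")

per_site_rate = 1e-9                             # assumed point-mutation rate per site per generation
population = 1e9                                 # assumed microbial population size
revertants_per_generation = per_site_rate * population
print(f"expected revertants of one specific broken site per generation: {revertants_per_generation:.1f}")
```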

 

In considering undirected causal modes, information entropy must be figured in. Using information theory, one can demonstrate that information is generated by random processes, on average, at a rate equal to or less than that of pure random processes such as a blind search or random walk, in proportion to the amount of resources available to perform that blind search or random walk.

 

Information is generated at 1 bit per coin flip.

http://en.wikipedia.org/wiki/Entropy_%28information_theory%29
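For reference, the Shannon entropy behind that "1 bit per coin flip" figure, computed from the standard formula H = -Σ p·log2(p) (the biased-coin case is just an extra illustration of mine):

```python
from math import log2

def shannon_entropy(probs):
    """H(X) = -sum(p * log2(p)) over outcomes with non-zero probability, in bits."""
    return -sum(p * log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))   # 1.0 bit per fair coin flip
print(shannon_entropy([0.9, 0.1]))   # about 0.47 bits for a heavily biased coin
```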

 

It is interesting, though, that genetic engineers have demonstrated that design does generate novel function in biological systems and are rapidly progressing to the point where they will have demonstrated that design is an observed mechanism, in operation today, that does account for life and the diversity of life. In order for natural processes to compete as an alternate explanation, someone will soon have to demonstrate that some observed natural process accomplishes the same. I think there are such processes, but I doubt genetic error with natural selection is one of them. This conversation regarding generation of information is a good illustration of the limits of mutation and selection, because any process capable of deriving the observed diversity must be observed deriving large quantities of novel functional information. The fact that mutation and selection have never been observed to generate large amounts of new information, relative to the opportunities to do so, is telling. The fact that no natural process has ever been observed to generate large quantities of functional information is a big disadvantage for those who advocate for natural processes.

 

So which human created us all, then? If, as you say, only humans have been observed to create information (false, regardless of your modifiers), then what created us must therefore have been human. You are proposing a non-human intelligence with no evidence other than an argument from ignorance, compared to scientists arguing evolution from a common ancestor via known and verified processes. Although evolution is not directly seen at the largest scales (nor expected to be), the evidence that it occurred is in the DNA sequences. I'd go with the option with the evidence.


The assumption of randomness is only a special case when it comes to living systems. One way to explain this is to consider a new deck of cards where the four suits are initially stacked one on top of the other, each in numerical order. To make this deck fully randomized we need to add enough energy via the shuffle machine. If instead we only cut the deck in half and riffle the two halves together once (not enough energy for full randomization), all the odds change.

 

If we assume this single shuffle was a random system (not knowing there was not enough energy for full randomization), certain results that may seem totally improbable would continue to appear, like dealing two players constant straight flushes. Life is as much about order as shuffle, with the deck never fully shuffled. This changes the odds in favor of life.
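A hypothetical simulation of that picture (mine, not pioneer's; the riffle model and run statistic are my own simplifications): cut a new-deck-order pack once, riffle the halves together once, and long ordered runs survive far more often than in a thoroughly shuffled deck.

```python
import random

rng = random.Random(5)
DECK = list(range(52))                 # 0..51 in new-deck order

def one_riffle(deck):
    """Cut roughly in half and interleave the two halves once (a single riffle)."""
    cut = rng.randint(20, 32)
    left, right = deck[:cut], deck[cut:]
    out = []
    while left or right:
        # drop a card from each half with probability proportional to its remaining size
        if rng.random() < len(left) / (len(left) + len(right)):
            out.append(left.pop(0))
        else:
            out.append(right.pop(0))
    return out

def longest_run(deck):
    """Length of the longest stretch of consecutive cards still in original order."""
    best = run = 1
    for a, b in zip(deck, deck[1:]):
        run = run + 1 if b == a + 1 else 1
        best = max(best, run)
    return best

TRIALS = 2000
once = sum(longest_run(one_riffle(DECK[:])) for _ in range(TRIALS)) / TRIALS
full = sum(longest_run(rng.sample(DECK, 52)) for _ in range(TRIALS)) / TRIALS
print(f"longest ordered run after one riffle : {once:.1f}")   # long ordered runs survive
print(f"longest ordered run, fully shuffled  : {full:.1f}")   # typically only 1 or 2
```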

 

Probability has a connection to entropy, with entropy needing energy. Let us start with a six-sided die. The odds of the die work under the assumption that we have enough energy in the toss to fully randomize the odds for all six sides. Say we use less energy, just enough to tip the die 90 degrees but not enough to tip it 180 degrees. Under this level of random-entropy-energy, only five sides of the die have any probability of appearing; the bottom side can't occur. In cells, there are areas of the DNA that are more prone to mutation and other areas that are much less so. The energy for randomization is only sufficient to move the die 90 degrees.

 

Maybe we need to develop the statistics of low-entropy systems to help us predict the best places to look for mutations in organized systems. In our deck of cards that we cut and riffled once, the odds of straight flushes are higher than of two of a kind. This allows cells to evolve in style.

