Information and Evolution. revised 10/23/99
Paul G. Decelles firstname.lastname@example.org
Johnson County Community College
Overland Park Kansas (913) 469-8500 ext 3395
When reading creationist literature one often comes across the argument that mutations and natural selection do not create new information. For example Sarfati (1999), in his book Refuting Evolution, argues that according to Genesis God created different kinds of organisms that ..."reproduced after their own 'kinds'". He further argues that these organisms had all the information (presumably he means genetic information) needed to adapt to different environments, and indeed he seems to accept the idea of natural selection. Further, he claims that the original "kinds" must have had much higher levels of heterozygosity than currently existing kinds and that observed evolution represents a deterioration from originally perfect kinds (p. 33).
Next, after discussing natural selection in a very simplistic sense, he argues that natural selection leads to a loss of genetic variation. Here he conveniently discusses only directional selection and then says:
"Loss of information through mutations, natural selection and genetic drift can sometimes result in different small populations losing such different information that they will no longer interbreed. .For example changes in song or color might result in birds no longer recognizing a mate, so they can no longer interbreed. Thus a new species is formed." (p 37)
Observe that what Sarfati is describing is superficially what biologists know as allopatric speciation. But notice his assumption that this process can only happen by a loss of information.
For instance, when discussing antibiotic resistance in bacteria, Sarfati claims that some bacteria already had the genes for resistance and that when exposed to antibiotics the non-resistant bacteria are killed and the information they carry is lost. Further, even when mutations are involved, he argues that there is still a loss of information. For instance, he claims that the mutation involved in penicillin resistance works by disabling the control mechanism that regulates the production of penicillinase, the enzyme that breaks down penicillin. In natural environments the bacteria produce just enough penicillinase to cope with naturally occurring penicillin, but the breakdown of this control system (due to a mutation) enables the development of resistance to antibiotics at the levels given to patients.
The purpose of this paper is to analyze creationist arguments concerning the ability of evolutionary processes to generate new information. I will first discuss Sarfati's argument and then briefly treat a much more interesting set of arguments, due to Dembski, which relate to a concept called complex specified information. It will be argued that both Sarfati's and Dembski's arguments are based on fundamental misunderstandings about evolution and natural selection, and that the mechanisms of evolution are perfectly capable of generating new types of complex information.
Misunderstanding of Information.
This idea of information loss is critical to Sarfati's arguments related to design (presumably intelligent design) as a viable hypothesis. Yet he clearly does not understand the meaning of the word information. For example, in chapter 9 (p. 120) he claims that specified complexity means high information. He correctly notes that a random sequence can be printed by a short computer program but that printing the works of Shakespeare requires a complex program. However, he is comparing the wrong things. He is comparing a generator that will produce an arbitrary random sequence in the future to the already existing plays of Shakespeare. The correct comparison is between an already existing random sequence of letters and the already existing plays of Shakespeare, and the information needed to specify each. So the question is how complex a program it takes to describe and print an already existing randomly generated sequence versus the plays of Shakespeare. An elementary consideration of the fact that there are regular and repeating patterns in the plays of Shakespeare, because of the relatively regular rules of grammar and spelling in English, means that it is easier to describe the plays than to describe the typical random sequence. In other words, the plays of Shakespeare have redundancy that the random sequence lacks.
In a footnote on page 121 Sarfati makes a fatal error. He defines information as:
I = k*log10(W0/W1)
or as I'll rewrite it:
I = k*(log10(W0) - log10(W1))
Where I is the information content of a message, log10 the log of a number in base 10, W0 the number of possible states for a signal, and W1 the number of possible states that the receiver of the information can be in after the information is received. The constant k (related to Boltzmann's constant) is relevant when discussing entropy but not when we are talking about information content, though obviously the two are related; see Morowitz (1970) for a discussion of this point. Sarfati correctly notes that for repetitive sequences information content is low. He does not state this, but all possible states are assumed to be equally probable within the universe of possible states under consideration.
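As a minimal numeric sketch of this formula (dropping the constant k and assuming a perfect, noiseless channel, so W1 = 1; the function name and values are illustrative):

```python
import math

def information(W0, W1=1):
    """I = log10(W0/W1), the footnote formula with k dropped.
    W1 = 1 models a perfect channel: the receiver can end in only one state."""
    return math.log10(W0 / W1)

# A 20-letter random sequence over the four symbols A, G, C, T:
print(round(information(4 ** 20), 3))  # 12.041

# A maximally noisy channel (W1 = W0) carries no information:
print(information(10, 10))             # 0.0
```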
He claims that for a repetitive sequence the number of possible states is lower than for a random sequence. This is in a sense true, because the rules of generation for such a sequence constrain the universe of possible sequences. But then he makes the claim that for completely random sequences the information content is also low. This is clearly not true! In the case of completely random sequences the form of the sequence is constrained only by the length of the sequence and, of course, the universe of symbols used in the sequence. Thus W0 is going to be larger for random sequences than for repetitive sequences of the same length.
Let's examine two sequences of length 20 involving the four symbols A, G, C, T. The first sequence is made by the following rule: for all positions in the sequence, each even position repeats the preceding odd position. Thus

GGCCAATTGGCCAATTGGCC

is a possible sequence but not:

AGCTAGCTAGCTAGCTAGCT
A little thought shows that in this situation: W0 = 4^10 and I = 10*log10(4) = 6.021
But consider the completely random sequence situation. Again, if our universe of symbols is the same, then the number of possible sequences (W0) is:
W0 = 4^20 and I = 20*log10(4) = 12.041
which is clearly twice the value of I for the non-random sequence.
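The two calculations can be checked directly; a small sketch (alphabet size and sequence length as in the text):

```python
import math

ALPHABET = 4   # symbols A, G, C, T
LENGTH = 20

# Repetitive rule: each even position copies the preceding odd position,
# so only the 10 odd positions are free choices.
W0_repetitive = ALPHABET ** (LENGTH // 2)

# Completely random: all 20 positions are free choices.
W0_random = ALPHABET ** LENGTH

print(round(math.log10(W0_repetitive), 3))  # 6.021
print(round(math.log10(W0_random), 3))      # 12.041
```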
Now let's turn to W1. Sarfati defines W1 as the number of possible states the receiver of the information is in after the information is received. What he is really talking about is the concept of channel noise as opposed to signal. Thus for a perfect channel, the signal will be received perfectly; hence log10(W1) = 0 in this situation. Remember, this is the state of the receiver after the information is sent.
Sarfati claims that for a random signal W1 is almost the same as W0. Not so fast. Perhaps he is thinking that for a receiver of a random signal the channel somehow becomes noisier, but there is no logic in this. Now it is true that for a noisy channel the repetitive signal is easier for the receiver to decode, and hence the receiver may have greater certainty about what the sent signal really was, simply because of the signal's redundancy. In the case of the random signal, the receiver has no way to separate the noise from the signal, but this relates more to the utility of the signal than to its information content.
As Weaver (1975) repeatedly notes, information as defined in information theory should not be confused with meaning. Now what does this have to do with Sarfati's arguments? Two things. First, Sarfati argues that mutations do not increase information. This is just not true. Let's suppose we have my redundant four-symbol sequence

GGCCAATTGGCCAATTGGCC

or any other sequence generated by my rule, and add to it another rule: for every such sequence generated, pick a position at random and then replace the letter at that position with one of the three other letters, selected at random, to represent a mutation. So for the sequence above, the sixth position has an A and we might substitute a T for it at random:

GGCCATTTGGCCAATTGGCC
Now W0 becomes: (4^10)* (20*3)
and I = 6.021 + 1.778 = 7.799
The term (20*3) arises because there are twenty possible positions where the substitution can happen, and since we are assuming only one substitution for simplicity, we must multiply the 20 positions by the three remaining symbols in our symbol universe. Again this is a simplification, as I am assuming that a mutation happens every time the sequence is generated. Clearly mutations are much more rare, and the more general Shannon-Weaver information formula in terms of probabilities must be used. However, the general point about mutations and information still holds.
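A sketch of the calculation, following the text's simplification (one forced substitution per generated sequence, and ignoring the few strings reachable by more than one base-plus-substitution combination):

```python
import math

W0_repetitive = 4 ** 10               # base sequences allowed by the repeat rule
W0_mutated = W0_repetitive * 20 * 3   # times 20 positions, times 3 alternative letters

I_repetitive = math.log10(W0_repetitive)
I_mutated = math.log10(W0_mutated)

# The forced substitution adds log10(60) bits of base-10 information.
print(round(I_mutated - I_repetitive, 3))  # 1.778
print(round(I_mutated, 3))                 # 7.799
```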
Thus if you think of the most basic mutation as being a base substitution then mutations certainly can easily increase information. Of course again we are not distinguishing between useful and not useful information. In biological evolution the utility of a certain mutation is obviously determined by natural selection.
Information and Populations.
Sarfati is on better ground when he discusses information loss in populations. He does correctly note that genetic drift leads to greater homozygosity and thus in some sense less information in the population, but he completely fails to grasp the creative possibilities of mutation when combined with natural selection. I would be curious how he explains the origins of human genetic diversity from an original founding pair, presumably Adam and Eve. If genetic drift is important for small populations then it obviously ought to be important for humans in the same way. Thus most of the heterozygosity in humans, even given his claim that kinds of organisms were created with more variation than they show now, should have been lost by drift before the human population became large!
Sarfati tries to make a case that it is hard for even advantageous mutations to spread through the population. He correctly points out that the probability of survival of a single mutant gene is low given one mutation event (footnote on page 37 of his text). But this is not really the whole story about the maintenance of genetic variation. In the real world, mutations happen at a certain rate per gene locus, so a more relevant parameter is the probability that a certain mutation will be maintained in the population given a certain mutation rate and a certain selection coefficient. This probability is actually one for extremely large populations, though obviously deleterious alleles will not be very common, the equilibrium allele frequency being a function, at minimum, of the balance between mutation rate and selection as well as effective population size.
Increase of Information Capacity in Living Things.
There is one other point that needs to be made. In addition to new information being generated by mutation, the capacity for information in living things can be increased by mechanisms that increase the amount of DNA in an organism's cells. Such mechanisms are well documented. For instance, in plants, chromosome number often doubles when plant species hybridize. See Grant (1971) for a cogent discussion of polyploidy in plants. Once polyploidy happens, the amount of genetic material in the cells of the polyploid individuals is dramatically increased, and the extra copies of the genes are free to undergo their own mutational history separate from those on the other chromosome copies. In both plants and animals, genes at particular loci can duplicate along the length of the chromosome, giving rise to families of related genes that produce related proteins with different functions (Ohta, 1989). The genes related to myoglobin and hemoglobin would seem to be a good example. The ability of organisms to increase their capacity for genetic information will become crucial later on when analyzing Dembski's arguments.
Information and the Design of Organisms.
I want to look at the information problem from the design perspective. Sarfati gives a bone to evolutionists when he says that some mutations may be beneficial. Again, the examples he gives have to do with things like loss of wings in beetles (p. 127). But does loss of information related to a reduction in wings or another structure amount to a total loss of information for the organism? The answer may not be so simple. For instance, consider the ostrich. This is a flightless bird; it still has wings, but they are reduced in size. So I guess Sarfati would say that there has been a loss of information. But look at the rest of the bird: the feathers are used for insulation and display rather than aerodynamics; the neck is elongated; the legs are well adapted for running and quite formidable. Do these features represent a loss of information? I doubt it. Sarfati might protest that I am assuming that the loss of wings arose via evolution. But he opens the door to beneficial mutations even if some of them lead to a loss of information. He cannot have things both ways here.
Conclusion about Sarfati's arguments.
Thus we see that Sarfati's claims about mutation and natural selection reducing information content are not correct, but are based on clear misunderstandings or distortions of very basic information-theoretic concepts and misuse of basic formulas in population genetics. There is, in contrast to what he says, no evidence that evolution consistently leads to a loss of information. Indeed, elementary considerations suggest quite the opposite. Evolution leads to an ebb and flow of information between the information-generating effects of mutation and mechanisms such as polyploidy that increase the information capacity of the individual organism's genome, versus the information-decreasing effects of directional selection and genetic drift.
Complex Specified Information.
In contrast to Sarfati's limited attacks on evolution as a generator of new information, more interesting arguments have been developed from the perspective of intelligent design. For example, Michael Behe rightly focuses on structures that exhibit what he calls irreducible complexity. Such structures include certain cyclic metabolic pathways and complex cellular machines such as the bacterial flagellum. The idea is that just as a mousetrap will not work unless all its parts are in place, so these complex cellular machines need all their parts in place to operate. If these structures really are irreducibly complex, then only an intelligent designer can put them together, just as it takes an intelligent designer to put together a mousetrap.
Related to Behe's concept is Dembski's Complex Specified Information (CSI) concept, Dembski (1998). By complex specified information Dembski means information that is complex and patterned, and specified in that the pattern giving rise to the information can be described independently of the information rather than just read off from it. The analogy Dembski uses is an archer shooting arrows at a barn wall. In the first case the archer is shooting arrows at random, and even though the probability of an arrow hitting a particular spot is small, there is no reason to prefer one hit on the wall to another. So even though information is actualized or realized each time an arrow is shot, the outcome does not correspond to a pattern but is strictly due to chance. Thus this information is unspecified.
On the other hand if someone first paints a target and then aims at the pattern then we can get specific information about the archer's ability. Thus this information is specified. Examples of complex specified information would be the sonnets of Shakespeare, this paper, a television transmission, machines and according to Dembski, living things. The key distinction between specified information and unspecified information is the existence of an independent pattern that can be recognized independent of the actualized outcome. Such patterns are considered by Dembski to be good patterns as opposed to bad patterns or fabrications which are constructed from the outcome. Thus in the archer's case, if the archer shoots an arrow and then paints the target around the arrow's point of impact, then the target in this case is a fabrication.
Unspecified information is information for which the actualization of an outcome cannot be predicted by a pattern. Dembski notes that unspecified information can be turned into specified information. So a message in code is unspecified to someone who receives the message until the code is cracked. Another of Dembski's examples is that Chinese is unintelligible to a non-Chinese speaker and is thus unspecified; note, though, that to a Chinese speaker the information is specified. This gives the concept of specified information a very subjective feel. Of course this is not an alien idea in science, as in quantum mechanics, where the experimenter alters the outcome of the experiment simply by observing it.
Recall the sequence I mentioned earlier when discussing Sarfati:

GGCCAATTGGCCAATTGGCC

I argued that making random changes in this information generates new information, and this is true. However, Dembski argues that this type of information is meaningless because there is no pattern that can be used, independently of the information, to describe it or give it meaning.
Complex specified information is what Dembski claims we are generally interested in. This information is complex because it takes a lot of information to specify it. For instance, a 12-digit combination to a lock is hard to break because it takes a lot of information to specify the sequence of numbers. Presumably the information is specified in that the lock designer has built the pattern for the combination into the tumblers of the lock.
The Explanatory Filter.
Dembski (1996) argues that intelligent design of complex specified information can be recognized in terms of a threefold "explanatory filter":
1. Can the origin of the information be explained by a "law"? If yes look no further.
2. Can Chance explain it? If yes look no further.
3. Does design explain it?
He claims, and makes a pretty good case, that this or some related procedure is how we as intelligent creatures recognize intelligent design.
He provides two arguments about the reasons this filter is powerful:
The first argument is that in all places where the filter attributes design, design is actually present. He does not justify this, but presumably he is referring to how we distinguish human-designed artifacts and transmissions from other patterns. At least in the literature I have seen, he does not document this sufficiently to support an inductive statement.
Secondly, he argues that the filter corresponds to how we recognize whether or not something is intelligently designed. I am not convinced of the utility of the filter for detecting intelligent design in biological systems, because it's not clear that one could reasonably be expected to get past the first stage of the filter.
For example, Michael Behe, as mentioned earlier, argues that certain basic cellular structures are irreducibly complex. In other words, the parts work together as an integrated whole. Dembski contrasts this type of complexity with what he calls cumulative complexity:
"A system is cumulatively complex if the components of the system can be arranged sequentially so that successive removal of components never leads to complete loss of function."
He then claims that natural selection can only lead to cumulative complexity, never irreducible complexity. He quotes Behe as saying that since natural selection can only choose systems that are working, then if a biological system has to work as a unit then there is nothing for natural selection to operate on.
The problem with Dembski's and Behe's argument is that they assume that the components of biological systems were first selected or developed in the context we see them in today. For example, Behe considers cilia and flagella to be examples of irreducibly complex structures, and yet even the slightest familiarity with the eukaryotic cell (Behe is after all a biochemist) should lead him to the conclusion that many of the components of cilia are present in other cells, including those without cilia, where they serve different functions.
Another example of irreducible complexity cited by Behe is the bacterial flagellum. This structure is unusual because it is one of two places in the living world where one finds a wheel. Behe makes a big point of the fact that the wheel's rotation is powered not by ATP but by proton pumps, which in turn are powered by electron transport. However, electron transport and proton pumps are widespread, found for instance in the mitochondria of our cells where they serve to make ATP. I suggest that irreducible complexity is impossible to demonstrate and that apparent cases of irreducible complexity are artifacts of our ignorance.
So here we see a set of situations where Dembski's explanatory filter does not convincingly get past the first criterion.
The Law of Conservation of Information.
Dembski's arguments concerning evolution are complex, involving such issues as computation theory and how one might recognize whether complex specified information is due to natural processes or intelligent design. Quite frankly, I think this is an exciting prospect, but I am not optimistic that a direct test of intelligent design versus design by "chance and necessity" is truly possible. For instance, Dembski himself notes that an intelligent agent could fake complex unspecified information. Presumably such an agent could also use fake patterns (called by Dembski fabrications) to fake complex specified information caused by natural laws and chance (e.g. evolution).
Dembski's arguments seem to hinge on what he calls the law of conservation of information. He argues in the following way. First, chance cannot generate specified complex information. Second, "necessity", presumably meaning deterministic processes, cannot generate specified complex information either.
In information-theoretic terms, a deterministic process generating new specified information would mean:
I(A&B) = I(A) + I(B|A), I(B|A) > 0
For a deterministic process, since result B necessarily follows A, Pr(B|A) = 1 and log2(1) = 0. Thus no new information is added by considering B. Dembski argues that combining chance and necessity does not help, since you can always put a chance process and a necessary process in sequence.
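The deterministic case can be checked numerically; a minimal sketch (the probabilities are illustrative):

```python
import math

def conditional_information(p_b_given_a):
    """I(B|A) = log2(1 / Pr(B|A)): the extra bits B carries once A is known."""
    return math.log2(1 / p_b_given_a)

# Deterministic step: B always follows A, so Pr(B|A) = 1 and I(B|A) = 0.
print(conditional_information(1.0))    # 0.0

# Chance step: B is one of 8 equally likely outcomes given A.
print(conditional_information(1 / 8))  # 3.0
```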
Thus he concludes that natural processes cannot generate new complex specified information. This is his law of conservation of information:
1. The complex specified information(CSI) in a closed system of natural causes is constant or decreases
2. CSI cannot be generated spontaneously, originate endogenously or organize itself.
3. The CSI of closed systems of natural causes was either in the system eternally or added from the outside before the system was closed.
4. In particular any closed system of natural causes that is also of finite duration received whatever CSI it contains before it became a closed system.
Finally, he argues that only intelligent agents can generate CSI, which implies design for living things. This is a bold claim, obviously at the heart of the intelligent design argument, and worthy of careful analysis by evolutionary biologists. A detailed analysis is outside the scope of this paper, but a few remarks in rebuttal are in order.
Rebuttal to Dembski.
The first problem with Dembski's idea is that specified information is a bit subjective. Dembski notes several times that unspecified information may become specified as in the case of a coded message. Once the code is broken, the information becomes specified to us. Indeed, Dembski argues that the pattern may be given after the possibility has been actualized:
"This is certainly the case with the origin of life; life originates first and only afterwards do pattern forming rational agents(like ourselves) enter the scene. It remains the case however that a pattern corresponding to a possibility though formulated after the possibility has been actualized, can constitute a specification."
Of course this fits in very nicely with Dembski's law of conservation of information, since we (the rational agents) are generating the information. However, this presents a logical problem, and that problem concerns the patterns. Presumably, the patterns involved in the specification can exist before the specification. Thus the genetic code existed before we deciphered it, even though it did not become specified until it was "cracked". The patterns in the fossil record existed before we understood their meaning, and bacterial flagella operated before we were around to crack the patterns involved in their operation.
Secondly, his law of conservation of information is flawed and betrays a poor knowledge of natural selection and mutation as agents of change, including the generation of new complex specified information. Here again the problem is with the patterns. Recall that Dembski argues that chance and necessity cannot generate complex specified information. Consider the following situation:
An abstract population consists of a series of binary strings of length n. Suppose these strings can duplicate themselves and that the reproductive success of each kind of string varies relative to the other strings, leading to a pattern of fitnesses W1,...,Wj, where j is the number of possible strings. Let's suppose there is one Wi which is greater than any of the others. Then eventually all the strings should end up being whatever string has that fitness; we have eliminated the other possibilities. This is very much like what is done in the laboratory, either to study natural selection of phenotypes in populations of organisms or, more recently, to study possible evolutionary mechanisms in hypothetical prebiotic worlds. For a review of the latter research with respect to the so-called "RNA world" see Landweber, Simon and Wagner (1998).
If we consider the information in A to be the population information then I(A) = -log2(1/j).
Following Dembski's reasoning then I(B|A) = 0.
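The fixation of the fittest string can be sketched with deterministic replicator dynamics; the string length, fitness function, and generation count here are illustrative choices, not part of Dembski's example:

```python
# All binary strings of length 4; fitness = number of 1s, so "1111" is uniquely fittest.
N = 4
strings = [format(i, f"0{N}b") for i in range(2 ** N)]
fitness = {s: s.count("1") for s in strings}

# Start from uniform frequencies and apply selection: p_i' = p_i * w_i / mean(w).
freqs = {s: 1 / len(strings) for s in strings}
for _ in range(50):
    mean_w = sum(freqs[s] * fitness[s] for s in strings)
    freqs = {s: freqs[s] * fitness[s] / mean_w for s in strings}

# Selection has all but eliminated the alternative possibilities.
print(max(freqs, key=freqs.get))  # 1111
```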
This is basically a distillation of Dembski's arguments about natural selection and, related to that, computer programs based on genetic algorithms. While discussing an illustration by Dawkins concerning a genetic or evolutionary algorithm, Dembski (1999) notes that such programs, and presumably natural selection, cannot generate complex information because, since the algorithm operates in a deterministic manner, there is only one outcome. Thus, since the other outcomes are excluded, the complexity of the outcome is reduced, because it is really the only possibility. In contrast, a lock combination is complex because the more complex the combination, the less likely it is to be opened by chance.
But let's analyze this argument more closely.
The evolutionary program in question took a random sequence of letters and spaces and, whenever a letter matched the letter in the same position in a target sequence, left that letter in place and randomly scrambled the remaining letters. The target sequence was:
METHINKS IT IS LIKE A WEASEL
From Dembski's position he is right; the program does reduce complexity, since it reduces the possible alternate states. But suppose the sequence was used as a combination for a lock and a monkey was trying to open the lock. On average, the monkey would take just as many tries as it would with the original random sequence! Thus, as measured from the monkey's point of view, the sequence is complex. Further, Dawkins could easily have specified a purely random sequence as a target, in which case the final sequence would be equally complex to any human observer not privy to the target. Thus, Dawkins's program does not decrease the complexity of the sequence as measured against the original possibilities.
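The program as described above (lock in matching letters, rescramble the rest) can be sketched directly; note this follows the text's simplified description rather than Dawkins's actual mutation-and-selection program, and the seed is arbitrary:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

random.seed(0)  # fixed seed for reproducibility
current = [random.choice(ALPHABET) for _ in TARGET]

generations = 0
while "".join(current) != TARGET:
    for i, letter in enumerate(TARGET):
        if current[i] != letter:                  # matching letters are locked in;
            current[i] = random.choice(ALPHABET)  # the rest are rescrambled
    generations += 1

print("".join(current))  # METHINKS IT IS LIKE A WEASEL
print(generations)       # converges after a modest number of generations
```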
More directly, Dembski's conservation of information law assumes a specified universe of specified patterns and possibilities required to create specified complex information. For example, Dawkins's evolutionary program has a specified deterministic universe and a specified outcome. But what happens when the universe is not clear cut? Then it seems that chance and necessity can produce complex specified information. Let's examine a counterexample to Dembski's claim.
Suppose we take our random series of the four letters A, T, G, C from earlier in this paper. Use Dawkins's algorithm to begin convergence to a randomly selected target pattern. At a set iteration in the program's convergence, duplicate the string, doubling its length, and begin convergence to another pattern consisting of the first pattern with a random second pattern grafted onto it.
What have we done? Notice we have a pattern. Notice we have complexity, at least at the start, and notice we have two types of random events: one doubling the string length and the second selecting a new target pattern. The effect is that the final pattern operates on a string which has additional complexity that the first string did not have. The result is a more complex string that has only half of its information in common with the string that would have been selected by the first pattern. Since the second pattern, once selected, is also completely specified, the second string's pattern is also specified. Plus, the second string is more complex than the first string by virtue of the fact that it is longer. Therefore Dembski's law is violated.
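A sketch of this counterexample (for simplicity, each convergence runs to completion rather than stopping at a set iteration; the lengths and seed are arbitrary):

```python
import random

ALPHABET = "AGCT"
random.seed(2)

def converge(s, target):
    """Weasel-style convergence: lock matching letters, rescramble the rest."""
    s = list(s)
    while s != list(target):
        for i, letter in enumerate(target):
            if s[i] != letter:
                s[i] = random.choice(ALPHABET)
    return "".join(s)

def random_seq(n):
    return "".join(random.choice(ALPHABET) for _ in range(n))

first_target = random_seq(20)
s = converge(random_seq(20), first_target)

s = s + s                                    # chance event 1: duplicate the string
second_target = first_target + random_seq(20)  # chance event 2: graft on a new pattern
s = converge(s, second_target)

print(len(s))              # 40: the information capacity has doubled
print(s == second_target)  # True: the longer string is fully specified
```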
The example I selected, though somewhat contrived, is not completely artificial. It is analogous to what happens in evolution: namely, that the evolution of new types of enzymes, body structures and so on depends on the ability of living systems to duplicate components, which are then free to evolve in response to patterns in the environment which we as intelligent creatures can in theory read. For example, in plants polyploidy is known to be a common phenomenon, leading to the evolution of new species and able to lead to new complexity. In both plants and animals, gene duplications happen, which in turn lead to groups of enzymes or other proteins with common ancestors but different functions. The hemoglobin/myoglobin gene family is a prominent case in point, and there are many others. Indeed, Nadeau and Sankoff (1997) argue that the effect of gene duplication is to increase the number of different mutations that can safely be accumulated by the genes, allowing more opportunity for natural selection to act on this increased variation. This is not unlike my simple-minded example. Natural processes can violate Dembski's so-called conservation of information law.
Dembski's research program is very interesting and raises a number of important questions for biologists to consider. Unfortunately, he does not always ask the right questions. For instance, his emphasis is on intelligent agents acting through choice. Choice is important in patterning complex specified information, but it is not the only thing. Increasing the information capacity of a complex specified sequence allows new specification patterns to operate on the new combined sequence. Further, I submit that the types of duplications we see in nature allow the opportunity for new patterns to operate on the genetic material. There is really nothing new in this, but as biologists we tend to forget the importance of what amounts to adding information capacity to the phenomenon of increasing the specified complexity of the living world. If Dembski's program refocuses biology on duplications, polyploidy and other mechanisms that increase the information capacity of the genetic material as important stochastic forces in evolution, then it will have served a useful purpose, even if the attempt to demonstrate intelligent design fails, as I predict it will.
I am not implying in my remarks that scientists have all the answers or that science can answer basic questions about the origin of life. The best science can probably do, and I might very well be shown wrong here, is provide possible scenarios consistent with natural laws. But our ignorance is no reason to invoke creationism or intelligent design, because once that is done scientific reasoning becomes open to ad hoc hypothesizing forever outside the logical confines of natural law. For instance, evolutionists from Thomas Huxley to Dawkins have pointed out the existence of what we might consider evil and suffering in the natural world, to which theists ultimately reply that God's ways are not our ways (Isaiah 55:8-9). As a theist who views God as permeating every aspect of my life, I am highly sympathetic to these arguments and ultimately sympathetic to the goals of the intelligent design movement. Unfortunately, I see nothing yet in the intelligent design movement, just as I see nothing in the unsanitized versions of creationism, to lead me to conclude, or even suspect from a modern scientific perspective, that intelligent design is necessary to explain the origin and development of the living world.
Dembski, W. A. (1996) The Explanatory Filter. Access Research Network, Premier Publications, Colorado Springs, CO. http://www.arn.org/docs/Dembski/wd_expfilter.htm
Dembski, W. A. (1998) Intelligent Design as a Theory of Information. Access Research Network, Premier Publications, Colorado Springs, CO. http://www.arn.org/docs/Dembski/WD_idtheory.htm
Dembski, W. A. (1998b) Science and Design. First Things 86:21-27.
Grant, Verne (1971) Plant Speciation. Columbia University Press, NY.
Landweber, Laura; Simon, Peter J. and Thor A. Wagner (1998) Ribozyme engineering and early evolution: combinatorial chemistry provides a simple tool for the study of nucleic acid catalysts and prebiotic evolution. BioScience 48(2):94-104.
Morowitz, Harold (1970) Entropy for Biologists. Academic Press, NY. xiii + 195 pp.
Nadeau, J. H. and D. Sankoff (1997) Comparable rates of gene loss and functional divergence after genome duplications early in vertebrate evolution. Genetics 147(3):1259-66.
Ohta, T. (1989) Role of gene duplication in evolution. Genome 31(1):304-10.
Sarfati, Jonathan (1999) Refuting Evolution. Answers in Genesis, Brisbane, AU. 143 pp.
Weaver, Warren (1975) Recent Contributions to The Mathematical Theory of Communication. In: Shannon, Claude and Warren Weaver, The Mathematical Theory of Communication. University of Illinois Press, Urbana, Illinois. 125 pp.
1. Exactly what is meant by "kind" is never made clear in a scientific sense; instead it is related to the "kinds" of organisms mentioned in the Bible. However, at the risk of misrepresenting Sarfati's views, I quote Dr. Hovarth from his CSE site:
"God told Noah to bring two of each kind (seven of some), not of each species or variety. Noah had only two of the dog kind which would include the wolves, coyotes, foxes, mutts, etc. The "kind" grouping is probably closer to our modern family division in taxonomy, and would greatly reduce the number of animals on the ark. Animals have diversified into many varieties in the last 4400 years since the Flood. This diversification is not anything similar to great claims that the evolutionists teach. (They teach that "kelp can turn into Kent," given enough time!)"
2. The Shannon-Weaver formulation for information is more appropriate, but it is equivalent to Sarfati's when the messages are all equally probable.
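As a quick check on this equivalence (my own one-line derivation, not taken from either author): if all N messages are equally probable, each p_i = 1/N, and the Shannon-Weaver measure reduces to the log of the number of possibilities:

```latex
H = -\sum_{i=1}^{N} p_i \log_2 p_i
  = -\sum_{i=1}^{N} \frac{1}{N} \log_2 \frac{1}{N}
  = \log_2 N
```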
3. Sarfati uses natural logs. The choice of log base is somewhat arbitrary, but base 2 logs are often used because of the role of information theory in computer science and communications, where binary choices are fundamental. Base 10 logs can easily be converted to logs of any other base a by means of the following formula:
log_a(M) = log_10(M) / log_10(a)
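The change-of-base formula is easy to verify numerically. The following sketch (an illustration of the formula above, with the specific values chosen by me) converts base-10 logs to an arbitrary base and shows that starting from natural logs, as Sarfati does, gives the same answer in bits:

```python
import math

def log_base(M, a):
    """Change of base: log_a(M) = log10(M) / log10(a)."""
    return math.log10(M) / math.log10(a)

# 8 equally likely messages carry log2(8) = 3 bits of information,
# whichever base of logarithm we start from:
print(log_base(8, 2))            # about 3.0 bits, via base-10 logs
print(math.log(8) / math.log(2)) # same answer, via natural logs
```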
It should also be noted that the information content of a message depends on how symbols are grouped and evaluated. From an evolutionary point of view this evaluation depends on natural selection and not on meaning to the human mind.
4. Many readers will realize that my choice of letters was not accidental.
5. With respect to feathers, Sarfati does not seem to understand that feathers can serve several functions on the same bird. He writes on page 66:
"...All that matters is that feathers provide insulation, and hair-like structures work fine - they work for mammals. That is, natural selection would work against the development of a flight feather if the feathers were needed for insulation."
6. It is easy to show that directional selection does not lead to a monotonic decrease in information. Further, other types of selection, such as frequency-dependent selection and balancing selection, lead to intermediate frequencies for alleles. Thus the genetic information contained in the whole population is increased.
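This last point follows directly from the Shannon measure discussed in note 2. The sketch below (my own illustration; the frequencies are arbitrary examples) computes the information per two-allele locus and shows that it is maximal at intermediate allele frequencies, exactly where balancing selection holds a population, and near zero when one allele is close to fixation:

```python
import math

def allele_info(p):
    """Shannon information (bits per locus) for a two-allele locus
    with allele frequencies p and 1 - p."""
    return -sum(f * math.log2(f) for f in (p, 1 - p) if f > 0)

print(allele_info(0.99))  # near fixation: little information in the population
print(allele_info(0.5))   # intermediate frequencies: the maximum, 1 bit
```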
Except for brief quotes and printing one copy for your own reference, this document may not be distributed by any means, electronic or otherwise, without the express permission of the author. Requests to link to this document should be made to the author at email@example.com. The author would also appreciate notification of any sites that contain critical discussion (pro or con) of the material in this document.
Copyright © Paul G. Decelles, September 18 1999. Revised 9/20/99, 10/5/99