Kenneth Miller's Best Arguments

Against Intelligent Design


Sean D. Pitman, M.D.

©May, 2007







       Kenneth Miller (born 1948) is a professor of biology at Brown University. He received his Sc.B. in Biology from Brown University in 1970 and Ph.D. in Biology from the University of Colorado in 1974. His research involved problems of structure and function in biological membranes.

        Although Miller is a devout Catholic, he is an outspoken opponent of creationism as well as of the intelligent design movement.  He has written a popular book on the topic (published in 1999) entitled "Finding Darwin's God: A Scientist's Search for Common Ground Between God and Evolution," in which he argues that belief in God and acceptance of evolution are not mutually exclusive. It is just that belief in God is based on "faith" while acceptance of evolution is based on "science".  

        As an aside, Kenneth Miller does describe himself as a "creationist" in a certain sense because of his belief in God and the position that God has played some part in the development of the universe and in various interactions with mankind. However, Miller says that this belief is independent of science and is based entirely on his religious faith.  How one's faith, independent of any scientific investigation or physical testability, can produce meaningful "truths", such as the notion that God is in any way relevant to anything that happens or has happened in the physical universe, is not quite clear from Miller's book or subsequent statements. In fact, Miller argues that, "Whether God exists or not is not a scientific question" (NOVA Link). This is one of the reasons why those like Richard Dawkins are so frustrated with those who still cling to what he calls "The God Delusion" without any solid, scientifically testable evidence for the very existence of God via his interaction with humans or the physical universe in any way. Given Miller's arguments in this regard, I certainly do sympathize with Dawkins and think that Dawkins is at least being far more rational in his thinking than Miller and other scientists who suggest that science and faith are completely different yet equally valid means of approaching "truth".  Like Dawkins, I fail to see any significant difference between Miller's "faith" and "wishful thinking" - like a child's belief in Santa Claus.

        In any case, Miller has appeared in court as a witness and on panels debating the teaching of intelligent design in schools. In 2002, the Ohio State Board of Education held a public debate between prominent evolutionists, including Miller, and proponents of intelligent design.  He was a witness in Selman v. Cobb County, which tested the legality of stickers, placed on the biology textbook Miller authored, that called evolution a "theory, not a fact". In 2005, the judge ruled that the stickers violated the Establishment Clause of the First Amendment to the United States Constitution; the decision was vacated on appeal, remanded to the lower court, and eventually settled out of court. Miller was also the plaintiffs' lead expert witness in Kitzmiller v. Dover Area School District, which challenged the school board's mandate to incorporate intelligent design into the curriculum. The judge in that case also ruled decisively in favor of the plaintiffs.


Before reading further, it might be most effective to review a very interesting video of Miller's lecture at Case Western Reserve University dealing with this topic:






Irreducible Complexity


        Perhaps one of the biggest challenges to the modern theory of evolution, or at least a challenge that has created a fair bit of discussion within the scientific community, is a concept known as "irreducible complexity" - originally developed by Michael Behe in his 1996 book "Darwin's Black Box".  Behe, professor of biochemistry at Lehigh University, boldly claims that,  "Molecular evolution is not based on scientific authority.  There is no publication in the scientific literature in prestigious journals, specialty journals, or books that describe how molecular evolution of any real, complex, biochemical system either did occur or even might have occurred.  There are assertions that such evolution occurred, but absolutely none are supported by pertinent experiments or calculations." 1  





        Since the publication of Behe's book, a fair bit of controversy has arisen over such statements. Surprisingly, many evolutionary scientists seem to grudgingly agree with Behe, at least in some limited way.  For example, microbiologist James Shapiro of the University of Chicago declared in National Review that, "There are no detailed Darwinian accounts for the evolution of any fundamental biochemical or cellular system, only a variety of wishful speculations" (Shapiro 1996).  In Nature, University of Chicago evolutionary biologist Jerry Coyne noted that, "There is no doubt that the pathways described by Behe are dauntingly complex, and their evolution will be hard to unravel. . . . [W]e may forever be unable to envisage the first proto-pathways" (Coyne 1996).  In Trends in Ecology and Evolution, Tom Cavalier-Smith, an evolutionary biologist at the University of British Columbia, wrote, "For none of the cases mentioned by Behe is there yet a comprehensive and detailed explanation of the probable steps in the evolution of the observed complexity. The problems have indeed been sorely neglected--though Behe repeatedly exaggerates this neglect with such hyperboles as 'an eerie and complete silence'" (Cavalier-Smith 1997).  Evolutionary biologist Andrew Pomiankowski agreed, writing in New Scientist, "Pick up any biochemistry textbook, and you will find perhaps two or three references to evolution. Turn to one of these and you will be lucky to find anything better than 'evolution selects the fittest molecules for their biological function'" (Pomiankowski 1996). In American Scientist, Yale molecular biologist Robert Dorit suggested that, "In a narrow sense, Behe is correct when he argues that we do not yet fully understand the evolution of the flagellar motor or the blood clotting cascade" (Dorit 1997).

        There are many examples of what Behe describes as irreducibly complex biosystems.  However, the most famous of these is likely the bacterial flagellar motility system.  The flagellum is so famous and so commonly used by intelligent design advocates that Miller refers to it as the "poster child" of the intelligent design movement - and rightly so. The flagellar motility system is quite impressive indeed.  Consider that the flagellar system, in particular, requires the services of about 50 genes - including the genes for the sensory apparatus (which turns the flagellum clockwise or counterclockwise at a greater or lesser rate depending on the environment) and the genes needed to code for proteins that assist in building the flagellum (about 40 structural proteins total).  The total number of fairly specified (specifically arranged for minimum function) codons of DNA needed to code for the flagellar motility system, at minimum, is well over 10,000 codons.  That's like a good-sized 2,000-word essay.  Without this minimum in place, in its entirety, the motility function of the flagellum cannot be realized to any useful degree.  In short, when it comes to producing flagellar motility, a sizable minimum structural threshold is required, and this requirement is "irreducible" if one wishes to maintain flagellar motility.  

        It is Behe's argument that such a high-threshold function cannot be built up by evolutionary mechanisms of random mutation and natural selection because it doesn't work at all until all of its many parts are in their proper place at the same time.  How can Nature get all of these parts together gradually? Miller counters by arguing that even highly complex functions, like the flagellum, are not really irreducibly complex since there are various subsets of the parts of such high-level systems that have various uniquely independent functions. Miller argues that if Behe were right, if one took away a part of the flagellum, the resulting structure could have no beneficial function whatsoever.  Therefore, if any beneficial subsystem function can be found, Behe's notion is clearly falsified - i.e., the system isn't really functionally "irreducible".





        So, has Miller actually found a subsystem with a potentially beneficial function?  Obviously, he has - - or I wouldn't be writing this essay.  Miller points out that if not just one or two proteins are removed from the flagellar system, but 30 of the around 40 structural proteins are removed, one would expect, if Behe were right, that what would be left would be as functional as a pile of junk.  Yet, this isn't the case.  Take away 30 or so particular parts of a flagellum and what's left (~10 homologous proteins) is a functionally beneficial toxin injector system known as the Type Three Secretory System (TTSS).

        The TTSS system is actually used by certain kinds of disease-causing bacteria known as gram-negative pathogens that attack plants and animals.  Obviously the TTSS system is quite beneficial to certain types of pathogenic bacteria. It is indeed a true survival/reproductive advantage to those bacteria that have and use it. Therefore, it seems quite reasonable that the TTSS system could be used as a viable stepping stone along the pathway toward the higher-level flagellar motility system.  And presto, Miller has just devastated Behe's notion of irreducible complexity.  This is in fact one of the main points brought up to challenge Behe at the Dover trial.  And, it certainly did seem to convince a great many people, including the presiding judge.  What wasn't presented at the trial, though, or in the recent NOVA report on the trial (aired November 13, 2007), is an interesting question: 

        Assuming Miller's position is correct, which system is likely to have evolved first - - the much simpler TTSS system or the much more complex flagellar motility system?  Given Miller's argument, it seems intuitively obvious that the TTSS system should evolve first, followed by the more complex flagellar system - right?  Of course . . .

        It is strange, then, that the TTSS system is thought to have evolved hundreds of millions of years after flagellar evolution. That's right. Many scientists believe that there is very good evidence to believe that the TTSS system arose from the fully formed flagellum - - not the other way round.  Consider that the bacterial flagellum is found in mesophilic, thermophilic, gram-positive, gram-negative, and spirochete bacteria while TTSS systems are restricted to a few gram-negative bacteria. Not only are TTSS systems restricted to gram-negative bacteria, but also to pathogenic gram-negative bacteria that specifically attack animals and plants . . . which supposedly evolved hundreds of millions of years after flagellar motility had already evolved.  Beyond this, when TTSS genes are found in the chromosomes of bacteria, their GC (guanine/cytosine) content is typically lower than the GC content of the surrounding genome. Given the fact that TTSS genes are commonly found on large virulence plasmids (which can be easily passed around between different bacteria), this is good evidence for horizontal transfer to explain TTSS gene distribution.  Flagellar genes, on the other hand, are usually split into 14 or so operons, they are not found on plasmids, and their GC content is the same as the surrounding genome suggesting that the code for the flagellum has not been spread around by horizontal transfer. Additional evidence for this comes from the fact that the TTSS system shows little homology with any other bacterial transport system (at least 4 major ones). Yet, evolution is supposed to build upon what already exists.  Since the TTSS system is the most complex of the bunch, why didn't it evolve from one of these less complex systems and therefore maintain some higher degree of homology with at least one of them? This evidence suggests that the TTSS system did not exist, nor anything homologous, in the "pre-flagellar era".  
It must therefore have arisen from the fully formed flagellum via the removal of pre-existing parts - and not the other way around. In fact, several scientists have actually started promoting this idea in recent literature.3-8 Subsequently, this has been more directly proven by Toft and Fares:


    "Genome shrinkage is a common feature of most intracellular pathogens and symbionts... Our analysis indicates that genes responsible for flagellar assembly have been partially or totally lost in most intracellular symbionts of gamma-Proteobacteria… Based on our results, we suggest that genes of flagellum have diverged functionally as to specialize in the export of proteins from the bacterium to the host [i.e., TTSS].

     Reduction of genome sizes is among the best-characterized evolutionary ways of intracellular organisms to save and avoid maintaining expensive redundant biological processes. Endosymbiotic bacteria of insects are examples of biological economy taken to completion because their genomes are dramatically reduced. These bacteria are nonmotile, and their biochemical processes are intimately related to those of their host. Because of this relationship, many of the processes in these bacteria have been either lost or have suffered massive remodeling to adapt to the intracellular symbiotic lifestyle. . . .

      Comparative genomic analyses show that flagellar genes have been differentially lost in endosymbiotic bacteria of insects. Only proteins involved in protein export within the flagella assembly pathway (type III secretion system and the basal body) have been kept in most of the endosymbionts, whereas those involved in building the filament and hook of flagella have only in few instances been kept, indicating a change in the functional purpose of this pathway." 25 


        Now, isn't that just most interesting? - totally unpredictable based on Miller's arguments.  Rather, it seems much more in line with the predictions of intelligent design: that what is more functionally complex can indeed degenerate into something that has fewer structural requirements.  But, is it just as easy to turn things around and go upstream, so to speak?  Not at all.  In other words, it is far easier to destroy a car's motility function and still have its headlights work than it is to start with working headlights and gain the motility function.  Yet, you won't hear this little interesting fact in Miller's books or lectures.  It certainly wasn't brought up by NOVA in their coverage of the Dover trial.  Even though these scientists knew of such facts, they probably didn't want to present such things to a general audience for fear of "confusing people" - - with the facts?





But the Lights Still Work!


        In short, what's wrong with Miller's argument is that the motility function of the bacterial flagellum does indeed require a certain rather large number of specifically arranged amino acid residue "parts" as well as the underlying codes in DNA.  Without all of these parts in place, in their proper order, at the same time, the motility function cannot be realized at all - not even a little bit.  Reduce the number of parts below this minimum threshold and the motility function simply disappears - poof.  Like turning out a light.  The fact that various subsystems might still maintain their own separate functionalities does not mean that the minimum structural requirements needed for the motility function of a bacterial flagellum are therefore significantly reducible.  They certainly are not.  The same thing is true of the motility function of a car.  Just because the lights and CD player might still work without the drive shaft doesn't mean the car's motility function is therefore "reducible".  It isn't. To suggest otherwise, as Miller and many other scientists do and as NOVA did, is simply a misdirect - and a seemingly deliberate one at that. 

        However, to help Miller out of a bit of a pickle here, just because a function requires a certain minimum structural threshold does not mean that it is necessarily unevolvable. This is perhaps where Behe could be more clear. Behe seems to indicate in his books and lectures that only certain types of biosystems are "irreducibly complex".  That's simply not true. It seems like all functional systems have minimum structural threshold requirements and therefore all are "irreducibly complex".  And, many types of these irreducible beneficially functional systems are actually evolvable.  Irreducible complexity does not automatically mean that a system cannot be evolved via random mutation and natural selection - despite Behe's apparent claims to the contrary.  If the next closest beneficial subsystem just so happens to be one or two residue changes or mutations away from a given starting point, the odds are extremely good that such an evolutionary step will be taken in very short order by a colony of just a few million bacteria (i.e., in one or two generations to cover such a small non-beneficial gap distance).  And, when it comes to many types of functional systems, such evolution does happen - and quickly (a few examples are discussed in some detail below).  

        The problem is that the proposed evolutionary mechanism of random mutation and function-based selection (i.e., natural selection) starts to stall out, in an exponential manner, with each step up the ladder of minimum structural threshold requirements.  While there are many examples of evolution in action producing novel systems of function that require dozens to a few hundred fairly specified amino acid residues, there are no examples of evolution in action (i.e., examples that can actually be observed in real time) beyond the 1,000aa threshold. There isn't a single example of such evolution in all of the scientific literature - not one.  

        What is the reason for this stalling-out effect - for this "limited evolutionary potential" where evolution happens very quickly for low-level functional systems, less quickly or often for higher-level systems, and not at all beyond the 1,000aa structural threshold?  Well, it seems as though the average distance (as a Poisson distribution) between what exists in a gene pool and what might exist to some benefit within the vastness of sequence space grows in a linear manner with each increase in the minimum structural threshold requirements of different types of functional systems.  Those types of systems that have greater minimum structural threshold requirements are more widely spaced, like islands in sequence space, from all other existing and potentially existing beneficial systems.  These higher-level islands are surrounded, on all sides, by non-beneficial sequences, so that getting from one island to the next by random mutation requires a truly random walk or random selection process.  Nature cannot guide the series of mutations across this gap because nature only selects, in a positive manner, what works right now - not what might work in the future.  So, until a random mutation happens to land on a distant island by sheer luck, natural selection plays no part.  As it turns out, a linear increase in the non-beneficial gap size translates into an exponential increase in the average number of mutations (and time) necessary to cross the gap.  Well before the 1,000aa threshold is reached, the average time required to cross the expanding gap works its way into the trillions upon trillions of years - even given a population of bacteria the size of all the bacteria on Earth (calculation).
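The claimed relationship - a linear increase in gap size producing an exponential increase in waiting time - can be sketched with a minimal toy model. This is my own illustration, not the author's linked calculation; it simply assumes that a gap of g specific residues must be matched by blind trial, with no selectable intermediates along the way:

```python
# Toy model (an illustration, not the essay's cited calculation): if crossing a
# non-beneficial gap requires hitting g specific residues with no selectable
# intermediates, and there are 20 amino acid options per position, then the
# expected number of random trials grows as 20**g - exponentially in the
# linearly growing gap width g.
def expected_trials(gap_width, alphabet_size=20):
    """Expected random trials to hit one specific gap_width-residue target."""
    return alphabet_size ** gap_width

for g in (1, 2, 5, 10):
    print(f"gap of {g} residues -> ~{expected_trials(g):,} trials")
```

Under these toy assumptions, each added residue of gap width multiplies the expected waiting time by a constant factor, which is the shape of the stalling-out effect described above.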



Examples of Evolution In Action











        Another argument forwarded by Kenneth Miller has to do with a very interesting paper by Johnson et al. reporting on the very real evolution of a novel enzymatic pathway for degrading 2,4-dinitrotoluene (2,4-DNT).9  What is interesting here is that 2,4-DNT is a synthetic compound, first synthesized in the 1930s, that is a component of the famous explosive TNT as well as of expanded polyurethane foam.  Johnson et al. noticed that certain types of bacteria in the surface water and soil of the Radford Army Ammunition Plant in western Virginia were actually eating, or metabolizing, 2,4-DNT.  The bacteria identified were Burkholderia cepacia R34, a strain that grew using 2,4-DNT as a sole carbon, energy, and nitrogen source. The genes in the evolved degradative pathway were identified within a 27 kb region of DNA.

        Now, what is most interesting is the way in which these bacteria achieved this feat.  They co-opted enzymes that were already present and working as parts of other enzymatic pathways to perform an entirely new type of function - i.e., the digestion or metabolism of 2,4-DNT.  As it turns out, the 2,4-DNT pathway that evolved ultimately involved the use of seven different enzymes.  "Inferences from the comparison of the structural genes of the 2,4-DNT pathway suggest that the pathway came together from three sources."9 

        Of the seven enzymes in the 2,4-DNT metabolic pathway, four key ones are dntAaAbAcAd (745aa), dntB (548aa), dntD (314aa), and dntG (281aa).  Note that the first two steps in the pathway (dntAaAbAcAd and dntB) produce the byproduct NO2- (nitrite).  As it turns out, nitrite can be used for energy by bacteria known as nitrifying bacteria.  And, you guessed it, Burkholderia cepacia are nitrifying bacteria.  Why is this important?  Because it means that each of the first two steps in the pathway is functionally beneficial on its own, since each provides a source of additional energy to the bacteria that gain such enzymatic activities (see addendum).10  

        In addition, each of these steppingstones has independent function in that no specific arrangement or orientation is needed, relative to the other elements in the enzymatic cascade, before its own function can be realized. Statistically, this is very important because far less structural specificity is required before the next functional step can be realized - especially if a functionally equivalent enzyme already exists as part of any other system of function.  And, guess what, all of the parts in the 2,4-DNT cascade already existed, preformed, as parts of other systems of function within the bacterium.   

        If all the needed enzymes are already being made, as parts of other systems, then obviously not much change or evolution is required to be able to use the 2,4-DNT molecule for energy. Unlike bacterial motility systems, enzymatic cascades need not self-assemble themselves in any particular way before the function in question can be realized.  All that needs to happen is for all the required enzymes to be present somewhere in the intracellular environment (in any order/arrangement).  This is not the case for non-cascading functions (i.e., bacterial motility systems) where all the protein parts are required to be in a particular order (i.e., a particular three dimensional arrangement) all working together at the same time before the function in question will be realized.

        This is not to say that cascading systems have no significant functional complexity. Many of them are quite complex, but none are significantly more complex than their most complex single component part.  The most complex single part in the 2,4-DNT cascade seems to be the dntAaAbAcAd enzyme, which requires around 745 fairly specified amino acid residues.  Given just this degree of specificity alone, without the original genes and enzymes in place to begin with, even this relatively simple enzymatic function would most likely not have evolved.  The authors themselves state as much when they note that "De novo evolution of genes for nitrotoluene degradation during the short period of time seems unlikely."9   

        Compare such cascading functional systems to a functional system like flagellar motility, where all the parts are required to be in a very specific arrangement relative to all the other parts in the system before the next beneficial steppingstone function can be realized. What this means is that the odds against having all the needed parts available somewhere for a cascade are much, much lower than the odds against achieving a fully specified arrangement of equal overall size.

        For example, if you needed 5 specific 3aa sequences to form a certain cascading-type function, what are the odds that all 5 will exist within a pool of 1 billion different 3aa sequences?  Well, since there are only 8,000 possible 3aa sequences (20^3), the odds that all 5 will exist preformed somewhere in the gene pool are very, very good - much better than a 99% chance. The calculation is as follows:

        The odds that a single random draw fails to match a specific 3aa sequence are 7999/8000 = 0.999875, so the odds that a single draw does match are 1 - 0.999875 = 0.000125.  The odds that a specific 3aa sequence will not appear anywhere in the pool are (7999/8000)^1.25e5 = ~1.63e-7, so the odds that it will appear somewhere are 1 - 1.63e-7 = ~0.9999998.  And, the odds that all five 3aa sequences will exist somewhere in this pool are 0.9999998^5 = ~0.999999. In other words, there is better than a 99.9% chance that all 5 needed parts will exist within a given genome.

        Now, compare this with the odds of achieving a system that requires all five specific 3aa sequences to be specifically arranged relative to each other.  The number of different specific sequences possible is at least 20^15 = 32,768,000,000,000,000,000 (~3.3e19).  So, the odds that one particular 15aa arrangement will appear within a pool of one billion different options is around 1 chance in 10^10 (or in one pool out of 10 billion pools where each pool contains a billion random 15aa sequences) - i.e., pretty slim odds. 11
        See the difference?
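The arithmetic in the last two paragraphs can be checked directly. The short script below simply reproduces the figures as given (including the 1.25e5 exponent used in the original calculation):

```python
# Cascade case: five specific 3aa parts drawn from a pool of 3aa sequences.
p_match_one_draw = 1 - 7999/8000        # 0.000125: one draw matches a target
p_absent = (7999/8000) ** 1.25e5        # specific 3aa part nowhere in the pool
p_present = 1 - p_absent                # ~0.9999998: the part exists somewhere
p_all_five = p_present ** 5             # ~0.999999: all five parts present

# Specifically arranged case: one particular 15aa arrangement.
total_15aa = 20 ** 15                   # ~3.3e19 possible 15aa arrangements
p_hit = 1e9 / total_15aa                # roughly 1 chance in 10^10 for a 1e9 pool

print(p_absent, p_all_five, p_hit)
```

Running this confirms the contrast: the cascade's parts are nearly certain to pre-exist, while the one specific arrangement is vanishingly unlikely to appear.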



Lactase and Nylonase


        Miller often uses actual examples of evolution in action in his lectures and books - like the evolution of novel nylonase and lactase enzymes.  While these most certainly are real examples of evolution in action, they are easily explained with the odds of evolvability being similar to those described above for the 2,4-DNT cascading system.  

        Let's start with the lactase example.  In his 1999 book, Finding Darwin's God, Miller challenges Behe's position with a research study from the early 1980s carried out by Barry Hall, a biologist at the University of Rochester.  What Hall did was very interesting. He deleted a gene (lacZ) in a type of bacteria (E. coli) that makes a lactase enzyme (beta-galactosidase).  This lactase enzyme converts a sugar called lactose into the sugars glucose and galactose.  E. coli then process glucose and galactose further to extract energy.  One might think that when Hall deleted the gene that codes for the lactase enzyme, these bacteria would never be able to use lactose for energy again.  However, when Hall exposed the mutant bacteria to lactose-enriched growth media, they quickly modified a different gene, which Hall named the "evolved beta-galactosidase gene" (ebg), to produce a pretty good lactase enzyme.  This is interesting because the original gene product did not have the lactase function.  Only after a key random mutation was this genetic sequence able to produce a protein with the lactase function.12  

Behe counters by arguing that, as far as the active sites of the lac and ebg beta-galactosidase enzymes are concerned, they are essentially the same, both being part of a family of highly conserved beta-galactosidases - identical at 13 of 15 active-site amino acid residues.  The two mutations in the ebg beta-galactosidase that increase its ability to hydrolyze lactose change the two non-identical residues back to those of the other beta-galactosidases.  So, before the evolution of the lactase ability of the ebg gene, its active site was already a near duplicate of those of other beta-galactosidases.13

  Even so, this really was quite an amazing experiment in that a novel enzymatic function, which was not present in the entire gene pool prior to random mutation and natural selection, did in fact evolve in real time.  According to Miller and Hall, and many others quoting the same or similar experiments, such experiments give demonstrable proof of the proposed evolutionary mechanism in action.  Obviously then, Behe does not know what he is talking about . . . or does he? Consider that, fairly often, things are not quite as they would appear at first glance.

Most descriptions of Hall's experiments end with E. coli evolving the lactase function back again.  This is very interesting because Hall's actual experiments did not end there.  After his initial success, Hall wondered if any other genes would be able to evolve the lactase function.  So, he deleted the ebg gene as well as the lacZ gene to test this hypothesis. And, something most interesting happened - nothing.  No new gene or portion of DNA evolved the lactase function despite tens of thousands of generations of time, a huge population size, high selection pressure, and a high mutation rate.  Now that is just fascinating . . .  Hall himself noted in his paper that these double mutant bacteria seemed to have “limited evolutionary potential.”12 

Other unfortunate bacteria seem to be just as "limited" in their evolutionary potential. Even though they would significantly benefit, many types of bacteria, after more than a million generations, have not been observed to evolve a relatively simple lactase enzyme. That is more generations than it supposedly took humans to evolve from ape-like creatures. One should also note that these same bacteria, unable to evolve a lactase enzyme, are all able to evolve, in relatively short order, resistance to any antibiotic that comes their way. So what is it, exactly, that “limits” the evolutionary potential of living things, like bacteria, in their ability to evolve some functions but not others?

I propose that the answer can be found in the number and density of beneficial “steppingstones” available (in the form of genetic sequences). For forms of antibiotic resistance that are gained by blocking the antibiotic-target function, there are lots of beneficial steppingstones very close together, but not so for the enzymatic functions of lactase, nylonase or penicillinase. Relatively speaking, there are very few such enzymes, compared to the total number of possible sequences. 

For example, there are 676 potential two-letter sequences in English. Of these, 96 are defined as meaningful words, creating a ratio of meaningful to meaningless of about 1 in 7. There are 972 meaningful three-letter words, but the total number of potential three-letter sequences increases 26-fold to 17,576. Since the number of meaningful words increased by only a fraction of this amount, the ratio of meaningful to meaningless dropped to about 1 in 18. 

  Still, such ratios are relatively high, and a random walk can get from any one-, two-, or three-letter word to any other via a path of meaningful words, as in the steppingstone sequence of cat - hat - bat - bad - bid - did - dig - dog. "Evolution" (changing meaning or "function") at this level is rather simple because the steppingstones are so close together. But, with each additional minimum letter requirement, the growth of the meaningless sequences quickly outpaces the growth of the total number of meaningful sequences, and the ratio of meaningful to meaningless gets smaller and smaller at an exponential rate. 
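The cat-to-dog chain above can be verified mechanically - each adjacent pair of words should differ by exactly one letter, the text analogue of a single point mutation:

```python
# Verify the steppingstone path: every step changes exactly one letter, so each
# intermediate word is reachable from the previous one by a single "mutation".
chain = ["cat", "hat", "bat", "bad", "bid", "did", "dig", "dog"]

def one_letter_apart(a, b):
    """True if two equal-length words differ at exactly one position."""
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

assert all(one_letter_apart(a, b) for a, b in zip(chain, chain[1:]))
print(" - ".join(chain), "is a valid one-letter-per-step path")
```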

For example, there are around 30,000 meaningful seven-letter words and combinations of smaller words totaling seven letters, but there are 8,031,810,176 potential seven-letter sequences. This produces a situation in which an average meaningful seven-letter sequence is surrounded by over 250,000 meaningless sequences. Obviously then, compared to three-letter steppingstones, it is much harder to “evolve” between meaningful seven-letter steppingstones without having to cross through a little ocean of meaningless sequences. 
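The arithmetic behind these ratios is easy to check. A minimal sketch (Python; the meaningful-word counts are simply the figures quoted above, while the totals are 26^n):

```python
# Meaningful-word counts as quoted in the text; totals are 26**n
# possible letter sequences of length n.
meaningful = {2: 96, 3: 972, 7: 30_000}

for n, m in sorted(meaningful.items()):
    total = 26 ** n
    print(f"{n}-letter: {m:,} meaningful of {total:,} -> about 1 in {total // m}")
```

Running this reproduces the 1-in-7, 1-in-18, and roughly 1-in-250,000 figures cited above.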

The same thing happens with the genetic codes in living things. The more genetic letters that are required to achieve a particular function, and the higher the level of the specificity of their arrangement, the more junk there is compared to the relatively few beneficial sequences at such a level of complexity.

For example, a simple BLAST database search14 of known proteins shows that the shortest working lactase enzyme found in a living organism seems to require around 400 amino acids at minimum, with at least a fair degree of specificity. Some estimates suggest that the total number of beneficial sequences at the 400-amino-acid level of specified complexity is less than 10^100 sequences.15,16 Now, considering that the total number of atoms in the entire known universe is around 10^80, this 10^100 number seems absolutely huge!17 Huge, that is, until one considers that there are over 10^520 possible sequences at this level of complexity, which creates a ratio of beneficial to non-beneficial sequences of about 1 in 10^420 (which is like finding a single atom in zillions of universes). The actual ratio of lactases to non-lactases is probably somewhat less extreme than this, given a wider range of tolerated sequence flexibility (i.e., lower specificity). 
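These exponents can be sanity-checked with logarithms. A minimal sketch (Python; it assumes 20 possible amino acids at each of 400 positions, and takes the 10^100 estimate of beneficial sequences as given from the text):

```python
from math import log10

positions, alphabet = 400, 20            # 400 residues, 20 amino acids
total_log = positions * log10(alphabet)  # log10 of possible sequences (~520)
beneficial_log = 100                     # log10 of estimated beneficial set

print(f"possible sequences ~ 10^{total_log:.0f}")
print(f"beneficial-to-total ratio ~ 1 in 10^{total_log - beneficial_log:.0f}")
```

Since 20^400 is about 10^520, subtracting exponents gives a ratio on the order of 1 in 10^420.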


Nylonase, on the other hand, is in exactly the same boat.  The nylonase enzyme was originally thought to have evolved via a frameshift mutation in a stretch of DNA coding for a 472aa protein.  This frameshift was thought to have been caused by the insertion of a single thymine nucleotide at just the right spot to create a "start codon" and produce an entirely new 392aa protein (6-aminohexanoic acid linear oligomer hydrolase).20 Other nylonase proteins are coded for by as few as 355aa, with what seems to be fairly loose minimum sequence specificity - even compared to the lactase enzymatic function.21   However, a series of more recent studies by a team led by Seiji Negoro, of the University of Hyogo, Japan, suggests that in fact no frameshift mutation was involved in the evolution of the 6-aminohexanoic acid hydrolase (i.e., nylonase).22  Many other genes have been discovered that did evolve by gene duplication followed by a frameshift mutation affecting at least part of the gene; a 2006 study found 470 examples in humans alone.23 Scientists have also been able to induce another species of bacteria, Pseudomonas aeruginosa, to evolve the ability to break down the same nylon byproducts in the laboratory, by forcing the bacteria to live in an environment with no other source of nutrients - using different enzymes than those utilized by the original Flavobacterium strain.24


Statistically, this means a nylonase enzyme is at least as easy to evolve as a lactase enzyme if not easier.   






To further illustrate the concept of an expanding sequence space and potential evolutionary steppingstones within that space, consider Choi and Kim's paper (illustrated figures above and to the right) and their "global view of the protein structure space."  Choi and Kim did something very interesting. They mapped "1,898 nonredundant protein structures from Protein Data Bank [onto] 3D space [down from the hyperdimensional space of protein-sequence/structure space] to visualize the major feature of the map. The protein structure space is sparsely populated, and all of the proteins of known structures cluster mostly into four elongated regions, which correspond approximately to four SCOP classes (all-{alpha}, all-beta, {alpha}+beta, and {alpha}/beta) of protein structures indicated by red, yellow, purple, and cyan spheres, respectively. The small proteins and multidomain protein classes are represented by green and black spheres, respectively. All structural class assignments were based on the SCOP classification. Three axes are drawn in to visualize high-population regions of all-{alpha}, all-beta, and {alpha}/beta class proteins, and the "origin" is represented by a large orange ball at the point where two of the axes meet." 18

Given this description, notice how the small proteins (green spheres) are much more closely spaced and clustered together than the multidomain proteins (black spheres) and other larger proteins (other colors), which occupy much, much larger regions of sequence and structure space. There is a progressive increase in the average distance between viable, beneficial protein structures as size requirements increase. This only highlights the fact that increasing structural threshold requirements produce a lower ratio of beneficial sequences and wider non-beneficial gaps between potentially viable protein-based systems in sequence/structure space. This feature is illustrated even more clearly in the figure above (c), which uses a size scale in which the shortest proteins are colored dark blue, medium-sized proteins green to yellow, and the largest proteins red.  Guess which beneficial protein systems have the greatest average distance from each other? 

Also, consider that the three-dimensional illustration presented dramatically understates the actual distances that exist in hyperdimensional sequence/structure space.  It is like projecting the shadows of widely spaced objects in three-dimensional space onto a two-dimensional screen: the resulting dots on the screen appear much closer together than the objects really are.  Now extrapolate this effect across hundreds and thousands of dimensions (one extra dimension for every one-amino-acid increase in protein system size) to appreciate the true gap distances illustrated by Choi and Kim.
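The projection effect can be illustrated numerically. A minimal sketch (Python; random Gaussian points merely stand in for structures, and 300 dimensions is an arbitrary illustrative choice, not a figure from Choi and Kim):

```python
import math
import random

# Compare mean pairwise distance in d dimensions with the distance after
# keeping only the first 3 coordinates (a crude "3-D projection").
def rand_point(d, rng):
    return [rng.gauss(0, 1) for _ in range(d)]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

rng = random.Random(1)
d = 300
pairs = [(rand_point(d, rng), rand_point(d, rng)) for _ in range(200)]

full = sum(dist(a, b) for a, b in pairs) / len(pairs)
proj = sum(dist(a[:3], b[:3]) for a, b in pairs) / len(pairs)
print(f"mean distance in {d}-D: {full:.1f}; after 3-D projection: {proj:.1f}")
```

The projected distances come out roughly sqrt(3/d) times the originals, so points that are far apart in 300 dimensions look tightly clustered once flattened into three.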

Erich Bornberg-Bauer's paper dealing with model protein structures (comparable to real proteins) supports the notion that sequence space is sparsely populated with fairly evenly distributed viable proteins even at low levels of structural threshold requirements - features that, I propose, only become exponentially more accentuated with each step up the ladder of minimum structural threshold requirements.


"Roughly speaking, however, distances are randomly distributed. This means that, although only a small fraction of sequence space yields uniquely folding sequences, sequence space is occupied nearly uniformly. No "higher order" clustering (i.e., except the trivial case of the homologous sequences) is visible." 19



Real Life


Of course, since nature cannot tell the difference between two meaningless genetic sequences, it cannot select between them; natural selection is blind to such neutral changes. Since there are no recognizable "steppingstones" close by, all that nature has left for finding new beneficial sequences is a blind random walk through enormous piles of junk sequences. This random, meandering walk takes far longer than a direct walk would, and the time involved increases exponentially with each increase in the minimum sequence and specificity requirements for a particular function. Random sampling of sequences within sequence space, starting from a beneficial island (like throwing darts at a dartboard), has no statistical advantage over a neutral random walk when it comes to finding novel beneficial sequences.  This prediction is reflected in real life by an exponential decline in the ability of mindless evolutionary processes to evolve anything beyond the lowest levels of functional complexity.
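The claim that blind search time scales with the ratio of non-beneficial to beneficial sequences can be sketched with a toy Monte-Carlo simulation (Python; N and k are arbitrary illustrative values, not biological estimates):

```python
import random

# Toy sketch: blindly sampling a space of N sequences containing k
# "beneficial" ones takes, on average, about N/k draws to find a hit.
def mean_draws(N, k, trials=2000, rng=random.Random(0)):
    total = 0
    for _ in range(trials):
        draws = 1
        while rng.randrange(N) >= k:  # miss one of the k beneficial targets
            draws += 1
        total += draws
    return total / trials

for N in (100, 1_000, 10_000):
    print(f"N={N:>6}, k=10: ~{mean_draws(N, 10):.0f} draws on average")
```

Each tenfold growth of the space (with the number of beneficial targets held fixed) multiplies the average search effort tenfold, which is the scaling the paragraph above appeals to.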

Many simple functions, such as de novo antibiotic resistance, are easy for any bacterial colony to evolve in short order. Moving up a level of complexity, there are far fewer examples of single-protein enzymes evolving where a few hundred amino acids, at minimum, are required to work together at the same time (and many types of bacteria cannot evolve even at this level). However, there are absolutely no examples in the scientific literature of the evolution of any function requiring more than a thousand or so amino acids working at the same time (as in the simplest bacterial motility system) - period. The beneficial "steppingstones" are just too far apart, owing to all the junk that separates the few beneficial islands of function from every other island in the vast universe of junk sequences at such levels of informational complexity. The average time needed to randomly sort through enough junk sequences to find any other beneficial function at such a level of complexity quickly works its way into trillions upon trillions of years, even for an enormous population of bacteria (all the bacteria on Earth: ~10^30) with a high mutation rate (one mutation per 100,000 base pairs per individual every 20 minutes).  (Link)

At this point the mindless processes of evolution simply become untenable as any sort of viable explanation for the high levels of diverse complexity that we see within all living things. The only process left that is known to give rise to functional systems at comparable levels of complexity involves human intelligence or beyond. No lesser intelligence, and certainly no other known mindless processes, have ever come close to producing something like the informational complexity found in the simplest bacterial motility system. (Link)



  1. Miller, Kenneth R., Finding Darwin’s God, HarperCollins Publishers, 1999.

  2. Behe, Michael J. Darwin’s Black Box, The Free Press, 1996. 

  3. Anand Sukhan, Tomoko Kubori, James Wilson, and Jorge E. Galán. 2001. Genetic Analysis of Assembly of the Salmonella enterica Serovar Typhimurium Type III Secretion-Associated Needle Complex. J. Bacteriology 183: 1159-1167.  

  4. Macnab, R. M., 1999. The bacterial flagellum: reversible rotary propellor and type III export apparatus. J Bacteriology. 181 (23), 7149-7153.

  5. He, S. Y., 1998. Type III protein secretion in plant and animal pathogenic bacteria. Annual Reviews in Phytopathology. 36, 363-392.  

  6. Kim, J. F., 2001. Revisiting the chlamydial type III protein secretion system: clues to the origin of type III protein secretion. Trends Genet. 17 (2), 65-69.  

  7. Plano, G. V., Day, J. B. and Ferracci, F., 2001. Type III export: new uses for an old pathway. Mol Microbiol. 40 (2), 284-293.  

  8. Nguyen, L., Paulsen, I. T., Tchieu, J., Hueck, C. J. and Saier, M. H., Jr., 2000. Phylogenetic analyses of the constituents of Type III protein secretion systems. J Mol Microbiol Biotechnol. 2 (2), 125-144.  

  9. Johnson GR, Jain RK, Spain JC. "Origins of the 2,4-dinitrotoluene pathway." J Bacteriol. 2002 Aug;184(15):4219-32. (Free full-text article Link)

  10. Emiko Matsuzaka, Nobuhiko Nomura, Hideaki Maseda, Hiroshi Otagaki, Toshiaki Nakajima-Kambe, Tadaatsu Nakahara and Hiroo Uchiyama Participation of Nitrite Reductase in Conversion of NO2- to NO3 - in a Heterotrophic Nitrifier, Burkholderia cepacia NH-17, with Denitrification Activity, Microbes and Environments, Vol. 18 (2003) , No. 4 pp.203-209 (Link)

  11. Talk Origins Debate, February 16, 2007 (Link)

  12. B.G. Hall, Evolution on a Petri Dish.  The Evolved B-Galactosidase System as a Model for Studying Acquisitive Evolution in the Laboratory, Evolutionary Biology, 15(1982): 85-150.

  13. Behe, Michael J., "A True Acid Test" - Response to Kenneth Miller, Discovery Institute, May 2002. (Link)

  14. BLAST Search:

  15. Yockey, H.P. 1992. Information Theory and Molecular Biology. Cambridge University Press, pp. 255, 257.  

  16. Yockey, H.P., On the information content of cytochrome C, Journal of Theoretical Biology , 67 (1977), p. 345-376.  

  17. Anonymous. n.d. The Universe. National Solar Observatory, Sacramento Peak.  [Ed. note: The number of atoms according to this reference is estimated to be 10^79.]

  18. In-Geol Choi*, and Sung-Hou Kim, Evolution of protein structural classes and protein sequence families, PNAS | September 19, 2006 | vol. 103 | no. 38 | 14056-14061 ( Link )

  19. Erich Bornberg-Bauer, How Are Model Protein Structures Distributed in Sequence Space? Biophysical Journal, Volume 73, November 1997, 2393-2403 ( Link )

  20. Susumu Ohno, "Birth of a unique enzyme from an alternative reading frame of the pre-existed, internally repetitious coding sequence",  Proc. Natl. Acad. Sci. USA, Vol. 81, pp. 2421-2425, April 1984. ( Link )  See also: New Mexicans for Science and Reason

  21. Seiji Negoro, Shinji Kakudo, Itaru Urabe, and Hirosuke Okadam, "A New Nylon Oligomer Degradation Gene (nylC) on Plasmid pOAD2 from a Flavobacterium sp.," Journal of Bacteriology, Dec. 1992, p. 7948-7953. ( Link )

  22. Negoro S, Ohki T, Shibata N, et al. (June 2007). "Nylon-oligomer degrading enzyme/substrate complex: catalytic mechanism of 6-aminohexanoate-dimer hydrolase". J. Mol. Biol. 370 (1): 142–56. ( Link )

  23. Okamura K, Feuk L, Marquès-Bonet T, Navarro A, Scherer SW (December 2006). "Frequent appearance of novel protein-coding sequences by frameshift translation". Genomics 88 (6): 690–7.

  24. Prijambada ID, Negoro S, Yomo T, Urabe I (May 1995). "Emergence of nylon oligomer degradation enzymes in Pseudomonas aeruginosa PAO through experimental evolution". Appl. Environ. Microbiol. 61 (5): 2020–2.

  25. Christina Toft and Mario A. Fares, The Evolution of the Flagellar Assembly Pathway in Endosymbiotic Bacterial Genomes, Molecular Biology and Evolution 2008 25(9):2069-2076



See Also:  Miller's Lecture at Case Western University: YouTube Link





The metabolic characteristics of the NO2--transforming activities of Burkholderia cepacia NH-17, which was isolated as a heterotrophic nitrifying bacterium with O2-tolerant denitrification activity, were characterized. The conversion of NO2- to N2O and NO3- occurred concomitantly with a decrease in NO2- under aerobic conditions in growing cultures. In an in vivo assay, production of N2O and NO3- was induced by NO2- as an inducer for denitrification, and a nitrite reductase (NiR) assay on the sonicated fraction indicated that in vitro nitrite reductase activity was also induced by NO2-. These results suggested that nitrification and denitrification in Burkholderia cepacia NH-17 might be closely related. Therefore, we constructed a nirS knockout mutant of Burkholderia cepacia NH-17. The mutant had no in vitro nitrite reductase activity and did not convert NO2- to N2O and NO3-. These properties were restored by introducing the intact nirS gene into the mutant strain, indicating that reduction of NO2- to NO is necessary for the conversion of NO2- to NO3- in Burkholderia cepacia NH-17.10





Review of Miller's interview with NOVA (Link):

My comments are indented and are in Blue:


In Defense of Evolution

Dr. Kenneth Miller is as familiar as anyone in the scientific community with the intelligent-design movement and its attempts to undermine the theory of evolution. A professor of biology at Brown University and coauthor (with Joe Levine) of the standard high-school textbook Biology, Miller testified at the Dover trial as an expert witness for the plaintiffs, the Dover parents who brought suit against their town's school board. Here, Miller, who stresses that he is also a man of faith, talks about why evolution matters, what flaws he sees in the intelligent-design argument, and why the Dover decision hardly means the end of the controversy.

Faith and reason

Q: Why is evolution so controversial?

Kenneth Miller: I think one of the reasons why evolution is such a contentious issue, quite frankly, is the same reason you can go into a bar and start a fight by saying something about somebody's mother. Evolution concerns who we are and how we got here. And to an awful lot of people, the story of evolution, the story of our continuity with every other living thing on this planet, that's not a story they want to hear.

They favor an entirely different story, in which our ancestry is separate, our biology distinct, and the whole notion of our lineage traceable not to other organisms, but to some sort of divine power and divine presence. But it's absolutely true that our ancestry traces itself along the same thread as that of every other living organism. That, for many people, is the unwelcome message, and I think that's why evolution has been, is, and will remain such a controversial idea for many years to come.

Sean Pitman: I agree.  All ideas that affect one's view of where one came from and why one is here on this planet are bound to be tied up with a fair degree of emotion - at least for most people.  What is interesting is that scientists are not immune from this sort of emotional bias. Evolutionists, just like creationists and those who believe in some form of intelligent design or input into the origins of life, are often quite passionate about their respective positions on origins.  Scientists are no more immune from this sort of bias than are philosophers, plumbers, or preachers.

Q: Where do you come from personally on this topic?

Miller: I think that faith and reason are both gifts from God. And if God is real, then faith and reason should complement each other rather than be in conflict. Science is the child of reason. Reason has given us the ability to establish the scientific method to investigate the world around us, and to show that the world and the universe in which we live are far vaster and far more complex, and I think far more wonderful, than anyone could have imagined 1,000 or 2,000 years ago.

Does that mean that scientific reason, by taking some of the mystery out of nature, has taken away faith? I don't think so. I think by revealing a world that is infinitely more complex and infinitely more varied and creative than we had ever believed before, in a way it deepens our faith and our appreciation for the author of that nature, the author of that physical universe. And to people of faith, that author is God.

Now, I'm a scientist and I have faith in God. But that doesn't make faith a scientific proposition. Faith and reason are both necessary to the religious person for a proper understanding of the world in which we live, and there is ultimately no necessary contradiction between reason and faith.

"Whether God exists or not is not a scientific question."

Sean Pitman: I'm most intrigued by Miller's thoughts here.  How is Miller's description of "faith" in God any better than wishful thinking or a child's belief in Santa Claus?  I may be wrong, but it seems to me that Miller is talking about some sort of fantasy or concept of completely blind "faith" where one believes in this or that hopeful reality based on absolutely nothing but feelings or desire. In my opinion, those like Richard Dawkins are correct in becoming quite exasperated by such thinking and rightfully calling it "The God Delusion".  

While I personally do believe in an intelligent Creator God, I do so because I think there is solid, testable, falsifiable evidence for a God-like higher power that goes far beyond human-level intelligence, power, and creativity.  If I did not at least think I recognized such evidence, there is no way I would actually worship a God for which I saw no physical evidence of his/her/its existence or interaction with any aspect of nature.

Q: What's wrong with bringing God into the picture as an explanation?

Miller: Supernatural causes for natural phenomena are always possible. What's different, however, in the scientific view is the acknowledgement that if supernatural causes are there, they are above our capacity to analyze and interpret.

Saying that something has a supernatural cause is always possible, but saying that the supernatural can be investigated by science, which always has to work with natural tools and mechanisms, is simply incorrect. So by placing the supernatural as a cause in science, you effectively have what you might call a science-stopper. If you attribute an event to the supernatural, you can by definition investigate it no further.

If you close off investigation, you don't look for natural causes. If we had done that 100 years ago in biology, think of what we wouldn't have discovered because we would have said, "Well, the designer did it. End of story. Let's go do something else." It would have been a terrible day for science.

Sean Pitman: I see this argument all the time and am always amazed by how many otherwise intelligent men and women use it and/or are taken in by it.  If a God, or someone with at least God-like powers and intelligence, decided to manipulate nature in any way, Miller and many other scientists actually argue that it would be impossible for humans to recognize such manipulation as the result of deliberate intent or "artifact".  Yet, when it comes to detecting deliberate human activity, activity that is arguably far less intelligent than anything anyone would call "God-like", scientists don't seem to have any problem detecting design. 

Entire scientific disciplines are built around the concept of detecting deliberate activity behind various phenomena in nature, including forensic science and anthropology.  Of course, these disciplines are built on previous experience with, and direct observations of, humans in action. Yet there are scientists who do in fact propose that highly intelligent activity, even superhuman-level intelligence, can be detected without any knowledge of the actual identity, motive, or method of the intelligent agents.  These scientists spend their time searching for signs of intelligence coming from outer space, as in the search for extraterrestrial intelligence, or SETI.

The argument is, of course, that humans and alien intelligences living somewhere in outer space are "natural", not "supernatural", and can therefore potentially be detected by scientific investigation. So, what if someone with God-like intelligence decided to manipulate nature in a way that at least simulates what a human or alien intelligence could do - something that would be detectable as artifact?  Would it then be possible to detect such activity as intelligent or artifactual in nature, rather than the result of some as yet unknown non-intelligent natural process? 

Q: Does science have limits to what it can tell us?

Miller: If science is competent at anything, it's in investigating the natural and material world around us. What science isn't very good at is answering questions that also matter to us in a big way, such as the meaning, value, and purpose of things. Science is silent on those issues. There are a whole host of philosophical and moral questions that are important to us as human beings for which we have to make up our minds using a method outside of science.

Sean Pitman: It just so happens that I like vanilla ice cream.  That's a fact.  And, it didn't take any scientific investigation for me to discover this fact.  It is not subject to testing or falsification by me or anyone else.  It just is.  It is an internally derived truth.  Is it important to me? Well, I'm kind of glad I know it as a truth.  It saves me a lot of time and frustration when I go to pick out an ice cream to buy at the store.

Is a belief in the existence of God as a "truth" kind of like the truth that I like vanilla ice cream?  Well, I may really like the concept or idea of a God or God-like being.  It may really appeal to me.  However, once I start suggesting how this God would act, or actually did act or interact with the physical world that exists outside of my own mind, I have moved into the realm of science.  Making assertions about what God does or did outside of my own mind, without at least some physical evidence to back them up, amounts to a kind of deliberate delusion based on nothing more than mental projections or very strong emotional desires - desires that do not necessarily have anything to do with the reality that actually exists outside of, and independent of, one's own mind.

Q: Can science prove or disprove the existence of a creator, of God?

Miller: Whether God exists or not is not a scientific question.

Sean Pitman: Actually, it is, or it at least could be a scientific question.  It all depends on if God wishes to act in a way that is detectable as "artifactual" from a human perspective.  If a God does actually exists that wishes or has actually acted in such a way, such actions could, theoretically at least, be detectable as "deliberate" and "intelligent" - - just as any alien intelligence could be detectable as such by SETI scientists.

Evolution in a nutshell

Q: What is evolution exactly?

Miller: Well, everyone knows that evolution, in a sense, is change over time.

Sean Pitman: Well, lots of things change over time.  Even intelligent design advocates and creationists recognize this fact.  The question is: How do living things change over time, and to what degree?  For example, there is a distinct difference between the kinds of change over time proposed by Gregor Mendel and those suggested by Darwin.  Darwinian-style change cannot be explained by Mendelian-style change alone; the changes over time proposed by Darwin require a different sort of mechanism, one that goes beyond Mendelian-type change. 

But what few people understand is how straightforward the nature of this change is. It's important to understand, first of all, that individuals don't evolve. I'm not evolving into something else, and my dog isn't evolving into something else. I'm going to remain a human being, he's going to remain a dog. That's the way things are going to work. What changes over time are populations of individuals, for very straightforward reasons.

Sean Pitman: Strictly speaking, individuals do "change" over time.  Parts of individuals even undergo what could be called Darwinian-style evolution over time - such as the human immune system, which is often used as an example of functional evolution in action.  Some individuals evolve entirely new proteins or chimeric protein combinations, such as the famous BCR/ABL tyrosine kinase fusion protein seen in people who develop chronic myelogenous leukemia (CML).  However, it is true that such "changes" aren't going to change a dog into a chicken, etc. 

Number one, every species shows variation among individual members of that population.

Sean Pitman: This is true.

Number two, individuals in a population show what biologists call differential reproductive success. Some individuals leave more offspring than others. Some people have no children; some people have big families.

Sean Pitman: Also true.

Finally, one of the factors that influences differential reproductive success is how well-suited individuals are to the present environment in which they find themselves—how good they are at obtaining food, defending themselves against their enemies, resisting disease, and finding and meeting a member of the opposite sex and raising offspring. All these things matter.

Sean Pitman: Right . . .

What Darwin appreciated is that nature herself selects from variants in the population for those that are best able to succeed in this race for differential reproductive success. Over time, and given a steady input of new variation into the population, that can change the average characteristics of a species, and it can split one species into two.

Sean Pitman: Absolutely.  A good example would be horses and donkeys - two "species" that clearly had a common ancestor, share the same basic "gene pool", and can interbreed to produce viable, though generally infertile, offspring (i.e., mules and/or hinnies). However, Mendelian variation can also change the average characteristics of a population over time under the guidance of natural selection.  Yet Mendelian variation cannot create truly novel gene pools with unique functional elements in the offspring that were not already present in the parent population.

Those species, those two groups, can then go on changing in different directions. That's what leads to the formation of yet more new species. Nature herself automatically selects for favorable variations, and this is the driving engine of evolutionary change. That, in a nutshell, is what evolution is.

Sean Pitman:  This definition of evolution allows for more types of "change" than just Darwinian-style change over time.  The real issue here is the concept of Darwinian-style evolution, in which truly novel functional elements are added to gene pools over time.  That is the definition of Darwinian-style evolution, in a nutshell. 

The question then is: Can Darwinian-style evolution happen, and if so, does it have any evident limitations when it comes to the type or nature or degree of novel functional elements that can be produced? 

Evolutionists strongly believe that, given enough time and the appropriate environments or environmental changes, the answer to this question is no: there is no significant limit to the nature or degree of novel functional systems that can be added to any gene pool.

It is this notion that creationists and intelligent design theorists wish to challenge.  Many, including a number of very well-educated scientists and even several Nobel Laureates, are starting to question this particular claim of mainstream evolutionary theory.

Q: Why is evolution important? How does it affect people in their everyday lives?

Miller: We should care about evolution because it concerns who we are, where we came from, why we are the way we are, and maybe even where we're going.

Sean Pitman:  That's true . . .

The whole notion that biology is wrapped up in the idea of evolution is extremely important to experimental biologists, because otherwise, to paraphrase another scientist, biology is nothing but stamp collecting. It's an exercise in which you say, "Here's a worm and here's how worms work, and here's this type of cell and here's how this cell works. And here is a plant, and here is how plants work."

If they're all completely unrelated, then biology is not a unified science.

Sean Pitman:  That's right . . .  Clearly biological organisms are very "related".  All living things share a great deal in common with each other, and clearly every living thing has some sort of common origin.  This is not in question.  What is in question is which common origin is most likely.  Is it a common ancestor of all life that gave rise to all the various forms and functions of biosystem complexity we see today, via random mutation and natural selection acting over millions and billions of generations?  Or is the common origin found in a common intelligent Designer/Creator? Or, and I favor this option myself, is there very good evidence that both processes have been at play in the origin and diversity of living things?

But we know from a half century of biochemistry and molecular biology that all these living organisms, no matter how diverse they are, share certain common features, and those common features include the way in which they store and transmit and evolve information, and these common features tie all of life together. They help us to understand our own bodies and our own genomes in the light of the bodies and genomes of other organisms. So what evolution really does is to make sense of biology, and what biology does is to help us make sense of ourselves, our own lives, and the planet on which we live.

Sean Pitman:  Darwinian-style evolution does indeed make sense of a number of interesting aspects of biology.  However, the Theory of Evolution, as it currently stands, does not recognize certain abundantly obvious limitations.  These limitations make it impossible to explain, from the standpoint of the proposed entirely non-deliberate mechanism of random mutation and function-based selection, the existence of certain kinds of functional systems that exist in every living thing. 

For example, while there are many examples of evolution in action producing novel biosystems that have few minimum structural threshold requirements (i.e., less than a few hundred specifically arranged amino acid residues), there are no examples of evolution in action producing a novel functionally beneficial biosystem that has a minimum structural threshold requirement of more than 1,000 specifically arranged residues - - not one example in all of scientific literature.

Now isn't that most interesting?  Why might there be such a stalling-out effect illustrated by known examples of evolution in action?  I suggest that the reason for such a stalling-out effect is the dramatic decrease in the ratio of potentially beneficial sequences/structures in sequence/structural space as one considers biosystems with greater and greater minimum structural threshold requirements.  I term this problem the exponentially expanding non-beneficial gap problem.
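The arithmetic behind this claim is easy to sketch: each additional specifically required residue multiplies the size of protein sequence space by 20, so the space grows exponentially with the minimum length requirement. Here is a toy calculation of the exponents involved; it only sizes the space and says nothing about how densely beneficial sequences are actually distributed within it:

```python
import math

# Toy arithmetic: the number of possible protein sequences of length n
# over the 20 amino acid alphabet is 20**n.  We report base-10
# exponents, since the raw counts are astronomically large.

def sequence_space_exponent(length, alphabet=20):
    """log10 of the number of possible sequences of the given length."""
    return length * math.log10(alphabet)

for n in (100, 500, 1000):
    print(f"{n:>4} residues -> ~10^{sequence_space_exponent(n):.0f} sequences")
```

A chain of 100 residues already allows on the order of 10^130 arrangements; at 1,000 residues the exponent itself is tenfold larger.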

Intelligent design

Q: What is intelligent design?

Miller: My understanding of intelligent design is that it is the argument that the structures, features, organs, and biochemical pathways that we find in living cells are so complex that they could not have been produced by natural processes such as evolution and that they would require the intervention of an intelligent designer outside of nature to bring them into existence.

Sean Pitman:  Close, but not quite right.  Intelligent design theory does not care if the proposed designer is "natural" or "supernatural".  The intelligent agent could be a very smart alien from a galaxy far far away from the perspective of ID Theory.  ID Theory says absolutely nothing about the actual identity or nature of the intelligent agent beyond the notion that the agent is actually intelligent.  To suggest that ID is proposing that the intelligent agent is also "Supernatural" is a mischaracterization of the basic tenet of ID Theory.

Now, it is true that those who carry the title of "IDists" have actually gone about proposing who they think their proposed designer actually is.  This fact notwithstanding, the very basic notion that one can detect deliberate artifact without any additional knowledge of who, why, or how is still true and has at least the potential to be used as a valid scientific hypothesis to explain many types of phenomena observed in nature.

"I often hear people say that they're not descended from monkeys. Well, they're right."

Sean Pitman:  And, descent from a pre-monkey pre-human common ancestor is supposed to make them feel better?  ; )

Q: [Phillip Johnson, the father of the intelligent-design movement] likens this process to flipping a coin: if it lands and it's not heads, it must be tails. He says that evolution can't account for the diversity of life, therefore it's got to be something else. The only other thing it could be is an intelligent designer.

Miller: It's a negative argument in the sense that their proof of the existence of a designer is the alleged inadequacy of evolution to account for these complex features. What's wrong with that explanation is that it's a contrived dualism. It's an argument that says, "Either evolution can explain everything, or we can invoke an intelligent designer." What it amounts to, for example, is the claim that the moon is made of green cheese, and someone else says, "No, I think it's made of granite." Then we go to the moon, we bring back samples of rock, and we say, "You know what? They're not made out of granite." Does that mean we now have definite proof for the green-cheese explanation? Of course not.

The whole idea of intelligent design is a confession on the part of its advocates that they actually can't get any evidence at all in favor of a designer. So what they resort to is the notion that it's either evolution or it's design. And if evolution right now, today, cannot explain everything, that lack of a complete explanation amounts to evidence for the other side.

Well, it doesn't. What it really points out would be the current inadequacy of science to explain everything. And science, as any realist knows, is necessarily incomplete. On the day when we have a complete scientific explanation for everything in nature, it'll be time to close every science department of every research institution in the world, because all questions will have been figured out. I don't expect to see that day. But that doesn't mean that the incompleteness of science is an argument for a supernatural alternative like intelligent design.

Sean Pitman:  Much, if not all, of science is based on the potential for falsification.  If a hypothesis or theory cannot be falsified, then some would say it isn't a scientific hypothesis or theory.  Many scientific theories are in fact set up in a rather dualistic way, such that if X is not true, then Y most likely is true.  Again, SETI science is also based on this very same "contrived" dualism.  To quote Seth Shostak, senior astronomer at the SETI Institute:

"Perhaps the extraterrestrials will preface their [radiosignal] message with a string of prime numbers, or maybe the first fifty terms of the ever-popular Fibonacci series. Well, there's no doubt that such tags would convey intelligence." (Link)

Sean Pitman:  This is an interesting statement.  Why is it that such a numerical pattern, carried in the medium of a radio signal, would so clearly indicate deliberate artifact?  By Miller's argument it would be at least possible that some as yet unknown non-deliberate natural phenomenon may have been responsible.  In fact, to ever stop looking for such a non-deliberate phenomenon, and to conclude the action of some unknown intelligent agent even after a protracted search, would simply be anti-scientific - - at least according to Miller's argument.  It seems that Miller would have one conclude a non-intelligent cause no matter what, and keep up the search for one forever.

Is this a reasonable position? How long is it actually reasonable to search for a non-intelligent answer for a given phenomenon before the hypothesis of deliberate artifact or design gains a reasonable level of credibility?  While no scientific hypothesis can ever be fully confirmed in that there is always a possibility of being wrong or of having one's theory falsified, is there a point at which the weight of evidence favors even the hypothesis of deliberate design in certain cases beyond a "reasonable" doubt? 
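For concreteness, the sort of numerical tag Shostak describes is trivial for an intelligent agent to generate, and no known non-deliberate natural process produces it. A quick sketch:

```python
def fibonacci(n):
    """First n terms of the Fibonacci series -- the kind of numerical
    'tag' Shostak suggests would unambiguously convey intelligence."""
    terms, a, b = [], 1, 1
    for _ in range(n):
        terms.append(a)
        a, b = b, a + b
    return terms

print(fibonacci(10))
```

The detection logic is the same dualism at work: on finding such a sequence in a signal, one infers a deliberate source without knowing anything about who produced it or how.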

Q: What's the harm in introducing intelligent design into a science classroom?

Miller: One could very well say that a God, a designer, a supernatural force was responsible, let's say, for an event that happened in your life or my life, or was responsible for our ability to meet the challenges of life. I don't see anything wrong with that, and it might be a valid explanation in many cases. But pretending that that explanation is a scientific one is a violation of everything we mean and understand by science.

Sean Pitman:  Let's say that Miller travels to some alien planet, like Mars, and while walking around on this seemingly barren planet comes across a highly polished granite cube that measures one meter on each side, with faces parallel and perpendicular to within 0.01% of perfect symmetry.  In the center of each cube face is a highly symmetrical carved triangle that measures 10 cm on each side and is carved to a depth of 1 cm. 

What would Miller instantly assume if he were to find such a granite cube on Mars?  I'm pretty sure that even Miller would very quickly propose some sort of artifactual origin - - even without knowing the actual identity, motives, or methods of his proposed intelligent agents.

Now, why might that be?  Though it is impossible to know for sure, one could still be pretty confident that such a cube is highly unlikely to have been formed by any known non-deliberate force of nature.  So, what is the most likely "contrived dualistic hypothesis"?  Even Miller would no doubt propose, contrived dualism notwithstanding, the hypothesis of deliberate artifact.  Just don't call the designer "God" is all.

Bringing that idea into the school classroom seems innocuous enough, because all you would do is tell students, well, there's either the evolution explanation or the design explanation. But consider the implications of that. If we present the idea of intelligent design as an alternative to evolution, students, who are very bright, are going to understand something right away, and that is, basically, you've got your atheist theory over here and you have your Bible or God-friendly theory over there.

Sean Pitman:  I'm afraid very bright students would be just a bit brighter than to simply accept such a mischaracterization without question. If anything is an unwarranted dualistic hypothesis, this is it.  Although the Theory of Evolution does not require philosophical atheism, what it does do, at least according to those like Richard Dawkins, is make it "possible to be an intellectually fulfilled atheist" (Richard Dawkins, The Blind Watchmaker, p. 6). Without the theory of evolution, being an intellectually fulfilled atheist is just a bit more difficult is all. It certainly was for most, even within the scientific community, before Darwin came along.

What it does is to falsely cast evolution in light of an inherently atheistic idea. This is the goal of the intelligent-design movement, indirectly to tell students that either you turn your back on the faith that you've been brought up with in order to embrace the scientific mainstream, or to be true to your faith you have to reject modern science. That's a false choice. It does disservice to religion, and it does disservice to science, and I think it is a terrible way to proceed with scientific education.

Sean Pitman:  In a way there is a choice.  Both the Christian religion and mainstream science say things about human origins as well as the origins of all living things.  Not all of the various views of Christianity and science complement each other.  Many are completely dichotomous.  The student is therefore faced with a choice.  Many religious ideas the student may have had will most definitely have to be discarded if the student decides to accept the evolutionary story of origins - - and vice versa.  This is no false choice or dichotomy.  There is a difference between being religious and believing in some Santa Claus version of God, and being religious where one's religion proposes very specific views about how the physical world/universe works and how God actually interacts with that world/universe.

Common ancestry

Q: People often say, "I'm not descended from a monkey." What's the true relationship there?

Miller: Well, I often hear people say that they're not descended from monkeys, and they would defy me or anybody else to show that they are. Well, they're right, they're not descended from monkeys. They're not descended from chimps or monkeys or gorillas or any other living organism.

The essential idea of common ancestry is that ultimately all living things on this planet share common ancestors if we go far enough back into the past. So, for example, to take the case that people talk about all the time, we share a common ancestor with all primate species. This means that we're related, by having a single ancestor somewhere in the past, to monkeys, gorillas, chimpanzees, and so forth.

Sean Pitman:  Which I'm sure makes those who do not want to be descended from a monkey much more relieved!  ; )

But the idea of common ancestry goes way deeper than simply saying we're related to monkeys. We're in fact related to all mammals. You go farther back, we are related to all vertebrates. And, ultimately, we are related, if you go far enough back, to every living thing on this planet. The almost universal nature of the genetic code, the fact that all life depends upon DNA, all of these things are evidence of this commonality of ancestry, if we go far enough back in time.

Sean Pitman:  Or, at least some of these similarities could be evidence of common design - - that is, if certain features of living things could be shown to be clearly beyond the powers of random mutation and natural selection acting over the course of hundreds of millions and even many billions of years.

Q: One of the lines of evidence that you pointed out at the Dover trial is the organization of our own chromosomes. How is that evidence for common ancestry?

Miller: We've known for a long time that we humans share common ancestry with the other great apes—gorillas, orangs, chimps, and bonobos. But there's an interesting problem here. We humans have 46 chromosomes; all the other great apes have 48. In a sense, we're missing a pair of chromosomes, two chromosomes. How did that happen?

Well, is it possible that in the line that led to us, a pair of chromosomes was simply lost, dropping us from 24 pairs to 23? Well, the answer to that is no. The loss of both members of a pair would actually be fatal in any primate. There is only one possibility, and that is that two chromosomes that were separate became fused to form a single chromosome. If that happened, it would drop us from 24 pairs to 23, and it would explain the data.

"The closer we look at our own DNA, the more powerful the evidence becomes for our common ancestry with other species."

Here's the interesting point, and this is why evolution is a science. That possibility is testable. If we indeed were formed that way, then somewhere in our genome there has to be a chromosome that was formed by the fusion of two other chromosomes. Now, how would we find that? It's easier than you might think.

Every chromosome has a special DNA sequence at both ends called the telomere sequence. Near the middle it has another special sequence called the centromere. If one of our chromosomes was formed by the fusion of two ancestral chromosomes, what we should be able to see is that we possess a chromosome in which telomere DNA is found in the center where it actually doesn't belong, and that the chromosome has two centromeres. So all we have to do is to look at our own genome, look at our own DNA, and see, do we have a chromosome that fits these features?

We do. It's human chromosome number 2, and the evidence is unmistakable. We have two centromeres, we have telomere DNA near the center, and the genes even line up corresponding to primate chromosome numbers 12 and 13.

Is there any way that intelligent design or special creation could explain why we have a chromosome like this? The only way that I can think of is if you're willing to say that the intelligent designer rigged chromosome number 2 to fool us into thinking that we had evolved. The closer we look at our own DNA, the more detailed a glimpse we get of our own genome, the more powerful the evidence becomes for our common ancestry with other species.
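The test Miller describes is, at bottom, a string search: look for runs of telomere repeat sequence stranded far from the chromosome ends, where it does not normally belong. A minimal sketch of the idea in Python (the parameters and any input sequence here are illustrative toys, not real genomic data or a real annotation pipeline):

```python
# Scan a DNA string for runs of the vertebrate telomeric repeat
# (TTAGGG) that sit far from either end of the sequence.

TELOMERE_REPEAT = "TTAGGG"

def interstitial_repeats(seq, min_run=3, end_margin=1000):
    """Return start positions of runs of at least `min_run` telomeric
    repeats lying more than `end_margin` bases from both ends."""
    run = TELOMERE_REPEAT * min_run
    hits, start = [], 0
    while True:
        i = seq.find(run, start)
        if i == -1:
            return hits
        if end_margin < i < len(seq) - end_margin - len(run):
            hits.append(i)
        start = i + 1
```

Repeats at the extreme ends are ignored (they are where telomeres belong); only interstitial hits, like those actually found in human chromosome 2, would count as fusion evidence.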

Sean Pitman:  Case closed! - right?  Just because one feature can be easily explained by a particular mechanism doesn't mean that all features can be as easily explained.  The problem with the proposed evolutionary mechanism of random mutation and Natural Selection is that Natural Selection only recognizes functional genetic differences. Nature can only select, in a positive manner, those differences that are functionally more beneficial than what came before.  Pointing out numerical similarities or differences between the genomes of great apes and humans is interesting and can certainly be explained by non-deliberate processes, but it really doesn't say anything about the functional differences involved.  The real question is: Can the proposed mechanism explain the functional differences that exist between one creature and another?  The overall pattern isn't a problem. The problems arise when one starts to consider functional differences at different minimum structural threshold requirements.

But, what about the fact that the observed pattern is quite predictable from the perspective of common descent?  That's true. The pattern is quite predictable given the hypothesis of common descent.  However, such patterns were once thought to be right in line with the mind of a very orderly God who produced a very integrated, patterned, orderly creation. For example, Carl Linnaeus, the father of modern taxonomy and of modern ecology, was the man who laid the foundations of the modern classification and naming of living things according to a binomial nomenclature.  Yet, Linnaeus was a firm believer in God and believed that the patterns and similarities he observed in living things were the result of the mind of an orderly God, a God who was interested in exploring the range of creative possibilities within various basic body shapes, plans, forms, and functions.

From this perspective, finding a spectrum of creatures with seemingly slight genetic and morphologic variations in a nested pattern or "hierarchy" is quite consistent and even predictable.  Could I possibly argue that the fusion of a human chromosome relative to that of the great apes is actually predictable given the hypothesis of separate design?  It really isn't that much of a stretch to suggest the possibility of separate design of both humans and apes with 48 chromosomes each, followed by subsequent fusion of two of these chromosomes in the human line during a population bottleneck. 

Both options are at least reasonably possible given the truth of either the design or evolutionary hypotheses.  So, how to judge which hypothesis is most likely true? - beyond a reasonable doubt?  To be honest, the design hypothesis has the greatest burden of proof.  So, if there is any evidence that could possibly tip the scales in favor of separate design of apes and humans, it would have to be rather overwhelming.  So far, I haven't found such overwhelming evidence from genetics, morphology, or the fossil record to argue that humans and apes definitely show evidence of independent creation.  I do believe, however, that such evidence may be found with more detailed investigation into the functional differences between apes and humans.  Again, this is because the limits to the evolutionary mechanism are functional limits, not pattern limits.  While these functional limitations have not clearly been crossed between humans and apes, at least not to my knowledge, they are crossed in other well investigated biosystems that happen to be a bit more humble - such as flagellar motility systems, DNA transcription and RNA translation, ATPase function, and a host of other such functional systems that require more than 1,000 specifically arranged codons of genetic real estate at minimum.

The process of evolution

Q: What do gaps in the fossil record represent vis a vis evolution? Why are such gaps not a problem for evolutionary theory?

Miller: It's important to appreciate that all historical records are necessarily incomplete. We don't have complete data for any historical process. I've tried to trace my own ancestry, and after about four generations, we lose bits and pieces of it. I don't think that means I don't have any ancestry. I think it means that some of the evidence is missing.

The same is true for the study of history. We know, for example, when and where the Battle of Gettysburg took place in the Civil War. We know the opposing generals on both sides. But we don't know exactly what every soldier, by name, was doing at every moment during the Battle of Gettysburg. That doesn't mean Gettysburg didn't take place. It doesn't mean that the Union forces didn't win. It simply means we have more to learn about that battle.

The same is true for the fossil record. We have an enormous amount of information as to what life was like in the past. That information tells us that life changed, that it changed in a particular pattern, and that the history of change is complete, with one example after another of descent with modification, an ancestor-descendant relationship between organisms. And in a few lucky cases, we can trace almost step by step the evolution of key organisms in the history of life. [See Fossil Evidence.]

Sean Pitman:  The problem here is that what may seem like a very small step morphologically is not necessarily that small of a step genetically.  Evolutionists love to talk about how this feature morphed into that feature without ever really getting into just what genetic changes would be required or the pathway that such genetic changes could possibly follow where each specific mutation would be functionally beneficial over that which came before.  The really significant gaps are not in the fossil record, but in the genetics.  Assumed evolutionary relationships based on morphology change all the time. 

     "The more similar two species looked, the more closely related they were thought to be. But looks can be deceptive. This became abundantly clear more than a decade ago, when molecular biologists began comparing small numbers of genes from various organisms and found that many species were not what they appeared. Hippos, for example, were once thought to be the kissing cousins of pigs, but genetic evidence revealed their closest living relatives to be the cetaceans (whales, dolphins and porpoises).

     Without the insights of molecular analysis, traditional morphologists also had no way of knowing whether a particular feature had been lost in a given lineage, or had never been there in the first place. In line with the idea that things evolve towards increasing complexity, they tended to assume the latter, sometimes quite incorrectly. Take the sea squirt. Its larva swims around looking like a tadpole, with a nerve cord along its back, gill slits for feeding and a tail - all classic features of chordates, the large group of animals with backbones that includes us. Then, however, it stands on its head and turns into a sack of jelly, having first digested what it had of a brain. The adult looks suspiciously like a plant. For a long time it was considered to be one of the most primitive chordates because of its simple adult form - about as far from vertebrates as it was possible to get. In between were myriad other groups, including the lancelets - fish-like animals that hang on to their nerve cords into adulthood. Then molecular studies revealed that sea squirts are genetically closer to us than are lancelets, and the tree had to be reshuffled." (Link)

So, the fossil evidence and morphologic evidence really isn't that reliable.  Many assumptions are made that are not necessarily true, even from the evolutionary perspective. Again, it is all back to genetics when it comes to really evaluating the plausibility of all of the claims of the Theory of Evolution, its potential, and limitations.

Q: What about the claim that no one's ever seen a new species form?

Miller: Right now new species are literally in the process of forming in the state of California. For years David Wake of the University of California at Berkeley has studied different species of salamander that surround the Central Valley in California. When you look at the range of these species, what you discover is that the local variations at the very ends of the range are now so different from each other that if you capture them both and you put them side by side in a cage, any biologist would agree that they are distinct and separate species. Nonetheless, they have been produced in recent times simply by the spreading of salamanders over a geographic range.

Many opponents of evolution will sort of retreat and say, "Well, okay, but those species are really similar to each other. Show us a species that is dramatically different." But that initial splitting, that's the phenomenon that actually drives evolution. You shouldn't expect to see a cat suddenly give birth to a dog or something along those lines. At the moment when one species splits into two, you should see two distinctly different species that still show the similarities that previously united them within a single classification. We see this happen all the time.

The people who say that macroevolution, by which they mean really big evolution, has never been observed, inevitably cannot give you a strict and rigorous definition of what macroevolution is. They'll simply say it's the formation of new categories or evolutionary novelties. They're loath to put specifics on that idea, to tell you what percentage of the genes or how many base pairs of DNA have to change, because I think they know very well that once they make specific what they mean by macroevolution, some darn biologist is going to go out into the field or into the lab and follow exactly that rate of change and show that macroevolution really does occur.

Sean Pitman:  This is a valid argument.  The subjective nature of what defines a species has always been a problem - for evolutionists as well as creationists and IDists. 

For example, scientists from Berkeley have noted that "the planktonic larvae of many marine invertebrates are commonly described as separate species when they are first discovered in the ocean. Only later when they can be reared in the laboratory can the link to their adult form be recognized. Similarly, the different life stages of many fungi are given different names because they have different physical forms and hosts. Only through detailed inoculation studies can mycologists work out which forms are members of the same life cycle. Since some fungi may have more than five discrete life cycle stages, this can be a long process. Similar problems exist for some marine algae and multiple-host parasitic organisms of many kinds. Even among well-studied vertebrates, some tropical birds have been described as separate species until they are observed to mate and rear young together."

The naming of hominid fossils is not immune from this subjective problem.  In a March 2002 statement, Tim White, who co-directs the Laboratory for Human Evolutionary Studies, said, "There’s been a recent tendency to give a different name to each of the fossils that comes out of the ground, and that has led to what we think is a very misleading portrayal of the biology of human evolution… But when you find a fossil like this one so similar to Asian and European ones, it indicates the same species."  "This whole species question is all about what you accept as a sharp enough distinction to tell you that it is a separate species," said Susan Anton, a Rutgers University anthropologist.

The classification of plants is classically prone to giving different names to very similar plants or even to parts of the same plant.  Bill DiMichele, a paleobotanist, notes, "The problem of organ association is one of the reasons why paleobotanists insist on so many different names for isolated parts of the same whole plant. Furthermore, there are phenotypic convergences that can cause great confusion, such as leaves of virtually identical morphology borne on ferns and seed plants. Separate names for each fossil plant organ can be carried to extremes, however, and not all paleobotanists, myself included, favor the attribution of separate names to organs otherwise known in attachment (yes, this is still done routinely, no kidding)." (Link)

However, functional aspects of specific biosystems are more objectively defined.  A functional system that has a certain minimum structural requirement can be studied in sufficient detail for that minimum requirement to be known fairly objectively.  As it turns out, all examples of evolution in action produce novel beneficial systems that require no more than a few hundred specifically arranged codons of DNA or amino acid residues.  There are no examples in the scientific literature of evolution producing any novel system of function that requires a minimum structural threshold of more than 1,000 amino acid residue building blocks, or 1,000 codons of genetic real estate to code for it.  There's not a single example beyond this threshold level - period.  If Miller wants a precise definition of what constitutes "macroevolution", this is it.  Evolution just doesn't happen at this level or beyond.

Q: Another criticism often made is that all this couldn't just have happened by random chance.

Miller: One of the great mischaracterizations of evolution is that it's driven by random chance, that things just happen. People like to say, "I don't like to believe that I'm just an accident." Well, you're not. What evolution says is that the variation that crops up in a species is indeed unpredictable. We can't be sure what will happen next. But that doesn't mean it's random.

To me, the word "random" means anything can happen. But the reality is that evolutionary change is restricted. It's restricted by the laws of physics and chemistry. It's restricted by the nature of molecular biology. It's restricted by the constraints of developmental biology during development. Most importantly, evolutionary change is governed by natural selection, and natural selection is not a random process at all. Natural selection selects for successful phenotypes, for successful combinations of characteristics that actually work, and that's not random at all.

Sean Pitman:  Darwinian-style evolution is supposed to use aspects of both random and non-random forces.  The reason why random mutations are called random mutations is because they are in fact random.  During DNA replication, for example, any base can be miscopied and turned into any of the other three options (i.e., a T could get turned into an A, G, or C - or it could get deleted, or an extra base could be added) at any genetic locus.  Such changes are pretty random when they happen. 
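The point-mutation process just described can be sketched in a few lines. This is a toy simulator of substitutions only (insertions and deletions are omitted for brevity), with an illustrative mutation rate, not a claim about real per-generation rates:

```python
import random

BASES = "ACGT"

def point_mutate(seq, rate, rng=random):
    """Return a copy of seq in which each base independently has a
    `rate` chance of being replaced by one of the other three bases."""
    out = []
    for base in seq:
        if rng.random() < rate:
            # The replacement is drawn uniformly from the 3 other bases.
            out.append(rng.choice([b for b in BASES if b != base]))
        else:
            out.append(base)
    return "".join(out)
```

Nothing in the function knows or cares whether a given change will help, hurt, or do nothing; that sorting is left entirely to selection after the fact.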

Of course, natural selection is supposed to come to the rescue as the non-random force of Nature.  While it is certainly true that Natural Selection is a non-random force, it is not true that Natural Selection overcomes the problems associated with non-directed random mutations or "random chance".  The reason for this is that Natural Selection is very limited in which random changes it can actually recognize and base a selection upon.  To overcome random chance, a random mutation must produce a functional genetic change that actually affects the creature's survival and reproductive success.  And, in order for Natural Selection to select in a positive manner, the random change must be functionally beneficial. 

As it turns out, the majority of random mutations produce no detectable functional change.  These mutations are called "neutral mutations."  There is even a neutral theory of evolution.  The problem is that neutral mutations are truly random since they cannot be guided by the non-random force of Natural Selection.  This is a problem because a linear increase in the gap distance between a starting point and the next closest potentially beneficial genetic target equates to an exponential increase in the number of random mutations needed to achieve success.  This is why there is a marked stalling-out effect of evolutionary potential when it comes to finding "targets" with higher and higher minimum structural threshold requirements.  The minimum gap distance increases.  As a result, the average time needed to achieve success grows - - exponentially.
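The linear-gap-to-exponential-time claim can be put in toy form: if a gap of g specific base positions must be crossed with no selectable intermediates, a blind search is effectively drawing from 4^g equally likely variants, so the expected number of trials scales as 4^g. This is deliberately simplified arithmetic to illustrate the shape of the argument, not a population-genetics model:

```python
# Toy model: crossing a neutral gap of g specific base positions by
# blind search means hitting one particular variant out of 4**g
# equally likely options (4 bases per position), so each extra base of
# gap multiplies the expected search effort by 4.

def expected_random_trials(gap_bases):
    """Expected trials to hit one specific gap-crossing variant."""
    return 4 ** gap_bases

for g in (5, 10, 20):
    print(f"gap of {g:>2} bases -> ~{expected_random_trials(g):,} expected trials")
```

Doubling the gap from 10 to 20 bases multiplies the expected effort about a million-fold, which is the sense in which a linear gap increase is claimed to cost exponential time.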

"Any theory that can stand up to 150 years of continuous testing is a pretty darn good theory."

Sean Pitman:  There have been a lot of wrong theories that have lasted much longer.  Besides, the evolutionary mechanism has not been tested beyond the very lowest rungs of the ladder of minimum structural threshold requirements.  Well, that's not quite true.  It has been tested, but it has never passed the test beyond the 1,000aa threshold level.

Q: I have heard critics say that mutation doesn't create information, it destroys it.

Miller: That notion is at variance with the facts. Four or five million years ago, for example, the Antarctic Ocean, which was warm at the time, froze over as a result of a kind of climate change on this planet. Well, to this day, there are fish that swim in the oceans of Antarctica. One of the interesting things about those fish is that even though the saltwater is actually below the freezing point—our own blood would freeze solid in that cold water—these fish don't. The reason they don't freeze solid is because their blood contains an antifreeze protein, sort of the biological equivalent of ethylene glycol in antifreeze.

Well, how did they get it? It turns out that the antifreeze protein that is found in the blood of Antarctic fishes was the result of a digestive enzyme that was mutated, retargeted to the bloodstream, and then mutated again and again to enhance its antifreeze properties. All of these changes were the result of mutation.

Now, that Antarctic fish has a kind of biological information that its ancestors didn't have. It has the ability to make a completely new protein that enables it to survive in very cold waters by preventing its blood from freezing. That's novel information, and it's information that was produced by the process of mutation.

Sean Pitman:  That's true.  Those who argue that it is impossible for random mutation to actually get lucky once in a while and land on some beneficial target are mistaken.  There are many such examples.  Several are discussed above in this essay dealing with Miller's best arguments (Link).  The problem here is that not all functions are created equal.  Different types of functional systems have different minimum structural threshold requirements.  The evolution of single-protein enzymes that perform truly novel functions, as in this case of antifreeze evolution and other cases such as lactase or nylonase evolution, is truly spectacular.  However, none of these examples produces a protein-based system that requires more than a few hundred specifically arranged residues working in concert.  That's the problem, because there are in fact many systems in all living things that go well beyond this 1,000aa threshold.  How did these systems come to be?  Explaining how evolutionary processes produced such systems isn't nearly as easy as explaining relatively small, novel, single-protein enzymes and the like. In fact, higher-level systems are exponentially more difficult to explain.

The test of time

Q: How do you answer the charge that evolution has never been tested?

Miller: Evolution is tested every day in the laboratory, and it's tested every day in the field. I can't think of a single scientific theory that has been more controversial than evolution, and when theories are controversial, people devise tests to see if they're right. Evolution has been tested continuously for almost 150 years and not a single observation, not a single experimental result, has ever emerged in 150 years that contradicts the general outlines of the theory of evolution.

Any theory that can stand up to 150 years of continuous testing is a pretty darn good theory. We use evolution to develop drugs. We use evolution to develop vaccines. We use evolution to manage wildlife. We use evolution to interpret our own genome. Every one of these uses of evolution is a test, because if the use turns out to be inadequate, we would then go back and question the very idea of evolution itself. But evolution has turned out to be such a powerful, productive, and hardworking theory that it's survived that test of time.

Sean Pitman:  Again, all of these examples of evolution being tested do not test evolution beyond the very lowest levels of novel functional complexity.  None of these examples produces anything requiring more than a few hundred specifically arranged genetic codons or amino acid residues.

No one is arguing that evolution doesn't happen at all.  The argument is that the evolutionary mechanism is significantly limited - i.e., limited to very low levels of functional complexity that never go beyond the 1,000aa threshold.

Q: So when they talk about teaching the strengths and weaknesses of evolution, what are the weaknesses?

Miller: Evolution has great strengths in that it unifies biology and gives us a coherent explanation. Its only weakness is that it hasn't explained everything yet.

For example, we have great doubts as to what the evolutionary purpose of sex is. Now, sex is everywhere, not just in us, but also in trees and flowers and microorganisms. It's very difficult to understand exactly how sex first evolved, why there are only two sexes, and why things work the way they do. Evolution hasn't completely explained that yet.

We also don't understand where the first living cell came from or how prebiological evolution took place. But most of us in science don't regard the inability of science to explain everything as weakness. We regard that as the unexplored territory that's going to keep most of us busy for the rest of our careers.

Sean Pitman:  No one expects any valid scientific theory to explain everything.  If everything could be explained, science would no longer be needed.  Science is only needed because of our inability to perfectly know anything about the world and universe in which we live.  Science produces predictive power that never reaches perfection.  However, when a theory consistently runs into very clear limitations, those limitations may eventually undermine many of the claims that were once supported by it.

A complexity theory

Q: What is irreducible complexity?

Miller: Irreducible complexity is a term that was first used on behalf of the intelligent-design movement by Michael Behe, a biochemist at Lehigh University . What Behe observed is that living cells are filled with complex biochemical systems and that these systems have multiple parts. Dr. Behe has argued that systems like that are irreducibly complex. He says that all these parts are required for the system to function, and if you take even one away, it stops working. That means its complexity is irreducible. In other words, you need all the parts.

If that were true, it would indeed be a powerful argument against evolution, because the only way evolution can produce these complex systems is by putting together a few parts at a time. And if there is no function until all the parts are assembled, evolution's in trouble. That's the argument from irreducible complexity.

In reality, these supposedly irreducibly complex systems are cobbled together by evolution from individual systems that have functions of their own.

Sean Pitman:  Interesting that this cobbling never actually happens when the system in question requires over 1,000 specifically arranged amino acid residues or codons of DNA at minimum.  It's like getting from one meaningful 3-letter sequence to another by using what already exists in a genome - -  as in cat to hat to bat to bad to bid to did to dig to dog.  Easy.  This is because the ratio of potentially meaningful 3-letter sequences in the English language system is about 1 in 18.  For 2-letter sequences it is about 1 in 9.  However, for 7-letter sequences the ratio drops to about 1 in 250,000. 
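The word-ladder picture can be made concrete with a short sketch. The tiny word list below is a hypothetical stand-in for a real dictionary; the point is only that, at 3 letters, meaningful neighbors are common enough that a breadth-first search finds a chain of single-letter changes from "cat" to "dog" (it happens to find a slightly shorter chain than the one above, skipping "hat").

```python
from collections import deque

# Minimal word-ladder search over a tiny, hand-picked word list
# (a hypothetical stand-in for a real dictionary).

WORDS = {"cat", "hat", "bat", "bad", "bid", "did", "dig", "dog"}

def one_letter_apart(a, b):
    """True if a and b differ in exactly one position."""
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

def ladder(start, goal, words):
    """Breadth-first search for a path of single-letter changes."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for w in words - seen:
            if one_letter_apart(path[-1], w):
                seen.add(w)
                queue.append(path + [w])
    return None  # no path: the gap cannot be crossed one change at a time

print(ladder("cat", "dog", WORDS))
# -> ['cat', 'bat', 'bad', 'bid', 'did', 'dig', 'dog']
```

The essay's argument is that at 7 letters, where only about 1 in 250,000 sequences is meaningful, such densely connected ladders largely disappear, and a search like this one would usually return None.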

Because this ratio declines exponentially, the odds that the gap between a starting point and the next closest potentially beneficial target will always be just one character change away drop exponentially as well.  Pretty soon the minimum gap distance grows from 1 to 2 for a given "pool" of options.  With a gap of 2 needed character differences, the question is, what are the odds that these two characters will exist pre-formed somewhere in the pool of options?  And, beyond this, what are the odds that these two characters will get copied and pasted properly into the location where they would create a new beneficial sequence?

This is exactly the same problem that exists in genetics.  There is no fundamental difference. And, this problem only gets dramatically worse as one considers systems with greater and greater minimum sequence and/or structural threshold requirements.  By the time the minimum requirements are up to 1,000 codons or amino acid residues, the likely minimum gap distance is not one or two or three needed characters, but dozens of needed characters.  The odds that a gap distance of 50 or so needed characters exists pre-formed, in a specific order, anywhere in even a very large pool of genetic options are extremely remote.  Without such a pre-formed subsequence or structure, no single mutation is going to be able to cross the gap between what already exists in a gene pool and what might be beneficial if it were ever found by pure random mutation.
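The question of whether a specific pre-formed subsequence exists anywhere in a pool can be put into rough numbers. The sketch below is a back-of-the-envelope upper bound, not a population-genetics model; the function name and the 10-billion-base pool size are hypothetical choices for illustration.

```python
# Back-of-the-envelope upper bound (illustrative numbers only): with N
# possible starting positions in a random pool, each matching a specific
# k-base sequence with probability (1/4)**k, the chance that the sequence
# sits pre-formed somewhere in the pool is at most about N * (1/4)**k.

def chance_preformed(k, pool_size, alphabet_size=4):
    """Upper-bound estimate of finding one specific k-length sequence in the pool."""
    return min(1.0, pool_size * (1.0 / alphabet_size) ** k)

for k in (2, 20, 50):
    print(k, chance_preformed(k, pool_size=1e10))  # 10-billion-base pool
```

Under these toy assumptions a 2-character gap is essentially guaranteed to be bridgeable from existing sequence, while a specific 50-character pre-formed match has a chance on the order of 10^-20 even in a 10-billion-base pool.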

Miller and others have proposed that the actual steppingstones needed to form higher-level functional systems, like flagellar motility, which requires well over 10,000 codons of DNA, are actually small steps. There are several facts that strongly question this assertion. Perhaps the most obvious problem is that not one of these proposed steps in the evolution of something as minimally complex as the bacterial flagellum has ever been demonstrated to evolve in nature or under laboratory conditions - not one step.  The only evidence behind assertions that such steppingstones could have paved the way is storytelling and hand-waving.  There simply is no experimental evidence for any proposed step crossing the gaps between novel functional targets that exist beyond the 1,000aa threshold level.

Q: Dr. Behe has pointed to the bacterial flagellum as a good example of irreducible complexity. Can you explain why you think it isn't?

Miller: Well, the bacterial flagellum is this marvelous little machine that consists of about 30 or 35 individual proteins, and the argument is if you take even one part away, the flagellum doesn't work anymore. So evolution couldn't possibly have produced it, because evolution is blind. Evolution couldn't say, "Well, we've got 20 parts for the flagellum. Next year we'll evolve the 21st part, and then 22 and then 23, and maybe in 10 million years, we'll get the 30th part, and everything will start working." Evolution doesn't work that way.

When you look at the experiments that biologists and biochemists have done on the bacterial flagellum, you discover that little clusters of proteins from the flagellum are busy doing other functions in other bacteria that don't have flagella.

"Not a single scientific paper has been published that supports the notion of irreducible complexity."

For example, about 10 of those proteins in the base of the flagellum form a little machine called the Type 3 Secretory system. It's kind of like a molecular syringe that bacteria use to pump poisons into cells they're attacking. This system, this little syringe, is found in bacteria that don't have flagella.

The very existence of this little subset of parts, just 10 parts, with a perfectly good function of their own, shows that the idea of irreducible complexity is wrong. And when you take the flagellum apart, you discover that virtually every protein in there is related to another family of proteins that performs a different function somewhere else in the cell.

So the prediction of evolution, which is that these complex systems are actually slapped together by scavenging pieces of different systems, turns out to be true. And the prediction made by irreducible complexity that none of these proteins would have any function until they're all put together and all work, that prediction turns out to be wrong.

In the 10 years since Professor Behe first advanced the idea of irreducible complexity, not a single scientific paper, even from his own lab, has been published that supports the notion of irreducible complexity for any of the systems that he described, and that's why the scientific community simply has not embraced this idea.

Sean Pitman:  Miller doesn't seem to understand the concept of irreducibility.  Irreducibility means that a system requires a certain minimum number of parts in a certain arrangement for that particular function to work.  It doesn't matter if some subsystem would still work in some other capacity if parts were removed. This doesn't change the fact that a minimum number of parts in a particular arrangement is still required for the function in question to be realized.

For example, consider the powered motility function of a car.  This function requires a certain minimum number of parts in a particular arrangement.  Now, let's say that I take away the drive shaft.  What happens to the motility function of the car?  The motility function instantly disappears - right? But the lights and radio still work.  Does this mean that the car's motility function is therefore "reducible"?  Of course not. 

All functional systems are irreducible in this sense. And, all functional systems that I can think of have subsystems that could be used in beneficial capacities as parts of other systems.  Longer genetic sequences no doubt have shorter subsequences that make sense in many other capacities.  For example, consider the Shakespearean phrase used by Richard Dawkins, "Methinks it is like a weasel".  By itself it may have a beneficial function in a certain environment.  Remove a few characters though and that function may be completely lost.  Does this mean that all function will be lost?  Not at all.  The sub-phrase "it is like" might still be quite useful in various contexts.  Even a single word or a single letter would no doubt be quite useful in many different contexts. 

The same thing is true of biosystems.  All functional elements have minimum structural requirements to perform their specific functions.  Might their sub-elements have other beneficial functions?  Of course. A single protein, like lactase, might have subsections that perform other useful functions in various other systems within the organism.  But, that subsystem has its own minimum structural threshold requirement and its own subsystems that could perform other useful, though more basic, functions in the organism - - all the way down to single amino acid building blocks.  These single amino acids also have minimum structural requirements.  They also have subsystems in the form of different kinds of atoms that individually can perform many useful functions as parts of other systems or molecules within the organism.

Again, it is like the difference between 3-character, 7-character, and 1,000 character sequences.  Raising the minimum part requirement doesn't remove the potential for functionality of lower-level systems.  What it does do, however, is to increase the likely minimum gap distance between what exists and what might exist, in a beneficial manner, within the gene pool.    

Q: In the trial, both Michael Behe and Scott Minnich [a microbiologist at the University of Idaho who is a proponent of intelligent design] claim that intelligent design is testable, but then they say that they don't conduct those tests. What does that indicate to you?

Miller: One of the biggest problems with intelligent design is it's not empirical. It doesn't feature any testing. The advocates of intelligent design are not experimentalists. They're not going out in the lab and doing experiments to see this. Both Michael Behe and Scott Minnich have said that one could disprove intelligent design by taking a bacterium in the laboratory that didn't have a flagellum and evolving a flagellum in it.

Well, that's a ridiculous proposal for an experiment for two very simple reasons. First of all, the experiment would probably take 10 to 100 million years to carry out, and it's kind of hard to get funding for that long. The second reason is that what they propose is to retrace the path of an existing sequence of evolutionary changes. Evolution doesn't repeat itself like that. So even if we were absolutely certain the flagellum had been produced by evolution, we wouldn't expect the same sequence of events to happen again. That's a critical point.

Sean Pitman:  The test wouldn't have to produce an entire flagellum from scratch.  Miller and many others have argued that the relevant steps between beneficial steppingstones along the pathway toward flagellar motility are closely spaced.  All they have to do is demonstrate that one or two of these proposed steps are in fact crossable by random mutation and function-based selection. If these steps are in fact as close together as Miller claims, then the experimental setup and confirmation should be no problem.

The truth of the matter is that Miller and other scientists prefer to stick with their just-so stories at such levels of complexity, even when it comes to demonstrating just one of their proposed steppingstones, because that is all they have.  Statistically, each one of their proposed steppingstones would require not 10 or 100 million years, but trillions upon trillions of years.  That is why it is much easier to sit back and tell these oh-so-plausible stories as long as they can keep their audience from actually considering the statistics involved.

Miller: A better test for the whole notion of irreducible complexity is just to compare various bacterial genomes and see if their arguments are correct. Their arguments are that none of the genes that produce the proteins of the flagellum are used for any other purpose in any other organism. Well, that test has been done, and it turns out their premise is not correct, that these individual proteins and individual genes are used for other purposes in other organisms, which is the direct prediction of evolution.

Sean Pitman:  That is a complete strawman mischaracterization.  It simply isn't true.  No system, biological or otherwise, is completely novel when it comes to the building blocks used.  One would only expect a good designer to avoid reinventing the wheel each time a wheel-like structure was needed.  So, it is complete nonsense to represent the concept of irreducible complexity as being defined by a complete lack of subsystem function.  That's just not true.  Miller's biggest problem here is that the existing subsystems just aren't very close to each other when it comes to linking them up properly to realize the next steppingstone in the pathway.  The gaps between steppingstones are just too large to be crossed in one, two, or even dozens of mutational steps.

Miller: In essence, when one looks closely at the arguments that are raised of intelligent design, these are not arguments that are raised to advance science, because if they were, the advocates of intelligent design would be busy in the laboratory and they'd be producing research papers. What they're really busy doing is raising a series of arguments against evolution. The purpose of these arguments, quite frankly, is to prop open the schoolhouse door long enough to get a religiously inspired doctrine into the science classroom under the pretense that it's authentic science when it's not.

Sean Pitman:  Religion has already entered the classroom under the guise of science.  It all depends on how one defines religion.  If one defines religion as requiring blind faith in some sort of Santa Claus deity, then perhaps Miller is right.  However, if religion is defined as a search for truth that may propose some sort of powerful Creator, then Miller is wrong.  Science has its own all-powerful creator in the form of Nature.  Nature explains absolutely everything for the mainstream scientist.  It is just that Nature is said to be disinterested and non-intelligent, and this is what makes such proposals of a very powerful creative force "science" and therefore "non-religious". However, try to propose a creative force with some actual intelligence behind it, and suddenly that's religion trying to get into the classroom.  Propose a random force, that's science.  Propose an intelligent force, that's religion. It sure seems that this is what Miller's argument really boils down to here.

Seeking a designer

Q: Critics of evolution say that the search to understand design has gotten us a long way. Was that what Isaac Newton was kind of all about in a way?

Miller: I think it's a gross mischaracterization to take a scientist in the past who was a person of faith—and Newton is a good example—and say that he worked on the basis of a hypothesis of design. Well, it's true that he certainly believed in a creator, and he believed that that creator was the architect of the universe he investigated.

But Newton never proposed God as a cause in any of his theories. In other words, he didn't seek to explain the way in which the prism broke light into many different colors by saying, "Well, it happens that way because it is God's will, and I will stop investigating."

He sought a physical explanation, and his explanation was that white light is composed of many colors and what the prism does is to bend each color by a different amount. That's not a divine explanation. That doesn't use intelligent design. That's an explanation based on the principles of physics.

What Newton and other scientists did was to assume that the universe made sense because it had a designer, and then to use what we would call ordinary material scientific methods to investigate that universe. That's just what science does today. What intelligent design pretends to be is in the tradition of Newton. What intelligent design actually is, to be perfectly honest, is in the tradition of the Middle Ages, where they stop investigation by saying, "We cannot answer this mystery; it is the work of God the designer."

In short, Newton's on our team.

Sean Pitman:  Not quite.  Newton saw God as a mechanic who built the universe using certain laws that could not be explained without deliberate intent.  There is no inherent reason why the universe ended up as it has - a place that can support physical existence as well as life as we know it. The universe could easily have been set up in a way that would be very unfriendly to life and to other forms of physical atomic existence as we know them.  Beyond this, Newton wrote more about the Bible, and Biblical prophecy in particular, than he did about physics.  He strongly believed that Biblical prophecy was very good evidence of deliberate, even supernatural, design. Newton was not on your team by any stretch of the imagination.

"No idea should be inserted into the science classroom by force of law unless that idea can first win a place for itself in the scientific community."

Sean Pitman:  Oh, so mainstream popular science is the only source of ideas that should carry with it the force of law?  Really?  A great many important ideas throughout history would have been excluded from discussion and consideration in the classroom if this policy had been in place.  This is a really dangerous and stifling idea.  The classroom should be open to many new ideas.  I mean really, if the truth of evolution and other such scientific ideas is so plain to see, what is there to worry about when it comes to considering other ideas that are obviously tenuous at best?

Q: Phillip Johnson argues that determining intelligence from non-intelligence is within the purview of science, specifically forensic science. That is, forensic scientists can determine whether someone died of natural causes or was killed.

Miller: It's true that we can detect the actions of an intelligent agency scientifically. We can look for fingerprints. We can look for a purposeful arrangement of parts, as the advocates of intelligent design say. But the heart and soul of their argument, that you can detect intelligent action in biological systems, rests on a premise that the way you identify intelligent action in living systems is by showing that evolution couldn't have done it. So the heart of their argument is basically a claim that evolution can't do this, can't do that.

Sean Pitman:  Miller doesn't understand that this is the heart of forensic science as well.  As a pathologist who has been involved in forensic investigations, I can say that one of the goals is to demonstrate that the phenomenon in question is unlikely to be the result of any non-deliberate, accidental, or natural cause. 

For example, let's say that a man is found dead with a knife sticking in his back.  Obviously a case of deliberate murder - right?  Well, let's say that the scene includes a knife factory that had just blown up due to a gas leak - and that several other workers were found with knife and other shrapnel wounds at the scene.  How easy would it be to jump to the conclusion of deliberate design now? 

Forensic science must rule out non-deliberate possibilities to at least a reasonable degree of certainty before the scientist can adequately propose some sort of deliberate cause.  This is also true of other sciences like SETI or anthropology.  And, it isn't always easy to rule out natural causes.  For example, consider the work of Germaine Henri-Martin on the Fontéchevade cave in France. For a long time the scientific community believed her claims that she had discovered how the "first Frenchmen" used to live there.  That is, until Shannon McPherron and Harold Dibble proved that the cave sediment was nothing more than flood deposits using laser positioning analysis (Link). The same thing is true of pulsars.  When they were first discovered, some scientists thought that they might be a sign of extraterrestrial life.  That was until it was shown how pulsars really worked.

Obviously, one has to be very careful to rule out potential natural causes in all sciences that are looking for deliberate artifacts.

Q: Some people charge that positing material causes for everything has removed God from life, taken away meaning and purpose. How do you see it?

Miller: I think with all due respect that people like Phillip Johnson have it wrong, that they have taken the position that we can't find meaning and value and purpose to our lives except in those areas of scientific ignorance, that we have to find significance in the sort of dark recesses of what science cannot explain.

I take an entirely opposite view, that we should find our being, our value, and our meaning as human beings not in the darkness but in the bright areas of knowledge that science illuminates. I think understanding evolution gives us a fundamentally more optimistic and open view of the world than can those who have placed their faith in the claim that science isn't going to figure out these key questions.

Sean Pitman:  I agree. If the mechanism for evolution actually worked beyond the lowest levels of functional complexity, it would be a most exciting bit of information.  Such an amazingly creative force could be investigated in detail to see if it could be harnessed and sped up, via computers perhaps, to design robots and various machines, to cure all human illnesses, and to solve all kinds of mysteries - like improved space travel to far off places, or the stock market perhaps.  The problem is that the evolutionary mechanism, though it does play a limited role, is quite limited indeed.  Software programmers, engineers, and doctors are not going to be put out of a job by the evolutionary mechanism - even if it is put on steroids.

Miller: The ultimate project of the intelligent-design movement is much grander than simply trying to displace evolution. It's a project that is basically designed to bring the supernatural into science. And that kind of introduction would destroy both science and religion.

Sean Pitman:  The goal of the intelligent-design movement is to bring some balance into the classroom; to at least allow the consideration of an intelligent cause (not necessarily supernatural or even God-like) behind the very clear limitations of the proposed evolutionary mechanism.  If there are those who take the basic proposal that intelligent design must have been involved in this or that feature a step farther, to include trying to identify who the designer might be, that is only natural for the human mind - to include the scientific mind. 

Don't tell me that if SETI scientists find the radio signal they've been looking for, that holy grail of an artifactual signal, they are going to be satisfied simply knowing that there is some sort of intelligence out there.  Surely not. The search will be on to actually try to identify that intelligence, its creative mechanism, and perhaps even its motives.

Dover and beyond

Q: What was at stake in the Dover trial?

Miller: One of the things that the Dover trial brought to a head was the idea that the intelligent-design movement represented a genuine alternative, something very different from the creation-science movement that took hold in several states in the U.S. in the early 1980s. The advocates of intelligent design disavow any connection with creationism or creation science. They say their ideas are purely scientific and have nothing to do with religion.

Sean Pitman:  This is a bit of an overstatement on the part of IDists.  Everything has something to do with religion.  That is just unavoidable.  The strength of ID is that in its pure form it does exactly what SETI scientists are trying to do: identify intelligent artifact without the need to first identify the actual intelligent agent, motives, or method used to produce a particular phenomenon.  It is just that the IDists are looking at biology instead of radio signals from space.  Otherwise, there simply is no fundamental difference.

Miller: In the trial, documents regarding the formation of the intelligent-design movement, the construction of the intelligent-design textbook that was recommended for use in the Dover schools, came to light. And it was very clear that intelligent design represented nothing more than an intentional effort to relabel creation science by taking all the same old arguments and putting a new label on them.

Sean Pitman:  While this is true, poor motives among those who take on the title of IDists do not remove the fact that the basic concept of ID is still potentially valid as a true scientific investigation - just as valid as forensic science, anthropology, and even SETI.

Miller: The second thing that was very much at stake in the trial was religious freedom. Religious freedom in this country is based on two great and essential principles. One is that the government shall not interfere with the free exercise of religion, and the other one is that the government shall not endorse or establish a religion. What the Dover board was doing very clearly, by their own statements, was trying to establish an official religion for the school district of Dover and trying to get science teachers to advance the Dover board's view of that religion.

Sean Pitman:  Again, this depends upon how one defines religion.  Should the government try to establish any particular form of thought by law over that of civil interaction between its citizens?  Miller doesn't seem to have any problem with government enforcement of the mainstream scientific status quo.  In my opinion, that is getting very close to, if not actually crossing, the line of the establishment clause.

Miller: Now, the members of the Dover board are perfectly entitled to hold all these religious views and to hold these views about intelligent design and evolution and everything else. But what they're not entitled to do, under our Constitution, is to use the force and power of the state to foist those ideas on young people. That would have been a very dangerous precedent if they'd been able to get away with it.

Sean Pitman:  But it is perfectly acceptable to "foist" mainstream science ideas onto children without any option to consider alternative notions?  Every hypothesis should have an alternative that might at least be true if the standing hypothesis were ever falsified.  If the Theory of Evolution is a true scientific hypothesis, shouldn't various means of potential falsification at least be considered?  Shouldn't those who think they have found evidence that directly undermines the very pillars of evolutionary theory at least be given a minor platform? 

Preventing such a platform from even existing in the school system is what begins to turn the Theory of Evolution into something more than a scientific theory.  Such protection of a theory, and such fear of questions being raised against its fundamental aspects, begins to turn what used to be science into a kind of religious dogma - a holy, untouchable doctrine that must be defended to the death by the faithful.

Q: Was it wrong, in your view, for the Dover school board to try to get their ideas into the science classroom?

Miller: No idea should be inserted into the science classroom by force of law unless that idea can first win a place for itself in the scientific community. The real problem that happened in Dover was not intelligent design being a bad idea or anything else. The real problem was the use of a government agency to pick up an idea that science itself had rejected and to say, "We're going to put this idea in the science classroom regardless of its inability to win any following within science itself."

They did this for religious reasons. That's why they lost the case. But the general idea of not allowing science to work was at the heart of what was wrong about Dover.

Sean Pitman:  Who made mainstream scientists the priests of the doctrines taught in the classroom?  Has our society escaped mainstream religious oppression only to run into the strong arms of another form of religious oppression cloaked in the robes of "science"?  Science is supposed to be open to critique and criticism from all sides.  Schools are supposed to teach children how to think and how to weigh both good and bad ideas - not just one view of anything.  Miller's position would leave students with only a single option even to consider.  Even a supposedly bad alternative to that option should at least be discussed.  And if no alternative can even be found to discuss, where is the "science"?

Q: So is this over? Are we beyond intelligent design yet?

Miller: I'd love to think that this battle is over. It's not. The war is going to go on. Intelligent design as anything resembling a scientific theory has been shown fundamentally to be intellectually bankrupt, and it's also been shown to be an idea that is religious in character, simply cloaked in the language of science. I think that came out of the trial at Dover. The evidence that was presented, and even the testimony from the other side, showed that beyond any shadow of a doubt.

But the people behind the intelligent-design movement will do what they've always done. They will move on, they'll change terms, they'll come up with a new label, and they'll continue to fight this fight against evolution and against scientific rationalism.

One of the legacies of the Dover trial is that the term intelligent design has almost become a kind of intellectual poison, and its advocates are running around saying, "No, no, no, no. We don't want to teach intelligent design in the schools." They'd better not, especially after the Dover trial. Instead, they say, "What we want to do is we want to teach critical analysis of evolution, or we want to teach the controversy surrounding evolution."

Ironically, when you look at what they actually would like to teach, it is simply the collection of anti-evolution arguments that were always part and parcel of intelligent design in the first place. So it is simply relabeling the intelligent design critique of evolution. And this idea of teaching the controversy is built upon a false premise, that there is a controversy within the scientific community on the issue of evolution. Well, there isn't. Evolution is, in fact, mainstream science.

Sean Pitman:  And that is the very reason why mainstream science is moving away from what makes science science.  Refusing to investigate any alternative or to question mainstream dogma is not in the spirit of science; it is in the spirit of religious indoctrination and dogmatic oppression.

Q: Critics of Darwinism often say that evolution is a theory in crisis. How do you see it?

Miller: Evolutionary theory has never been more active in terms of an area of inquiry and an area of scholarship than it is right now. Evolution as an idea has never been more useful than it is right now, because we use evolution every day to interpret genomes, to develop drugs, to prolong the useful lifetime of antibiotics, to grow genetically modified crops—all these things have components of evolution in them.

Sean Pitman:  That's true.  They all have components of evolution in them, but none of these components go beyond the lowest levels of functional complexity.  Antibiotic resistance, in particular, is almost always the result of the disruption or destruction of some pre-existing functional interaction between the antibiotic and its target.  Since there are so many ways to destroy or at least disrupt a functional interaction, such evolution is almost always easy and rapid for even a moderately sized population of bacteria.  Such examples, though quite common, are deceptive in that they represent some of the lowest forms of evolution possible.
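The asymmetry claimed above - that it is far easier for a random mutation to break an existing molecular interaction than to create one - can be sketched with a deliberately simplified toy model. The following Python script is purely illustrative and is not from the original text: it assumes an "interaction" exists only when a short sequence exactly matches a target (a much stricter condition than real antibiotic-target binding), and all sequence lengths and alphabet choices are arbitrary assumptions.

```python
import random

random.seed(42)

ALPHABET = "ACGT"


def binds(site: str, target: str) -> bool:
    # Toy "functional interaction": the antibiotic recognizes its
    # target site only on an exact sequence match (an assumption of
    # this sketch, far stricter than real biochemistry).
    return site == target


def mutate(seq: str) -> str:
    # Apply one random point mutation (always changes one character).
    i = random.randrange(len(seq))
    new_char = random.choice([c for c in ALPHABET if c != seq[i]])
    return seq[:i] + new_char + seq[i + 1:]


target = "ACGTACGTAC"
trials = 10_000

# Case 1: start from a functional site and count how often a single
# random mutation destroys the interaction (toy "resistance").
broken = sum(1 for _ in range(trials) if not binds(mutate(target), target))
print(f"mutations that break binding:  {broken}/{trials}")
# → mutations that break binding:  10000/10000

# Case 2: start from a random site and count how often a single
# random mutation *creates* the interaction from scratch.
created = 0
for _ in range(trials):
    start = "".join(random.choice(ALPHABET) for _ in range(len(target)))
    if binds(mutate(start), target):
        created += 1
print(f"mutations that create binding: {created}/{trials}")
```

Under these toy assumptions every mutation of a functional site breaks binding, while mutations essentially never assemble the match from a random starting sequence - which is the direction of the asymmetry the paragraph above asserts, though the magnitude here is an artifact of the exact-match assumption.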

If you look at the major scientific societies in the United States and around the world, not a single scientific society has made a statement or claim in support of intelligent design, in support of scientific creationism. In fact, quite the contrary. Every major scientific organization that I'm aware of that has taken a position on this issue has taken their position four-square in favor of evolution. So the notion that evolution is in some sort of crisis is just not true.

Sean Pitman:  Where the Theory of Evolution is in crisis is in the growing number of well-educated scientists who are beginning to publicly question its most basic tenets.  Even a few Nobel Laureates are now members of this growing group.  Perhaps someday soon this number will reach a critical threshold at which the mainstream notions of evolution will suddenly collapse?  Wishful thinking, of course - but it is not without its growing number of supporters. 
