Evolution & Creation: Part III – Specified Complexity


In the overwhelming ocean of information available online, I still find the best way to stay afloat is with a book.  For the surprising issue of Intelligent Design, I’ve decided to focus on one particular book by the proponent of Intelligent Design I find most worthy of scrutiny: William A. Dembski’s The Design Revolution: Answering the Toughest Questions about Intelligent Design.

The fundamental claim is this:

There are natural systems that cannot be adequately explained in terms of undirected natural forces and that exhibit features which in any other circumstance we would attribute to intelligence. (p. 27)

Put as succinctly as possible: if true, this would represent as great a challenge to institutional science as Darwin posed to traditional religion.

The question, then, is whether or not there are natural systems that cannot be explained in such terms.  Or, as Dembski writes, whether the formula “time plus chance plus matter entails life” is truly tenable.

While I’ll engage this work with as open a mind as possible, my position at the outset is this: though Theosophy would agree that natural forces are directed, such direction may not be detectable by “probability theory, computer science, molecular biology, the philosophy of science and the concept of information”, the combination of which Dembski believes will justify an intelligent designer.  Rather, Theosophy holds that natural forces are subject to natural law, that higher forces working on higher planes are the ultimate source of that law, and that access to those planes is a spiritual act inseparable from personal development.

But first, let’s look at a key concept in Dembski’s work: specified complexity.

Dembski writes:

Specified complexity, as I develop it, incorporates five main ingredients:

•     a probabilistic version of complexity applicable to events
•     conditionally independent patterns
•     probabilistic resources, which come in two forms: replicational and specificational
•     a specificational version of complexity applicable to patterns
•     a universal probability bound

For the full quote and a brief description of each of these, click here.

Essentially, these criteria amount to an attempt to determine objectively whether an event could have happened by chance.  If an event is statistically improbable and yet admits a short description, Dembski argues, it implies intelligent design.

Dembski illustrates with the difference between a single letter and a sonnet:

A single letter of the alphabet is specified without being complex. A long sentence of random letters is complex without being specified. A Shakespearean sonnet is both complex and specified. [William A. Dembski (1999). Intelligent Design, p. 47.]

This evokes the classic Infinite Monkey Theorem, which postulates that a monkey randomly hitting keys on a typewriter will write the complete works of Shakespeare given enough time.  Strictly speaking, it’s possible, but we intuitively balk at the idea.  In doing so, we’re actually tasting the fifth of the ingredients above: a universal probability bound.  Basically, there hasn’t been enough time in the universe for a monkey to randomly type out a single sonnet, even one of the bad ones.
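To put a rough number on that intuition, here is a back-of-the-envelope sketch.  The 27-key typewriter (26 letters plus a space bar) and the roughly 600-character sonnet are my own assumptions for illustration; the universal probability bound of 1 in 10^150 is Dembski’s figure.

```python
import math

ALPHABET = 27        # assumed: 26 letters plus a space bar
SONNET_LENGTH = 600  # assumed: ~14 lines of ~40 characters each

# (1/27)**600 underflows an ordinary float, so work in log10 space.
log10_p = -SONNET_LENGTH * math.log10(ALPHABET)
print(f"P(random sonnet) ~ 10^{log10_p:.0f}")

# Dembski's universal probability bound is 1 in 10^150.
print("Below the universal probability bound:", log10_p < -150)
```

On these assumptions the chance of a specific sonnet appearing by random typing is on the order of 10^-859, hundreds of orders of magnitude below the bound.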

Another accessible and oft-invoked illustration is the random coin toss:

Consider the following two sequences of ten coin tosses: HHHHHHHHHH and HHTHTTTHTH. Which of these would you be more inclined to attribute to chance? Both sequences have the same probability, approximately 1 in 1,000. Nevertheless, the pattern that specifies the first sequence is much simpler than the second. For the first sequence the pattern can be specified with the simple statement “ten heads in a row.” For the second sequence, on the other hand, specifying the pattern requires a considerably longer statement, for instance, “two heads, then a tail, then a head, then three tails, then heads followed by tails and heads.” Think of specificational complexity (not to be confused with specified complexity) as minimum description length.

For something to exhibit specified complexity it must have low specificational complexity (as with the sequence HHHHHHHHHH, consisting of ten heads in a row) but high probabilistic complexity (i.e., its probability must be small). It’s this combination of low specificational complexity (a pattern easy to describe in relatively short order) and high probabilistic complexity (something highly unlikely) that makes specified complexity such an effective triangulator of intelligence. [William A. Dembski, The Design Revolution: Answering the Toughest Questions About Intelligent Design, 81.]
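Dembski’s gloss of specificational complexity as minimum description length suggests a crude computational sketch.  This is my own illustration, not a method from the book: zlib’s compressed size loosely stands in for how short a description a sequence admits.  A run of identical tosses compresses far better than a random-looking run of the same length, even though both sequences are equally improbable.

```python
import random
import zlib

def description_length(seq: str) -> int:
    # Crude proxy for minimum description length: zlib-compressed size in bytes.
    return len(zlib.compress(seq.encode()))

uniform = "H" * 100                                    # "a hundred heads in a row"
rng = random.Random(0)                                 # fixed seed for reproducibility
mixed = "".join(rng.choice("HT") for _ in range(100))  # admits no short description

# Probabilistic complexity is identical: every specific 100-toss sequence
# of a fair coin has probability (1/2)**100.
assert 0.5 ** 100 == 2 ** -100

# Specificational complexity differs: the uniform run compresses far better.
print(description_length(uniform), "<", description_length(mixed))
```

Real compressors carry header overhead, so this only works as a rough proxy on longer strings; the point is the direction of the difference, not the exact byte counts.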

Every time you toss a fair coin there is an equal chance it will land heads up or tails up, and that probability doesn’t change as you continue tossing.  You’re just as likely to get H fifty times in a row as any particular random-looking string of H and T.  But while the latter is expected, the former would make you suspicious, if not crazy.  Which, oddly enough, brings us back to Shakespeare.

In the opening to Tom Stoppard’s play ‘Rosencrantz & Guildenstern Are Dead’, the latter eponymous character grows uneasy as his tossed coin lands heads up 89 times in a row.  He contemplates the possibilities:

List of possible explanations. One: I’m willing it. Inside where nothing shows, I am the essence of a man spinning double-headed coins, and betting against himself in private atonement for an unremembered past. Two: time has stopped dead, and the single experience of one coin being spun once has been repeated ninety times. On the whole, doubtful. Three: divine intervention. Four: a spectacular vindication of the principle that each individual coin spun individually is as likely to come down heads as tails and therefore should cause no surprise each individual time it does.

Strictly speaking, the final explanation holds.  Dembski is effectively arguing the third.

If we can determine that an event, such as the formation of a protein (an incredibly specific and complex chain of amino acids), is exceedingly improbable, Dembski argues, then we might be witnessing the work of a designer.  It’s worth noting here that nothing in this argument contradicts natural law.  Everything that happens can happen; it’s just really, really unlikely.  It’s this unlikelihood that provides the clearing for God’s work.

There are counterarguments to the above, and I personally detect one significant flaw that I haven’t seen addressed elsewhere; all of these will be treated in the next post.

For the moment, let’s focus on the improbability of life in all its boggling specificity and complexity.

To quote Guildenstern once more:

Syllogism the second: One, probability is a factor which operates within natural forces.  Two, probability is not operating as a factor.  Three, we are now within un-, sub-, or supernatural forces.  Discuss.  Not too heatedly.