Almost everyone landing in The Skeptical Zone will have heard that intelligent design (ID) proponents William A. Dembski and Winston Ewert have published a second edition of The Design Inference: Eliminating Chance Through Small Probabilities (TDI). Some parts of the book are clearly responses to posts I have made here. But there is no mention of those posts. Why would that be?
TDI (2e) is concerned primarily with specified complexity, defined quite differently than in Dembski’s original TDI (1998) and No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence (2002). The quantity in its present form is a generalization of algorithmic specified complexity, which is in turn a specialization of Dembski’s revised notion of specified complexity in “Specification: The pattern that signifies intelligence” (2005). That is to say, the concept of specified complexity in TDI2 derives not from the two earlier books, but from the major revision of Dembski’s thinking in the 2005 article. Dembski neglected to explain that the revision had left him without an argument that specified complexity is conserved in natural (free of intelligent intervention) processes, an essential component of the logic of design inference.
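For readers who want the details, here is the 2005 quantity as best I can reconstruct it from the article (the notation follows my recollection: T is the event, H is the chance hypothesis, and φ_S(T) is, roughly, the number of patterns that a semiotic agent S can describe at least as simply as T):

$$\chi = -\log_2\!\bigl[10^{120} \cdot \varphi_S(T) \cdot P(T \mid H)\bigr].$$

The factor of $10^{120}$ is Dembski’s bound on the number of events in the history of the observable universe. The point to carry forward is that χ is large only when $P(T \mid H)$ is small.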
Dembski subsequently teamed with Baylor University professor Robert J. Marks II, and managed to transfer many of his earlier claims about specified complexity to a newly defined quantity, active information; see “Life’s Conservation Law: Why Darwinian Evolution Cannot Create Biological Information” (2010; preprint 2008). The switcheroo was incredibly brazen, given that active information is roughly the opposite of specified complexity. That is, Dembski had begun by attempting to identify conditions under which low-probability events should be attributed to intelligent design rather than chance. He and Marks instead focused on events that are more probable than they “naturally ought to be,” and argued that the processes giving rise to such events are intelligently designed. Tersely put, the specified complexity of an event, as defined from 2005 to the present, is inversely related to the probability of the event, while the active information of a process is directly related to the probability that a prespecified event occurs in that process.
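To make the contrast concrete, here are the two quantities in the simplest forms I know, with the published definitions somewhat condensed (P is the chance hypothesis, C is the context, K(x | C) is the conditional Kolmogorov complexity of x given C, p is the probability of the prespecified event under the baseline chance hypothesis, and q is its probability under the process being credited with design):

$$\mathrm{ASC}(x) = -\log_2 P(x) - K(x \mid C), \qquad I_+ = \log_2 \frac{q}{p}.$$

Algorithmic specified complexity grows as the probability $P(x)$ shrinks, while active information grows as the probability $q$ grows. One quantity rewards improbability, and the other rewards probability.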
Ewert addressed algorithmic specified complexity in his 2013 dissertation, written under the direction of Marks.
Starting in 2014, Ewert, Dembski, and Marks held that algorithmic specified complexity is a measure of meaningful information. I demolished that notion in “Evo-Info 4: Non-conservation of algorithmic specified complexity” (2018), and remarked:
This embarrassment of Marks et al. is ultimately their own doing, not mine. It is they who buried the true story of Dembski’s (2005) redevelopment of specified complexity in terms of statistical hypothesis testing, and replaced it with a fanciful account of specified complexity as a measure of meaningful information. It is they who neglected to report that their theorem [addressing algorithmic specified complexity] has nothing to do with meaning, and everything to do with hypothesis testing. It is they who sought out examples to support, rather than to refute, their claims about meaningful information.
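For readers unfamiliar with the result: as I recall it, the theorem in question is a tail bound, stating that under the chance hypothesis P (with X distributed according to P, and α a threshold in bits), high algorithmic specified complexity is improbable:

$$\Pr_{X \sim P}\bigl[\mathrm{ASC}(X) \ge \alpha\bigr] \le 2^{-\alpha}.$$

That is precisely the form of guarantee one wants when setting the significance level of a statistical test. Nothing in the statement, or in its proof, makes any appeal to meaning.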
I suspect that Dembski and Ewert actually were embarrassed. They now make a big show of recounting the historical development of specified complexity, in interviews as well as in their book, but somehow fail to mention that they regarded it as a measure of meaningful information until recently.