(5th April, 2013: stickying this, for a bit, as it has come up. Mung might like to comment).
Time to look at this in detail, I think 🙂
Dembski’s definitive paper to date on CSI is Specification: The Pattern That Signifies Intelligence. It is very clearly written and not very mathy, but, by the same token, it is a paper in which it is easy (IMO) to see where he goes wrong.
Here is the abstract:
ABSTRACT: Specification denotes the type of pattern that highly improbable events must exhibit before one is entitled to attribute them to intelligence. This paper analyzes the concept of specification and shows how it applies to design detection (i.e., the detection of intelligence on the basis of circumstantial evidence). Always in the background throughout this discussion is the fundamental question of Intelligent Design (ID): Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause? This paper reviews, clarifies, and extends previous work on specification in my books The Design Inference and No Free Lunch.
It’s in eight sections, and the argument in brief goes like this:
We cannot conclude, just because a pattern is one of a vast number of possible patterns, that it was Designed; to conclude Design, it has to be one of a small subset of those patterns that conforms to some kind of specification. Fisher proposed that if a pattern of data was one that fell in the tail of a probability distribution (aka Probability Density Function, PDF) of patterns of data under some null hypothesis, we could reject the null, but he didn’t give a clear rationale for the cut-off point. Dembski suggests that if we have sequence data for which a very large number of sequences would be possible under a non-Design hypothesis, and those sequences are binned according to their “compressibility” (ease of description), then they will form a probability distribution with a tail consisting of a small subset of easy-to-describe patterns. Under the non-Design null hypothesis, these will happen rarely. If, therefore, the number of opportunities for them to happen is low enough, we can reject non-Design. And if the number of opportunities required to give them a sporting chance of happening even once exceeds the number of events that have occurred in the history of the universe, we can reject non-Design and confidently infer Design.
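To make the “binning” idea concrete, here is a minimal Python sketch – my own illustration of the idea, not a procedure from Dembski’s paper – using zlib-compressed length as a crude stand-in for ease of description:

```python
# Minimal sketch (my illustration, not Dembski's procedure): sample random
# binary sequences under a uniform "chance" null, bin them by an approximate
# compressibility measure, and note that highly compressible sequences sit
# in a sparse tail of the resulting distribution.
import random
import zlib
from collections import Counter

def compressed_length(bits: str) -> int:
    """Crude proxy for descriptive complexity: zlib-compressed byte length."""
    return len(zlib.compress(bits.encode("ascii")))

random.seed(1)
N, L = 10_000, 256                 # sequences sampled; bits per sequence

histogram = Counter()
for _ in range(N):
    seq = "".join(random.choice("01") for _ in range(L))
    histogram[compressed_length(seq)] += 1

print(sorted(histogram.items()))   # random sequences cluster at the long end
print(compressed_length("01" * (L // 2)))  # repetitive: far into the short tail
```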
If any ID proponents think I have mischaracterised Dembski’s argument, I welcome your comments. But, assuming I have this broadly right, here are the problems as I see them:
- He does not attempt to characterise the probability distribution of his compressible sequences under his “non-Design” null, and simply assumes that only Design processes could reliably result in highly compressible patterns that would be improbable under a process that assigned each element in the sequence independently of any other – he does not attempt to argue why this should be the case, and it demonstrably is not.
- He does not show how compressibility should be measured (in fact Hazen et al., as discussed here, IMO do a much better job by substituting functional efficiency for compressibility, but their paper does not help Dembski’s case).
- He ignores the fact that the very easiest-to-describe sequences (e.g. ranked order sequences) are readily produced by non-Design sorting processes, yet can be highly “complex” (i.e. one of a vast number of possible sorted and unsorted sequences), e.g. the graded pebbles of Chesil Beach.
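Here is a toy sketch of that last point (my own illustration, not from any paper): a blind, local physical bias – say, smaller pebbles tending to get carried past larger ones – sorts a random sequence with no design and no target:

```python
# Toy model of blind sorting (my own sketch): random "wave" events with a
# purely local bias eventually produce a fully sorted -- i.e. maximally
# easy-to-describe -- sequence, with no foresight or target involved.
import random

random.seed(2)
pebbles = [random.randint(1, 100) for _ in range(50)]  # random pebble sizes

for _ in range(200_000):                 # repeated blind "wave" events
    i = random.randrange(len(pebbles) - 1)
    if pebbles[i] > pebbles[i + 1]:      # local bias only; no global plan
        pebbles[i], pebbles[i + 1] = pebbles[i + 1], pebbles[i]

print(pebbles == sorted(pebbles))        # True: a maximally compressible outcome
```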
Now, there may be various other ID papers proposing some kind of alternative to CSI that tackle some of these problems, but my point is that these three objections are fatal flaws in Dembski’s concept, and that therefore any improvement has to tackle all three. But it would be interesting to see if there is any disagreement about whether I have his argument right, and what the flaws are.
Dembski is already off on the wrong foot in the first three sections if he thinks he is going to apply this to the physics and chemistry of biology.
First of all, uniform sampling distributions don’t apply in the world of chemistry, physics, and biology. Secondly, it is not appropriate to apply “specification” to what falls out of the evolution of a complex system of condensing soft matter. One does that only retrospectively, as though what occurred was the target from the beginning.
Suppose we make a heated, saturated solution of salt water. We let it cool down and observe regular crystalline patterns emerge. We repeatedly heat it up and cool it down. We get the same kinds of crystalline arrays in which molecules come together to produce the same cubic crystals.
What are the odds, according to Dembski, that those atoms and molecules knocking about randomly in a solution would reproduce the same crystals repeatedly?
Isn’t this like flipping 10^23 coins multiple times and having them come up in exactly the same pattern every time?
ID/creationists brush crystals off as not relevant because they apparently think crystals are different in principle from all other forms of condensed matter by having some kind of “built-in regularity” in their behaviors. Somewhere between crystals and other forms of condensed matter, they seem to think, the laws of physics stop operating and it is all “spontaneous molecular chaos” down there.
I believe it is not a coincidence that DNA and RNA have crystalline forms.
The way I see it, a design (noun) is simply the result of a design process (verb). A specification instructs the design process so as to produce a particular result. And so it is not conceptually valid to deduce design from examining an object except in light of knowledge of the history and process that produced it.
One way of measuring the creativity of children is asking them how many different things they can think of to do with a brick. Some children can think up quite a few, ranging from doorstops to artist’s pigment (after grinding into powder). So which of these many uses is “the” specification? If there are as many “specifications” as there are uses bricks have ever been put to, clearly these are being assigned post facto. The bricks themselves could be perfectly natural rocks.
What Dembski is doing here is pretending that one can deduce the specification from the nature of the object, knowing nothing of its history or context. But this is not only impossible, the claim strikes me as deceitful. Instead, Dembski is presuming a design process, “filling in” his desired purpose, context, and history. From these imported assumptions, he constructs a “specification” which, by golly, just happens to match. And THEN he turns around and says the assumptions from which he constructed his specification are instead conclusions derived from examining the object in isolation. At best, this is cheating.
A specification is not something an object HAS; it’s something the design process uses, and that the designed object approximates. To show design, one must show the design process, Dembski’s efforts to evade this notwithstanding.
I have always been less critical of the general notion of CSI than have most critics of Dembski. In essence, if we take fitness as the scale on which to measure, CSI is just a fitness value too high to reasonably come about (even once in the history of the Universe) by pure mutation. I think organisms in general meet that criterion and this is not controversial — could pure mutation (without natural selection) lead to a species as well adapted? Of course not.
Posing CSI in terms of a fitness scale also avoids all the squabbling about which among us really understands information theory.
Where the argument goes wrong is not the CSI part but in saying one is detecting Design. In fact, one is successfully detecting either Design or natural selection or both. The only reason Dembski believes it is detection of Design is that he thinks his Law of Conservation of Complex Specified Information rules out that the amount of adaptation could be due to natural selection. I have argued (invoking work by others, particularly Elsberry and Shallit 2003, plus some new arguments of my own) that Dembski’s LCCSI cannot do the job, so CSI can be due to natural selection, as that has not been ruled out.
I agree, Joe. It seems to me that CSI is a potentially useful concept (and in the paper by Hazen et al, actually works). I think it tells us something interesting about any pattern that displays it – in fact, I think it tells us that it was “intelligently designed” if we use Dembski’s own definition of “intelligence” as something with “the power and facility to choose between options”, which of course is what natural selection does. Moreover, I’d say that natural selection is a good description of how our own intelligence works, with excitatory/inhibitory connections and Hebbian learning serving the role of differential reproduction.
It’s not CSI I object to, it’s his assumption that something with CSI (and I’d be happy with a much less stringent alpha than his UPB) must not only be “intelligently” designed (by his definition) but intentionally designed. He just assumes this must be the case. He does not even present an argument for it.
It’s the elephant in the room of ID, but, as far as I can see, dismissed by the assumption that “chance” means both “non-intended” and “equiprobable”, and that if something is not “equiprobable” it must be “intended”.
Umm-
1- The paper pertains to specified complexity, not CSI- yes CSI is a special case of SC but not all SC = CSI
2- CSI and SC pertain to origins, and natural selection does not
3- CSI is not a fitness measure as biological fitness is an after-the-fact assessment and changes with the environment
4- To refute what Dembski said all one has to do is actually step up and demonstrate that blind and undirected processes can do it
5- It is not up to IDists to demonstrate what blind and undirected processes cannot do, it is up to you to demonstrate what they can do
6- There may be no law that prevents life from arising from non-living matter via blind and undirected processes, but there are also no laws that allow for it to occur
7- Natural selection is a result and doesn’t do anything- no selection is taking place, that is a fallacy
Joe, CSI stands for Complex Specified Information. This paper is about Information that is Complex and Specified. How is that not Complex Specified Information?
What kind of “Specified Complexity” is not “Complex Specified Information”?
In any case, in the UD FAQ, it is stated (FAQ #27):
CSI presumably pertains to more than just origins, no? And in fact most OOL theories invoke natural selection at a very early stage – as soon as self-replication of the simplest units begins.
Agreed, CSI is not a fitness measure.
Done, many times.
It is certainly up to IDists to demonstrate what blind and undirected processes cannot do if they want to claim that blind and undirected processes cannot do it, which is the core ID claim. The burden of demonstration lies on the claimant. We have shown over and over again that CSI, by Dembski’s definition, can be produced by a system in which self-replicators self-replicate with heritable variation in reproductive success. And if all IDists are claiming is that OOL requires an ID, then why the attacks on Darwin?
“Laws” in science, are simply heuristics, discovered by scientists to be generally true, sometimes universally true. As you say, we know of no laws that suggest that OOL is impossible without a designer. The fact that we do not yet have a posited mechanism (“laws that allow for it”) is not an argument that there is no mechanism.
It’s a metaphor, not a fallacy, and I agree, it’s a misleading one. I avoid it where I can. What it is is heritable variance in reproductive success. If there is heritable variance in reproductive success, then the most successful variants will become most prevalent. That’s not nothing. It’s rather important – it means that each generation is a biased sampling of the previous generation’s genotypes. “Selection” is a reasonable metaphor for “biased sampling”, but “biased sampling” (biased in favour of variants that are better fitted for the current environment) is probably less easy to misunderstand.
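To illustrate what I mean by biased sampling, here is a minimal sketch (my own toy model, with made-up fitness numbers): each generation is drawn from the previous one in proportion to reproductive success, and the better-fitted variant tends to become prevalent with no foresight involved:

```python
# Minimal "biased sampling" sketch (my own toy model, made-up numbers): each
# generation is sampled from the previous one, weighted by reproductive
# success, so the better-fitted variant usually becomes prevalent.
import random

random.seed(3)
fitness = {"A": 1.0, "B": 1.1}     # B has a modest reproductive edge
population = ["A"] * 90 + ["B"] * 10

for generation in range(300):
    weights = [fitness[g] for g in population]
    population = random.choices(population, weights=weights, k=len(population))

# B usually sweeps to fixation; sometimes drift eliminates it early on.
print(population.count("B"), "of", len(population))
```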
Please reference the part of the paper that says “complex specified information”- I looked, and it isn’t to be found.
What kind of SC is not CSI? The kind that does not deal with bits of information- the kind we see in buildings and the kind we see in machines.
CSI pertains to origins- read “No Free Lunch”- and yes people try to invoke natural selection but NS doesn’t do anything.
And please present the peer-reviewed paper that demonstrates blind and undirected processes can produce SC or CSI. Ya see if you had that then there wouldn’t be any ID- so I say you are fibbing when you said it has been done, many times- please provide a reference to support your claim, thanks.
Can’t prove a negative Liz- and your position is making the claim blind and undirected processes can do it- so you need to support that claim.
Unfortunately you do not appear to understand Dembski. What self-replicators, and by what evidence can they “evolve” into something else?
Whatever they are- and it isn’t always so- and it still doesn’t do anything. It does not add functional information. It does not construct new, useful and functional multi-protein configurations.
You are correct: it does not appear. However, the subject of the paper is information that is complex and specified, and the equation given is the one that Dembski has elsewhere given for CSI.
So why does Dembski, in that paper, calculate specified complexity in bits? Joe, do read the paper!
Please read my comment re NS above.
Read the peer-reviewed paper by Hazen et al that was the subject of this thread. Also virtually any paper by Lenski, and any paper (or textbook) on genetic algorithms.
No. “My position” is not making that claim. That’s what I keep pointing out. Non-IDists are not claiming that there was no ID. IDists are claiming that there must have been. Your “position” is the one making the positive claim about an ID. We are not saying: we have demonstrated that there was no ID. You are saying that you have. So the onus is on IDists to support the claim that they are making.
Joe, you don’t even seem to have read that paper – so I think I understand it a little better than you do!
Did you read Joe Felsenstein’s post?
Joe:
Dembski disagrees:
But what does he know?
Genetic algorithms have nothing to do with blind and undirected processes- Hazen never demonstrated blind and undirected processes can produce anything and Lenski sure as heck didn’t demonstrate such a thing.
Also, Liz, I will go with Mayr when he says that teleology is not allowed in biology, and with Dawkins, who said that if living organisms were designed then we would be looking at a totally different type of biology.
And yes I read Joe F’s post- it doesn’t make any sense, doesn’t have anything to do with functional information and still doesn’t have any real-world support.
BTW from your post it appears that you are not in agreement with Darwin, Dawkins, Mayr or the current theory of evolution.
Strange that neither “CSI” nor “complex specified information” is even included in the paper…
Can’t prove a negative Liz- and your position is making the claim blind and undirected processes can do it- so you need to support that claim.
The theory of evolution is making that claim. Chemical evolution is making that claim.
I can’t improve on most of what Elizabeth says in response to Joe G’s notions. His ideas are not Dembski’s, that is clear.
However, let me disagree with one item in that response. Joe G had said:
and Elizabeth responded:
It is not necessarily a fitness measure, but it can be one. Dembski in No Free Lunch describes CSI in terms of a rejection region, usually on some sort of scale. He then says on page 148:
so he explicitly allows fitness (or components of it) to be the scale. And so I have used fitness, in an attempt to get away from the sterile arguments about who understands information theory better than who. It also connects it to a fundamental issue: whether natural selection can be the reason we see so much adaptation.
Gee, if you read the book then you know CSI pertains to origins…
If pure mutation (infinite monkey theory) cannot gain CSI, then NS isn’t going to help any. All NS does is fix some things that have already been generated in a population and subtract information from the genome; it doesn’t – in fact, cannot – create any new CSI. It’s not a mutative process. Dembski isn’t concerned about fixing the CSI in a population; he’s concerned about it being generated at all, once, anywhere, given any length of time, via any amount of collected non-ID processes.
IMO, you just admitted that ID theory is correct, even if you intellectually deny it by (mistakenly) claiming that NS bridges the gap from what non-ID forces can do to the generation of CSI.
I also disagree with one item in Elizabeth’s previous response to my first comment in this thread. She said:
The argument that he does present is against the possibility that natural selection can account for almost any of the adaptations that we see. He then assumes that this makes the case for (Elizabeth would say “intentional”) Design. He is not just assuming that adaptation must be intentional Design, he is using his conservation theorem (LCCSI) and arguing that that theorem rules out the possibility that natural selection accounts for the adaptations.
Unfortunately for him, the theorem is both not proven (as Elsberry and Shallit 2003 showed) and posed in a way that makes it inapplicable to ruling out natural selection (as I showed in my 2007 article).
This is patently wrong. A random process with infinite computational resources will be able to find any answer, no matter how complicated. Monkeys banging on typewriters for an infinite amount of time will produce all of Shakespeare’s works.
The question is about processes lasting a finite amount of time. And in this case a purely random process can’t accomplish much. However, a random process aided by cumulative selection does very well.
You’re being totally, completely wrong about it, William. You need to learn some basics. I am sure you have heard of Dawkins’s Weasel program. Read its description and see why cumulative selection is important. Then come back and argue more sensible things.
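For anyone who hasn’t seen it, here is a minimal Weasel-style sketch (my reconstruction of the idea, not Dawkins’s original code). Keeping the best of each generation’s mutants reaches the target in a few hundred generations, where single-step random typing would need astronomically many attempts:

```python
# Minimal Weasel-style sketch (my reconstruction, not Dawkins's original
# code): cumulative selection -- keep the best variant each generation --
# finds the target phrase quickly, unlike single-step random typing.
import random

random.seed(4)
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s: str) -> int:
    return sum(a == b for a, b in zip(s, TARGET))  # characters matching target

def mutate(s: str, rate: float = 0.05) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generation = 0
while parent != TARGET:
    offspring = [mutate(parent) for _ in range(100)]
    parent = max(offspring + [parent], key=score)  # cumulative selection step
    generation += 1

print(generation, parent)
```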
From a purely pragmatic point of view, what good is CSI? No one can use it, no one can calculate it.
Try this variation of the Weasel program. It has no target, but still manages to make words. There are many variations on this theme.
https://itatsi.com
It should be useful to note that the best biochemist in the ID camp, Behe, does not make this silly argument.
Yes, but they don’t have an infinite amount of time in our universe, and that’s the point. If you cannot expect a chance result within the Universal Probability Bound, then you cannot appeal to chance to explain the result.
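For reference, Dembski derives that bound in The Design Inference by multiplying estimates of the universe’s probabilistic resources (my summary of his numbers):

```latex
% Dembski's Universal Probability Bound: particles in the observable universe,
% times maximum state transitions per second (inverse Planck time), times an
% upper bound on the age of the universe in seconds.
\[
  10^{80} \times 10^{45} \times 10^{25} = 10^{150},
  \qquad
  \log_2 10^{150} \approx 498 \approx 500 \ \text{bits}
\]
```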
If cumulative selection as a selection algorithm necessarily aided in the creation of combinatorial CSI, you’d have a point, but the natural selection process is at least as likely to select against increased CSI as for it, so in terms of generating increased CSI as a more fit algorithm towards that (descriptive) goal, NS is just another chance resource – just another monkey banging at the keys that might blindly write a Shakespearean play.
Unless an algorithm is added that is, in some way, targeting Shakespeare, it ain’t gonna get done. NS is not an algorithm that targets our metaphorical Shakespeare.
That’s an interesting assertion. Care to explain why this is so? Or are you going to punt as you have done over and over again because you can do logic, but you can’t do specifics?
If CSI is related to function, and increased function would manifest in increased survivability, ‘CSI’ should be actively selected for?
This is what happens when you pretend you can measure something. You wind up equivocating about whether CSI refers to string length or to something more difficult to quantify, such as fitness.
When someone at UD argued there are only 2000 protein domains, and they are all isolated and irreducible, my thought was that they could easily be coded as 00000000000 through 11111111111. The actual implementation requires a reader, and the reader could easily just look them up in a table.
Of course I believe that coding sequences are implemented in chemistry, not as information, but people who reify the information metaphor never seem to follow this.
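As a toy sketch of that lookup-table point (the domain names are hypothetical, my own illustration): with only ~2000 domains, an 11-bit index is all the “information” a reader needs, since 2^11 = 2048:

```python
# Toy sketch of the lookup-table point (hypothetical names, my own
# illustration): ~2000 protein domains fit in an 11-bit index, since
# 2**11 = 2048, and a "reader" just looks each code up in a table.
DOMAIN_TABLE = {format(i, "011b"): f"domain_{i}" for i in range(2000)}

print(DOMAIN_TABLE["00000000000"])  # -> domain_0
print(DOMAIN_TABLE["11111001111"])  # -> domain_1999
```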
Simple logic. CSI is not a commodity that intrinsically increases fitness in general terms of survivability, fecundity, or progeny success. It might happen to do so, but not **because** it is an increase in CSI, only because the increase in CSI happens to be “more fit” in those terms. It could as easily, or even more easily, be less fit in those terms.
Generally speaking, the more complex and specified a thing is, the more prone to failure it is (compare a lever or a fulcrum to a computer). The more complex, interdependent and numerous the necessary parts, and the more refined and exact those parts, the more likely the thing is to become defective and fail, because the parts rely on precision of form and process to do whatever they are doing. Relatively minor variations in the mechanism, the infrastructure around it, and the environment it is in can cause malfunction and failure.
Also, the more CSI is involved, the more supportive infrastructure, maintenance, repair, and consumable energy is required by the organism, which increases the potential for non-fitness in several categories. Logically, NS probably more often (over the long term) selects against increased CSI than for it, and those selections of increased CSI that do occur necessarily lead, down the road, to more potential for failure. IMO, NS would most often select for decreased CSI because less interdependent complexity is easier to maintain, less expensive in terms of energy to keep alive, and more likely to survive – generally speaking.
What this means, logically speaking, is that NS is most likely more actively selecting against increased CSI than for it, making it useless as an explanation for increased CSI.
Interesting thought. That might explain why microbes are overall the dominant life form on earth, easily surpassing plants and vertebrates in both numbers and physical mass.
Hmmm – Seems like some IDists associate CSI with function and function is:
“Functions are biological features which do things for the organism. The purpose of intelligent design theory is to look at various functions and ask if they bear the marks of something which has been designed by an intelligence.
So, in other words, when we see in the biological structure-producing DNA machinery the ability to create some structures, and not others, which perform some specific action and not some other specific action, we can legitimately say that we have complex genetic information. When we specify this information as necessary for some function given a pre-existing pattern, then we can say it was designed. This is called “complex specified information” or “CSI”. ”
http://www.ideacenter.org/contentmgr/showdetails.php/id/832
William, you need to decide for yourself whether CSI is positively correlated with biological fitness or not. If it is then Darwinian evolution selects for higher CSI by selecting more fit organisms. If it isn’t then CSI is a useless concept in biology.
Don’t bog the philosophers down with specifics, unless it pertains to tiny dancing angels!
Only if the selective pressure against increased CSI (assuming there is such a thing) is stronger than all other selective pressures ranged against the organism.
In the real world the relative complexity of an organism is but one of a practically infinite number of factors that natural selection can examine. And if there’s a selective benefit in being more complex, even extravagantly so, then that’s just another morphological avenue to explore.
Exactly. If evolution can select both simple and complex organisms then complexity is not a particularly useful yardstick in biology.
William J. Murray,
Which has more CSI, a mouse, a turtle, a butterfly, an octopus, a human, a whale shark, or an elephant?
OK? CSI pertains to origins- not that this post will see the light of day….
olegt,
Except evolution doesn’t select.
“In the real world the relative complexity of an organism is but one of a practically infinite number of factors that natural selection can examine. And if there’s a selective benefit in being more complex, even extravagantly so, then that’s just another morphological avenue to explore.”
Well, we agree on this, and if this is true, then natural selection is not adding anything significant to the generation of high levels of CSI, because it is not algorithmically predisposed to choose either for or against CSI (although I think an argument can be made for the latter).
So, to reiterate, Joe F said:
Which is, essentially, saying that without NS, the CSI of biological organisms is just too high to reach via non-ID processes. He must show not only that NS **can** increase CSI (which it cannot, BTW), even using his accumulative distribution methodology of “increasing” CSI, but that it necessarily does so to a greater degree than it selects against CSI and distributes decreased CSI throughout the population.
Otherwise, he has just lost his NS crutch as an explanation for the CSI, even using his (erroneous, IMO) accumulative distribution method, because NS can just as easily decrease CSI throughout a population.
Creodont,
Who cares? How is that question even relevant?
So, here’s the point: unless you can show that NS selects more for increased CSI than against it, you have no basis for the claim that NS adds anything to the search for increased CSI.
OMG, that is such a sweet argument. Short, concise, and perfect. This is why I take the time and effort in forums like this.
My thanks to everyone who contributed!!!!!
olegt,
Umm “cumulative selection” only works in a design scenario, as with “weasel”- can’t have “cumulative selection” without a target.
I don’t think anyone, including Dembski, knows how to measure or compute the CSI of a biological system. So the task you are saddling us with is not feasible at the moment.
Of course, this will be easy. We’ll do a ‘Lenski’ style experiment where we observe genomic and morphological change. Now I don’t think anyone on our side can do the CSI calcs, William. Can we put you down for those?
What search for increased CSI?
olegt,
The Hazen paper tells you how to calculate the functional information, oleg.
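For reference, Hazen et al. (2007) define the functional information of a degree of function Ex as I(Ex) = -log2 F(Ex), where F(Ex) is the fraction of all possible configurations achieving at least Ex. Here is a toy sketch; the “function” used (count of 1s in a bit string) is a stand-in of my own, not from the paper:

```python
# Toy sketch of Hazen et al.'s functional information, I(Ex) = -log2 F(Ex),
# where F(Ex) is the fraction of all configurations with function >= Ex.
# The "function" here (count of 1s in a bit string) is my own stand-in.
import math
from itertools import product

def functional_information(ex: int, length: int = 10) -> float:
    configs = list(product("01", repeat=length))      # all 2**length strings
    n_functional = sum(seq.count("1") >= ex for seq in configs)
    fraction = n_functional / len(configs)            # F(Ex)
    return -math.log2(fraction)                       # I(Ex), in bits

for ex in (5, 8, 10):
    print(ex, round(functional_information(ex), 2))
# Demanding more function shrinks F(Ex) and raises the information required.
```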
Elizabeth Liddle, Joe Felsenstein, and other evotards, Still Misrepresenting CSI
Then why would Joe F say:
If CSI isn’t computable, how can it be considered by Joe F to be “too high to reasonably come about” in organisms without natural selection? And if he is saying that NS is a significant process in bridging that gap, where has he demonstrated that NS is more prone to fix increased CSI into a population than decreased CSI? He’s only shown that NS **can** fix greater CSI into a population, not that it in principle does so more often than it fixes decreased CSI into the population.
Even positing his accumulative distribution method of “increasing” CSI in the genome as valid, he must show both that it can, and that it in principle does so more often than the converse, in order for NS to provide a valid deviation from the mean towards acquiring high-CSI targets.
That’s how Dembski envisions the application of CSI. From Dembski’s paper, p. 12:
So he knows what he would like to use CSI for, but so far he can’t compute, or provide a recipe for measuring, this quantity in a biological system.
Without the ability to calculate CSI in a biological system, any discussion of its use as a metric of fitness is pointless. Worse, such discussions really only serve to obfuscate our understanding of evolutionary processes, which, by the way, I think is the real purpose of Dembski’s CSI: to try to muddy and confuse the real issues with a lot of “high falutin’ mumbo jumbo”. Gosh, it all looks and sounds so impressive, he must be onto something!
Of course, I say that as a complete ignoramus when it comes to high level math (anything beyond counting on my fingers) so I could be terribly wrong.
CSI seems like one of those concepts that ought to be useful, but turns out not to be, upon close inspection. It has a certain intuitive appeal — after all, genomes have something that looks like a digital code, so why can’t you simply count the digits required to make a microbe or a mouse?
The problem, of course, is this little thing called chemistry. Complexity of effect can’t be measured by counting the digits in the recipe. No one knows how much of the recipe is junk, how much can be mutated without affecting the outcome, or how much could be deleted. We know by actual experiment that the answers are: a lot.
But we can’t anticipate the effects of changes. The properties of biochemical molecules are emergent – a big word that means we can’t anticipate the properties of novel molecules.
Not being able to anticipate means we can’t design except by cut and try. There is no prospect that emergence will ever be solved. It would be like trying to anticipate what devices Apple will be marketing in 50 years. Inventions are incremental and are largely the result of cut and try. Even the greatest human inventions don’t fall very far from the tree. Take a look at the great TV series by James Burke on this subject.
It seems a bit contradictory, but even astonishing inventions like lasers and atomic bombs have incremental histories. There is no magic in “intelligence” – no ability to poof 500 bits of isolated information into existence.
Note how the debate immediately turns from the presumed capacity of NS to bridge the gap towards increased CSI, to a string of posts that try to erase CSI as a meaningful value altogether, right after I made the point that NS can as easily select for decreased CSI as for increased.
Nobody argued with Joe F’s paper that NS **can** increase the CSI of a genome; nobody barraged Joe with posts saying that CSI was not measurable and so his paper was nonsense (which it must be, if CSI is not a measurable phenomenon); but here, suddenly, the argument is that CSI isn’t a meaningfully measurable commodity at all.
If it isn’t a meaningfully measurable commodity at all, what is it that NS supposedly **can** increase in Joe’s paper, and what is it that NS is argued to be capable of producing more of than chance mutations?
You can’t have it both ways. You can’t promote papers and arguments that purport to show that NS **can** increase CSI, then when it is pointed out that NS can also decrease CSI, argue that CSI isn’t even a computable commodity.