The Lenski et al 2003 paper, The evolutionary origin of complex features, is really worth reading. Here’s the abstract:
A long-standing challenge to evolutionary theory has been whether it can explain the origin of complex organismal features. We examined this issue using digital organisms—computer programs that self-replicate, mutate, compete and evolve. Populations of digital organisms often evolved the ability to perform complex logic functions requiring the coordinated execution of many genomic instructions. Complex functions evolved by building on simpler functions that had evolved earlier, provided that these were also selectively favoured. However, no particular intermediate stage was essential for evolving complex functions. The first genotypes able to perform complex functions differed from their non-performing parents by only one or two mutations, but differed from the ancestor by many mutations that were also crucial to the new functions. In some cases, mutations that were deleterious when they appeared served as stepping-stones in the evolution of complex features. These findings show how complex functions can originate by random mutation and natural selection.
The thing about a computer instantiation of evolution like AVIDA is that you can trace back every lineage and examine the fitness of all precursors. Not only that, but you can choose your own environment, and how much selection it exerts. There are some really key findings:
- Irreducibly complex functions evolve, where IC is defined as “if you take something away, it breaks”
- Functions evolve via Irreducibly Complex pathways, i.e. pathways that include several neutral, or even quite steeply deleterious, steps.
- The more complex functions do not evolve without selection.
- All the functions evolve via at least some neutral steps
- There are many pathways to each function
- Not all the functions are achieved in the same way.
The AVIDA findings are very good evidence that there is no reason, in principle, why irreducibly complex things can’t evolve; that some degree of natural selection (i.e. heritable variance in reproductive success in the current environment) is necessary for certain features to evolve; but that not all steps need be advantageous – because of the role of drift, even quite sharply deleterious steps can prove key to the emergence of a complex function.
Then there’s the Lenski lab E. coli experiments. But let’s start with AVIDA, because it really does rebut the idea that Irreducible Complexity, whether defined as the property of a function or of a path to a function, is an in-principle bar to evolution by natural selection.
CharlieM,
So did I. You dismissed it as ‘models’.
Mung, of course it’s not a surprise.
It’s what Darwin proposed – that complex things evolve by means of earlier advantageous precursors.
So I’m glad you aren’t surprised.
However, what you might be surprised at if you were Behe, or a Behe follower, is that a complex thing evolves even when that thing breaks if any one component is removed (is IC) AND by pathways in which none of the precursors performs the function of its predecessor (highly indirect paths) AND by paths in which many steps are not only non-advantageous but actually deleterious (high-degree IC pathways).
So it seems a bit odd to be implicitly criticising AVIDA for showing that for EQU to evolve it needs advantageous precursors! If it didn’t, now THAT would be a problem for Darwin, who proposed that as the mechanism for adaptive evolution!
But what Darwin didn’t envisage, and what Behe thought was a problem for Darwin’s theory, is that many necessary steps on the way to a function can be not only non-advantageous but actually deleterious.
I do find it frustrating that ID proponents seem so unwilling to engage in AVIDA’s challenge to what is one of ID’s best arguments (and implicitly acknowledged at UD by I think Dembski’s choice of masthead icon).
Ah, there’s one!
Thanks!
CharlieM,
Amazing how many critics don’t understand his argument, but supporters all do!
Can you back these statements up? Not just by blindly quoting White; could you explain how he gets his figures? I can’t work them out from the paper.
There is a big difference in the effect on mutational survival between (say) replicating 10^20 cells once each, and generating 10^20 total replication events through multiple generations – taking a smaller pool and subjecting it to population fluctuations (ie, taking account of what actually happens).
10^20 events generate the same total number of mutations in either case, but the final population has a substantially different number of them remaining, setting selection aside for now.
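The difference can be made concrete with a toy Wright-Fisher sketch in Python. All parameters here are hypothetical and scaled far down from 10^20 for runnability; the point is only the principle: the same total number of replication events leaves far fewer distinct mutations in the final population when they occur across resampled generations than when they all occur in one final generation.

```python
import random

def one_shot(total_events, u, rng):
    # All replication events occur in the final generation, so every
    # new mutation is still present when we count.
    return sum(1 for _ in range(total_events) if rng.random() < u)

def multi_gen(pop_size, generations, u, rng):
    # Wright-Fisher resampling: the same total number of replication
    # events (pop_size * generations), but each generation is drawn
    # from the last, so most new mutations are lost to drift.
    pop = [frozenset()] * pop_size      # mutation IDs carried by each lineage
    next_id = 0
    for _ in range(generations):
        newpop = []
        for _ in range(pop_size):
            muts = rng.choice(pop)
            if rng.random() < u:        # one new mutation in this copy event
                muts = muts | {next_id}
                next_id += 1
            newpop.append(muts)
        pop = newpop
    return len(frozenset().union(*pop)), next_id

rng = random.Random(1)
u, n, g = 0.01, 1000, 100               # 100,000 copy events either way
survived, arose = multi_gen(n, g, u, rng)
ones = one_shot(n * g, u, rng)
print(f"multi-generation: {arose} mutations arose, {survived} remain")
print(f"one-shot: {ones} mutations arose, all {ones} remain")
```

With these invented numbers, roughly a thousand mutations arise in both scenarios, but only a small fraction survive the generational resampling.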
I said you misrepresent Behe as to where “the edge of evolution” lies.
You replied:
Here are the quotes Allan:
And here:
And here:
Behe’s whole point was that chloroquine resistance can be reached by Darwinian means. He placed the edge at 10^40 and not at 10^20.
Behe wrote
And in The Edge of Evolution he wrote:
If you want to know what a “double CCC” is, read the book, but, trust me, it is equivalent to 10^40 trials.
Unless you can convince people that 10^20 is getting close to 10^40 then you have misrepresented Behe.
I’ll respond to the second part of your post later.
CharlieM,
None of those quotes misrepresents Behe’s argument. I accept that Behe accepts that chloroquine resistance can arise by Darwinian means, and those quotes merely affirm that, but I suggest that he has a mistaken view of the effective population size of the organism. It is vastly smaller than he assumes. Since resistance nonetheless arose, his assumptions, and extrapolation to supposedly smaller populations, must be off.
As per 10^20 vs 10^40, I never even mentioned a figure in any of those quotes above (eta – nor anywhere else). You came up with 10^20, quoting White, and petrushka also. Not me. Try again?
Did you read that in those biology books? Lol.
In his response to Edge critics, Behe asserts that the genome must at one point contain a severely deleterious mutation. This is a factual error. There are paths not considered by Behe. There is no need for two simultaneous mutations. The 10^20 number is bogus. The actual repeated occurrence of resistance in a short period of time supports this.
And while 10^32 is less than 10^40, it is a really big number. The problem isn’t with the arithmetic. It is with the assumption that there is only one path, and that it requires simultaneous mutations.
CharlieM,
OK, Behe says the edge of evolution is an event with probability 1 in 10^40. He looks for an event with the square root of that probability, so that two things with that probability would hit the edge. Even ignoring multiple-pathway arguments, the CCC is not it. If it emerged 10 times in 50 years, in an organism with a very small effective population size, then its probability is not 1 in 10^20.
petrushka,
I wouldn’t restrict the problems to those.
Allan Miller,
Then your statement that Plasmodium operates near the “edge of evolution” is meaningless. How can you say that the edge can be reached quite easily when you don’t even quantify it?
CharlieM,
I don’t say the edge can be reached easily. If 10^40 is your edge, then yeah, let’s say it is way beyond anything that can happen.
What I am saying is that the probability of the ‘CCC’ itself is not 1 in 10^20, nor anything like it.
I’ll ask you again to explain how White arrived at his figure, and why we should draw any evolutionary conclusions from it. P falciparum suffers not only from substantial population bottlenecking, but also demonstrable inbreeding in its sexual phase. Both of these restrict effective population sizes to a mere fraction of the census size. It makes a huge difference how you count replication events, and whether and how the population fluctuates. One genome doubling ten times gives 1024 individuals from 2046 replication events. This does not have the same variation as a steady state equilibrium population of 1024 individuals replicating twice. Nor one of 2046 replicating once. Nor such a steady state population after 10 more generations.
To borrow one of your rhetorical tics, do you understand how effective population sizes relate to the variation to be expected by random sampling? Do you understand how random sampling relates to the likelihood of the single mutants in the CCC case?
Show me how the White figure is arrived at. This is vital to seeing whether his 10^20 is a relevant statistic, or whether it just coincidentally happens to be the square root of 10^40.
CharlieM,
Incidentally, the effective population size in Pf by various means comes out pretty low – much, much lower than 10^15. There may be several reasons for this, but the result is the same – they are much closer to the effective population sizes of their hosts than they are to a free-living protist. The reasons are actually pretty obvious, when you consider the life cycle.
Census sizes are misleading. That’s Behe’s biggest error IMO.
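The standard population-genetics result behind this: long-term effective population size is approximately the harmonic mean of the per-generation sizes, so bottlenecks dominate no matter how high the peaks climb. A minimal sketch, using made-up census numbers for one transmission cycle:

```python
from statistics import harmonic_mean

# Hypothetical census sizes across one infection cycle: a handful of
# parasites establish the infection, then expansion towards ~10^16 cells.
sizes = [10, 1e4, 1e8, 1e12, 1e16]

# Long-term Ne is (approximately) the harmonic mean of the
# per-generation sizes, so the bottleneck of 10 swamps the 10^16 peak.
ne = harmonic_mean(sizes)
print(f"peak census: {max(sizes):.0e}, harmonic-mean Ne: {ne:.0f}")
```

With these illustrative figures, Ne comes out around 50 – sixteen orders of magnitude below the peak census size.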
I realise now, perhaps Behe (and maybe White) is assuming that the two CCC mutations have to arise in one cell by double mutation, or it cannot happen at all. That’s wrong. He forgets the contribution of drift, which can create subpopulations of the single mutants within either of which the other can arise, and recombination, which increases the probability still further (and by a larger amount).
The de novo emergence of chloroquine resistance is not a probability calculation. It is arrived at from known data.
In his paper White referred to this paper and concluded that resistance has arisen spontaneously less than ten times in the fifty years prior to 2004. He had a good estimate of the number of infected people with chloroquine in their system during that time and the numbers of Plasmodium falciparum cell divisions (and thus chances of gaining chloroquine resistance) during that period. This is how he arrived at the figure of 10^20.
He is a world renowned malaria expert and I have no reason to doubt his findings.
No feature or capability that Behe discusses requires two simultaneous mutations.
Where did Behe claim that some feature or capability he discusses requires two simultaneous mutations?
Allan, if you don’t believe me, maybe you will believe Ken Miller when he says that no one is arguing over the 10^20 figure.
Miller’s second criticism about the pathway to achieving resistance is just as irrelevant as your criticism. Behe’s argument does not rest on the pathway to resistance, it rests on the empirical data which demonstrate the number of cell divisions it actually took before resistance was stumbled upon on each occasion.
It is Miller who should be asking himself, “How could he and his supporters get it so wrong?”
CharlieM,
Can probability calculations not be arrived at from known data? Alternatively, if it’s not a probability calculation, you can’t feed it into one by squaring it to get 10^40. Take your pick.
That’s just handwavy waffle. I’m looking for some numbers that lead to 10^20. The way they are arrived at is vital for the conclusions we can draw. ISTM that he is simply counting every malaria parasite that ever was replicated in that time period. This is not a useful evolutionary parameter. He could also have counted infected people, or blood meals, or anything else. All those parameters would show the same proportions of chloroquine vs atovaquone resistance as cell counts. Obviously he didn’t, he went for cells, and suddenly Behe’s ears prick up. 10^20? Why, that’s the square root of 10^40! Why are we more interested in the 10^20 figure, when most cells die in a person?
Arguments from authority don’t cut it. He is not a world renowned expert on evolution (and if he was, you’d be finding some means to undermine him!). I don’t doubt that something of the order of 10^20 events of some kind may well have happened between incidences of resistance. What I dispute is that the things counted have any relevance to the ‘edge of evolution’ argument.
CharlieM,
Why should I care what (another) Miller thinks? Arguments using other people’s words are unimpressive. Whether or not other people considered the 10^20 figure suspect as a probability has no bearing on whether I do.
You could equally ask yourself “How come the entire world of biology is so wrong about evolution?”. That’s the trouble with arguments from authority or consensus – they are not arguments. Whatever other critics may have said about the issue is irrelevant. You could try arguing with the person you are talking to, using your own words, and actually addressing their arguments. How am I supposed to argue with Miller or ‘other critics’ when you don them as glove puppets?
That said, I know how Behe and his supporters can get it so wrong: they have a simplistic, wrong-headed view of evolution, if they think counting total replications is the way to assess an evolutionary likelihood in a bottlenecked and sexual population.
Just to try and illustrate again the significance of census vs sampling (don’t think of this as a ‘model’, but an illustration of a principle): suppose you seed a very large inflatable flask with a single cell. You allow it to replicate through 67 generations, all survive, and you will have had 10^20 separate genome-copy events. If you have an event whose ‘true’ probability is once in 10^8 copy events, you will have a trillion instances of it.
Now, you sample a tiny amount of that culture, through a pipette which you have made to look like a mosquito for comic effect. You take up N cells. What is the chance that your pipette contains your 1-in-10^8 mutation?
Suppose you now allow the flask to go on for another 67 generations – 10^40 copy events. Again, you take N cells. What’s the chance now – higher, lower or the same? Would someone be justified in saying that the number of replications is a decisive parameter in determining the probability of the event being detected?
Hopefully, you recognise that the ‘true’ probability of the event is 1 in 10^8 in all instances, not the current replication count at the moment you detect it. But that’s what you’re doing when using White’s figure as a probability (what Behe does, even if you deny it). Behe is determining that the probability of the phenomenon is a function of the total number of replications between detections, regardless of population sampling (bottlenecking). It’s not obvious, because the sampling is being done by billions of mosquitoes on billions of infected people. But the effect is the same. Total in-victim replications is not a relevant parameter for determining what evolution can do in such low-effective-size populations, which are conditioned more by minima than maxima.
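Numerically, the pipette’s chance of catching the mutant depends only on the mutant’s frequency in the flask and the sample size, not on how many cumulative copy events have ticked by at the moment of sampling. A sketch (the frequency and sample sizes are illustrative, carried over from the flask example above):

```python
def p_detect(sample_size, mutant_freq):
    # Probability that a sample of `sample_size` cells contains at least
    # one mutant, when mutants sit at frequency `mutant_freq` in the flask.
    return 1 - (1 - mutant_freq) ** sample_size

freq = 1e-8   # the 'true' per-copy probability from the illustration
for n in (10**3, 10**6, 10**9):
    print(f"sample of {n:.0e} cells: p(detect) = {p_detect(n, freq):.5f}")
```

Whether the flask has run 67 or 134 generations, the same sample size at the same mutant frequency gives the same detection probability; only the sample size and the frequency move the number.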
Mung,
Here:
That could only be true if both mutations occur in the same cell. If they occur sequentially in different cells, different odds apply, which I will cheerfully explain if prodded.
He claims they must occur simultaneously in the same cell because one of them is deleterious by itself. It’s one of those “bumble bees can’t fly” kind of statements.
Just watch, says the bee.
Seriously, folks, if a phenomenon is claimed to be at the edge of the possible, and it is observed ten times in 50 years, some assumption is wrong.
petrushka,
And yet both mutations are found independently! Damn those bees.
To be fair, as Charlie pointed out, he is looking for the edge at a ‘double CCC’ – the probability of four independent and essential mutations, which would be a CCC squared if they all had to be independent (ie simultaneous). But his mistaken assumptions in computing the ‘single CCC’, and extrapolating to small populations, are legion.
On the data, a ‘CCC’ arises about once every 5 years in a recombining population with an effective size of about 15,000 given significant selection for it. Double CCCs are well within reach from there without troubling the universe for more probabilistic budget. This is of the order of the capacity of animal and plant populations.
ORLY?
Yes they are.
I’d prove you wrong, but you’d just dismiss it as an argument from authority.
Perhaps, but then you’d just dismiss that as well.
You could try that too. Or:
Was it Roland Barthes who said that a piece of writing is just a tissue of quotations [absorbed from the surrounding culture]?
That would include your writings. CharlieM is just identifying his sources.
Mung,
YRLY.
Only if you invoked an authority to try and persuade me. “I can’t sustain this argument myself, but here’s a Very Clever Person who knows stuff about other stuff, so he’s probably right about this too.” That’s an argument from authority. Yes, it’s an argument of sorts, just a crap one. But yeah, you win, have a banana.
Would I? I like to think I would take the case on its merits. Try me.
Using the Miller quote was not ‘identifying his sources’, it was using an argument from authority (and a quote mine). I am making my own case. This seems to bother you. Do you have any comment on my actual case, rather than my method of pursuing it?
Effective population sizes. Independent and dependent probabilities. Things that are germane to Behe’s case. Point-scoring and sniping. Things that aren’t.
First time for everything I suppose. If you want to abandon the whole “hard facts” thing with regard to biology that’s fine, philosophy is obviously your thing. And nothing wrong with that. So sticking to calculating the angel/pin ratio might be the way to go for you.
Allan, I suggest you read at least White’s papers here, here and here. This will help you to understand why the de novo acquisition of chloroquine resistance is extremely likely to have developed in the mitotic bloodstream phase of P. falciparum infection.
For example White states that:
and:
He also writes:
If they were all viable, 5 x 10^16 parasites all undergoing mitosis would give an astronomical number of cell divisions over a ten- to fifteen-year period. Obviously they do not all survive, but from data about how many are likely to survive White has estimated that the actual number of cell divisions per appearance of chloroquine resistance is 10^20. I don’t see anyone but you challenging this figure.
I can understand why anyone would be suspicious of anything Ken Miller, Jerry Coyne, PZ Myers and their like have to say on the matter but I didn’t expect you to be so dismissive of them.
We are not talking about the spread of chloroquine resistance so your talk of bottlenecks is meaningless. Try to picture what is happening in the real world.
Millions of people are continually being infected with P. falciparum but they have a defence against it. The chloroquine in their system does not let it get a hold to the point where it becomes fatal. It is a very successful drug. Then, all of a sudden, in one particular location people start to die despite the chloroquine in their system. With atovaquone and other drugs the emergence of resistance was a common event but with chloroquine it was an extremely rare event. How rare? Well that’s what White has estimated from its observed occurrence.
From The Journal of Infectious Diseases (2001):
As in your irrelevant flask example, it doesn’t matter how many cell divisions there are – 10^20, 10^40 or 10^1000 – the de novo appearance of chloroquine resistance will still remain around 1 in 10^20, and nobody has said any different; not Behe, not White and not me.
Of course there must be at least two relevant mutations in the same cell. That is why it takes so long for chloroquine resistance to appear compared to the likes of atovaquone. The path by which they get there is irrelevant, but in the end they must be there in the same cell for resistance to establish itself.
Seriously, how close do you think 10^20 is to 10^40?
Every feature in Avida is complex.
CharlieM,
So, to be clear, you are saying (and are also claiming that White is saying) that the probability of de novo chloroquine resistance is 1 in 10^20? Because approx 10^20 cell divisions happen between appearances of the trait?
Simple question: there are maybe 34 trillion cells in a human body. Let’s say that represents 17 trillion replications per body. After a time period during which 60 billion people were born, there would be about 10^20 cell replications of the human genome. You find that a particular recurrent 2-mutation change is detected once every 60 billion humans (= once every 10^20 cell replications). Single mutants are neutral. Is the probability of getting that particular 2-mutation change to be present (forget detection) in the final population of humans
1) also 1 in 10^20?
2) 1 in 6 * 10^10?
3) Something else?
CharlieM,
They must end up in the same cell, but don’t have to originate during the same round of replication, or in a lineage with precisely 1 offspring per generation. The probabilities are not independent.
And therefore you cannot assume that a 2-mutant trait has a probability equal to the individual mutation probabilities multiplied together. Not in a replicating, bottlenecked population, that is, and certainly not one with recombination. This is why Behe has never published this outside of a book – ie it would never get past peer review. You seem curiously resistant to finding out why.
CharlieM,
Longer version:
You don’t even see me challenging it. This is frustrating; it’s like I’m typing away and you just hear the clicking. I am quite prepared to accept that 10^20 cell divisions happened between instances of chloroquine resistance. This is simply a number, a statistic. You could count all sorts of things. It is NOT the probability of getting chloroquine resistance. Behe squares this value to get the probability of a double CCC. If the probability of getting a single CCC is not 1 in 10^20, how can we square it to get to the edge?
I am not dismissive of them; you are importing them to support your case when they are not even talking about effective population sizes or sexual recombination, as I am.
You first. Of course we are talking about the spread of chloroquine resistance. How do you detect it if it hasn’t spread anywhere? You think we are monitoring each of those 10^20 cells and setting a bell off when the mutation happens?
But more than that, we are also talking about the spread of the single mutants. As (for argument’s sake) two mutations have to happen to get chloroquine resistance, and Behe has declared their independent probabilities as 1 in 10^10 each, to get both in the same cell simultaneously requires 10^20 events. But this is only true if they are independent probabilities. Which they aren’t.
Unless the single mutants are lethal, the probabilities of the combination cannot be independent. If mutation A or B happens, the probability of the other mutation is dependent on how many individuals have the first at a given moment in time. An A subpopulation can get the B; a B subpopulation can get the A, and there can also be recombination between A and B subpopulations.
Sure. Using his ‘clock’ of cell divisions, it happens 100 million times less frequently. This does not mean his clock is ticking probabilistic trials. If he counted infected people, or blood meals, or per-mosquito rates, he would get the same relative incidence. You can’t just count total population members without considering the fact that not all individuals breed. 1 in 10^20 is not a probability. Well, it is, but not the probability of this!
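To see how much that dependence matters, here is a toy Wright-Fisher simulation (all parameters are invented and vastly smaller than the real ones). When neutral single mutants can be inherited and drift, the first AB individual typically appears far sooner than the simultaneous-only expectation of about 1/(N·u²) generations, which is what you get if a new mutant’s offspring never inherit it.

```python
import random

def gens_until_double(pop_size, u, rng, max_gens=10**6):
    # Toy Wright-Fisher model: each individual is a (has_A, has_B) pair.
    # Single mutants are neutral and heritable, so an A subpopulation can
    # later pick up B (and vice versa) rather than needing both at once.
    pop = [(False, False)] * pop_size
    for gen in range(1, max_gens + 1):
        newpop = []
        for _ in range(pop_size):
            a, b = rng.choice(pop)      # inherit parent's genotype
            if rng.random() < u: a = True
            if rng.random() < u: b = True
            if a and b:
                return gen              # first AB individual appears
            newpop.append((a, b))
        pop = newpop
    return max_gens

rng = random.Random(0)
n, u = 500, 1e-3
seq = gens_until_double(n, u, rng)
print(f"with heritable single mutants: first AB after {seq} generations")
print(f"simultaneous-only expectation: ~{1 / (n * u * u):.0f} generations")
```

With these toy numbers the sequential route typically beats the simultaneous-only expectation (2,000 generations here) by well over an order of magnitude, and adding recombination between A and B subpopulations would shorten it further.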
Define complex.
Operational definition of complex: unevolvable.
Operational definition of unevolvable: too complex.
Allan Miller,
Correction – 60 million (6*10^7), not 60 billion.
I’m sure Mung will be along soon to clarify what Mung meant. After all, Mung must know as Mung has made the claim and can presumably quantify/justify it.
CharlieM,
From your White quote:
You know what you are looking at there? You are looking at a population bottleneck! There’s a bottleneck going in, and another bottleneck going out. And bottlenecks in the mosquito too. This is vital, even if you don’t think so. It is also vital even if White doesn’t think so.
10^16 cells derived from 10 have an effective population size much nearer to 10 than to 10^16. The larger number is pretty irrelevant, in terms of the likelihood of a particular variant getting past 1 human body (as it must to be clinically significant).
I’ll keep saying this, and you’ll keep telling me to ‘look at the real world’, in which apparently there are no population bottlenecks, or if there are you think they have no relevance to frequency of variation – a mistaken belief, as any population genetics primer will show you.
The frequency of 1 in 10^20 is an estimate from observed facts. If the parasite required a similar complexity of mutations to overcome a new drug that had the same effect as chloroquine, then I would say that the probability of it appearing in an individual cell would be 1 in 10^20.
Here are the factors which White takes account of in determining the probability of selection of de novo antimalarial drug resistance:
1. The frequency with which the resistance mechanism arises
2. The fitness cost to the parasite associated with the resistance mechanism
3. The number of parasites in the human host that are exposed to the drug
4. The concentration of the drug to which these parasites are exposed
5. The pharmacodynamic properties of the antimalarial drug or drugs
6. The degree of resistance that results from the genetic changes
7. The level of host defense
8. The simultaneous presence of other antimalarial drugs or substances in the blood that will still kill the parasite if it develops resistance to one drug.
It’s difficult to know what you mean here. For a start I don’t know where you get the 17 trillion replications from. If you mean each cell replicated 17 trillion times, that is far too many; if you mean all the cells that have existed in each human, that would be far too few. Cells are constantly being produced and destroyed in each of us, some types at a phenomenal rate.
Anyway, let me give you some sort of answer. If the 2-mutation change is detected once every 60 million humans then the probability of it occurring in a period in which 60 million people were born would be 1. The probability of it appearing in 1 person would be 1 in 6 * 10^7. The probability of its de novo appearance in 1 individual cell would be astronomical.
The proof is in the pudding. The appearance of atovaquone resistance can be measured in days. It took decades for the parasite to achieve chloroquine resistance.
CharlieM,
Sure. It’s harder (actually, about 1 decade). But is its probability therefore 1 in 10^20?
I have written an OP about this trying to explain my thoughts in a single essay rather than split across posts. Could we continue there? It also addresses your prior post, though in advance of it. I will copy that post there, and respond there also. This thread’s getting long, and keeps dropping off the page.
The likelihood of a particular variant getting past 1 human body is relevant for the fixation in the wider population, it is not relevant for the arrival of the mutation arrangement in one organism. When you talk of getting past one human body you are talking of the spread of the mutation arrangement not its first appearance. We already know that once the mutation arrangement appears its spread throughout the population is highly probable.