LONG WINDED VERSION AT UD:
Darwin’s Delusion vs. Death of the Fittest
CONCISE VERSION AT TSZ
From Kimura and Maruyama’s paper The Mutational Load (eqn 1.4), Nachman and Crowell’s paper Estimate of the Mutation Rate per Nucleotide in Humans (last paragraph), and Eyre-Walker and Keightley’s paper High Genomic Deleterious Mutation Rates in Hominids (2nd paragraph), we see that by using the Poisson distribution, the probability P(0, U) of a child getting no novel mutation is reasonably approximated as:

P(0, U) = e^(−U)

where 0 corresponds to the “no mutation” outcome, and U is the mutation rate expressed in mutations per individual per generation.
If the rate of slightly dysfunctional or slightly deleterious mutations is 6 per individual per generation (i.e. U = 6), the above result suggests each generation is less “fit” than its parents’ generation, since there is a 99.75% probability (1 − e^(−6) ≈ 0.9975) that each offspring is slightly defective. Thus, “death of the fittest” could be a better description of how evolution works in the wild for species with relatively low reproductive rates, such as humans.
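The 99.75% figure is just the complement of the Poisson zero-class. A minimal sketch in Python (the function and variable names here are mine, not from any of the cited papers):

```python
import math

def p_no_mutation(U):
    """Poisson zero-class probability: P(0, U) = e^(-U)."""
    return math.exp(-U)

U = 6.0  # assumed rate of slightly deleterious mutations per individual per generation
p0 = p_no_mutation(U)           # chance a child carries no new mutation
p_at_least_one = 1.0 - p0       # chance a child carries at least one

print(f"P(0, U={U})      = {p0:.5f}")
print(f"P(>=1 mutation) = {p_at_least_one:.2%}")
```

With U = 6 this gives P(0) ≈ 0.0025 and P(at least one) ≈ 99.75%, matching the figure quoted above.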
Sal will be back when the page turns and “hides” these questions. He can then pretend he missed them in “good faith”.
See what I mean William? These are the same people you put your trust in regarding FSCO/I etc.
He didn’t “defend” Muller, and he wasn’t citing Muller to support the idea that U = 0.5 would spell genetic doom for humans; he’s citing him as the first geneticist to conclude that humans have a high deleterious mutation rate. Muller was right: we do have a high U, possibly well up in the range that our species can tolerate. Lynch, in fact, thinks that U is actually much higher — somewhere between 1 and 5 — and that natural selection has been dealing with the deleterious mutations without difficulty. In other words, he does not accept your central claim.
The problem Lynch is worried about is that NS is no longer eliminating many deleterious mutations, now that it has been relaxed by industrialization and improved medical care. He’s worried that we’re going to get stupider, sicker, slower and generally worse before things get bad enough that NS again becomes an effective pruning agent for many mutations. Maybe he’s right, but that has nothing to do with your claim.
I don’t say, “NS no longer operates”. I say, “Your model is wrong and you’re not defending it.”
That’s absurd. If you’re right, then the human genome has been declining very slowly for thousands of years. How is one more generation, with the introduction of another half dozen deleterious mutations on top of the thousand or so we already carry, going to suddenly make a big difference?
U very likely doesn’t equal 6, and even if it did, it would not take 807 births per mother to handle deleterious mutations. (And selection is much more than 99.75% effective at eliminating most deleterious mutations. Heck, genetic drift alone eliminates new mutations more effectively than that.)
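For what it’s worth, the “807 births per mother” figure looks like it comes from requiring two mutation-free offspring per couple when only the Poisson zero-class survives, i.e. 2·e^U. That reading is my assumption, not something stated explicitly in the thread:

```python
import math

def births_needed(U, survivors=2):
    """Expected births per couple so that `survivors` offspring are
    mutation-free, assuming only the Poisson zero-class e^(-U)
    survives selection. This reading of the '807' figure is an
    assumption about where the number comes from, not a sourced model."""
    return survivors * math.exp(U)

births = births_needed(6.0)
print(f"{births:.1f}")  # about 806.9, close to the quoted 807
```

That the number reproduces so closely under this toy model suggests it assumes selection must remove every mutation-bearing offspring, which is exactly the premise being disputed here.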
You’ve given fallacious arguments. No waiting is needed to dismiss them. (In any case, we’re the ones doing the observing and the testing, not you.)
Lizzie,
Yup, he has. Many moons ago. 🙂
stcordova
I think you may be missing an important mechanism of neighbourhood ‘exploration’ – it is not all down to point mutation. Frame shifts, inversions, transpositions, recombinations, excisions … the dimensionality of the exploratory ‘neighbourhood’ is massively increased. The “mum to mud to mad to dad” (Answers in Genesis, hee hee!) vision of evolutionary ‘exploration’ is incorrect.
Of course, you may subscribe to the view that protein space is so severely circumscribed that such multi-residue ‘leaps’ are inevitably too far to be functional. You’d be wrong. There is substantial evidence that proteins have historically swapped ‘modules’ amongst themselves, and plenty of evidence that random, gross changes – or even quasi-random generation of synthetic ‘enzyme-like’ folds – produce functional products. You don’t even need to change a bit of sequence. One synthetic method, for example, joins the free N- and C-termini of the chain and then creates new termini by breaking a peptide bond elsewhere. It can produce functional products (although, obviously, not every single one works or is useful). Laid out linearly, one might think these products were an enormous distance apart in protein space.
Hate to resurrect this, but it was linked to a couple of days ago and it mentions ReMine’s tired old ‘Haldane’s dilemma’ canard.
ReMine suffers from the same maladies that Cordova does – in this case, he ignores that which does not support his position while elevating and embellishing that which he thinks helps it. ReMine claims that Ewens “acknowledged” Haldane’s “dilemma”, and agreed that there is no ‘solution’ to it. But ReMine omits the fact that Ewens thought that Haldane’s model was wrong, and that there was no actual ‘dilemma’ at all.
http://authors.library.caltech.edu/5456/1/hrst.mit.edu/hrs/evolution/public/ewens.html
“From the very start, my own calculations suggested to me that Haldane’s arguments were misguided and indeed erroneous, and that there is no practical upper limit to the rate at which substitutions can occur under Darwinian natural selection.”
Odd, isn’t it, that in all of ReMine’s supposed conversations with Ewens, this never came up…