…the noyau, an animal society held together by mutual animosity rather than co-operation
Robert Ardrey, The Territorial Imperative.
Like I keep saying, it’s amusing when people who don’t know anything about compilers talk about compilers “aborting on error”.
If the Ribosome is a compiler then where is the Backus Normal Form specified? If you can’t say, in what sense are those people defining “error”??
Obviously you don’t know anything about compilers. I made my case. You can either address it or remain willfully ignorant.
Like I’ve said a couple of times, it’s amusing when people who don’t know anything about compilers talk about compilers “aborting on error”.
If the Ribosome is a compiler then where is the Backus Normal Form specified? If you can’t say, in what sense are those people defining “error”?
When you find someone who talks about compilers and you think they don’t know anything about compilers, then make your case against them. Absent that, all you are is a little cry-baby, but we already knew that.
BruceS,
Yet another equivocation. As noted by a couple of other people here, those seem essential to any IDC argument.
If the Ribosome is a compiler then where is the Backus Normal Form specified? If you can’t say, in what sense are you defining “error”?
Neil Rickert,
Lisp family compilers often pause and give the developer the opportunity to fix the problem in situ, then continue compiling.
For a compiler, an error is when the input does not match the BNF (Backus Normal Form). OMagain is quite right about that.
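To make that concrete, here is a toy sketch of my own (not anyone’s actual compiler) for the grammar <expr> ::= <num> | <num> “+” <expr>. Input that cannot be derived from the BNF is a syntax error, and the compiler reports it rather than emitting object code:

```python
# Toy recursive-descent parser for: <expr> ::= <num> | <num> "+" <expr>
# (my own illustration; real compilers are vastly more elaborate)

def parse_expr(tokens, pos=0):
    """Consume one <expr>; return the position just past it."""
    if pos >= len(tokens) or not tokens[pos].isdigit():
        raise SyntaxError(f"expected a number at position {pos}")
    pos += 1
    if pos < len(tokens) and tokens[pos] == "+":
        return parse_expr(tokens, pos + 1)
    return pos

def compile_source(source):
    tokens = source.split()
    end = parse_expr(tokens)
    if end != len(tokens):
        raise SyntaxError(f"unexpected token {tokens[end]!r}")
    print("matched the grammar; object code would be emitted here")

compile_source("1 + 2 + 3")       # derivable from the BNF: accepted
try:
    compile_source("1 + + 3")     # not derivable: a syntax error
except SyntaxError as e:
    print("error reported, no object code produced:", e)
```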
Biologists have made the discovery, OM. Perhaps your issue is with them.
Yes, Patty, we already know that you are trying to wiggle out of the challenge.
Typical but still pathetic.
And I was quite right when I said that a compiler takes source code and translates it into object code. I was also quite correct when I said that compilers abort – i.e., they do not create object code – when they detect an error.
The entire field of evolutionism relies on equivocation.
If the Ribosome is a compiler then where is the Backus Normal Form specified? If you can’t say, in what sense are you defining “error”??
Biologists have made the discovery, OM. Perhaps your issue is with them.
Or perhaps you are just a belligerent ass…
If the Ribosome is a compiler then where is the Backus Normal Form specified?
If you can’t say, in what sense are you defining “error”?
It’s part of the ribosome’s software.
If the Ribosome is a compiler then what Backus Normal Form is it using?
If you can’t say, in what sense are you defining “error”?
Belligerent ass it is then. Good luck with that.
If you can’t say, in what sense are you defining “error”?
I am OK with the biologists who said ribosomes reject detected errors. If you have any issues with them then take it up with them.
Compilers mostly detect syntax errors, and I don’t think failures to translate the genetic code into the correct protein should be called syntax errors.
I’d say they are semantic errors, analogous to the program not doing what the program designer had in mind. But if I understand that analogy correctly, the reason for the error would not be poor programming. It would be a hardware failure during execution of the program.
What about poor programming? When applied to creating coding schemes, programming errors include an ambiguous code, an incomplete code, or an inefficient code (e.g., 4-letter words when 3 are all that is needed). The last is an example of failing to meet a non-functional requirement.
I suspect that the designer that ID proponents have in mind would not commit these types of errors, but that is just speculation on my part.
But a designer is not needed to prevent them. Evolution would weed out these errors, if they arose naturally. For example, a more-energy efficient implementation would be more likely to survive.
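As a quick sanity check of the efficiency point above (my own arithmetic, assuming the standard 4-base alphabet and 20 amino acids plus a stop signal):

```python
# How long must a fixed-length word over a 4-letter alphabet be
# to cover 20 amino acids plus a stop signal (21 targets)?
BASES, TARGETS = 4, 21
for length in (1, 2, 3, 4):
    words = BASES ** length
    verdict = "enough" if words >= TARGETS else "not enough"
    print(f"word length {length}: {words} possible words -> {verdict}")
# Length 2 gives 16 < 21; length 3 gives 64 >= 21. So 3-letter words are
# the shortest that suffice, and 4-letter words would be wasteful.
```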
Ribosomes “detect.”
DNA “maps.”
Compilers “try.”
It’s like a theme park of metaphorical usages. Is that really the sort of thing that ID enthusiasts depend on to make their case?
There is no such thing as a ‘malformed string’ as far as a ribosome is concerned. You can get a ribosome to peptide-bond all manner of truncated tRNA analogues, reducing them down to the ACC acceptor stem. So you don’t even need mRNA to get some peptide-bonding activity.
It is an all-purpose – uh – machine, more like a pasta maker than a semantic parser. Anything with a START codon will be translated until it hits a STOP. There is no way for the ribosome to detect an ‘error’ in the mRNA. Malformed proteins are detected elsewhere and degraded, because misfolds have distinct physical, not semantic, characteristics. But that’s not a ribosomal function anyway.
Compilers that abort, meanwhile – that would be a bug, not a feature. Compilers report errors. Typically, in my experience, they carry on, though they may halt for programmer intervention. There is no ribosomal analogue of this. It’s not that they are expected to share all the features, but for the purposes for which ‘Frankie’ invoked the comparison, they share none, beyond string input and output.
Yes.
Hi walto,
Have you heard of Stephen Wolfram?
Let’s start here:
code
Mung,
Thanks, Mung. I had not heard of Wolfram. I’ll take a look.
Hmmmm.
I’m neither a mathematician nor an expert and don’t want to be substituting my judgment for that of somebody who is both, but I’m pretty good with defs, and I think I see some problems here. For example, in the definition of “string,” we see an implication that the set elements being referred to are symbols (though “not necessarily distinct symbols”). But there’s nothing in the definitions of “word” or “alphabet” here that mentions symbols (although “letter” is used, undefined).
This may seem like a quibble, but I think one of the main issues here revolves around whether a code must be representative. Because if codes must be symbolic, naysayers will say that DNA sequences aren’t codes, and thinking that they are is nothing but a map-territory error. And if codes need not be composed of symbols, but are required only to be set elements as defined above without that implication, naysayers will say “OK, DNA sequences are codes. So what? So is this collection of pebbles.”
It’s noteworthy that any finite sequence of letters here qualifies as a word.
I think this might be helped if “symbol” and “letter” were defined, but I’m guessing that, again, either one or both of these defs would be controversial (and beg a question) or they would not help settle anything.
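For comparison, here is one standard formal-language phrasing (a common textbook convention, not necessarily MathWorld’s), in which “symbol” is the primitive and everything else is built from it:

```latex
% One common convention (not necessarily MathWorld's):
% an alphabet is a finite, nonempty set of symbols,
\Sigma \ne \emptyset, \qquad |\Sigma| < \infty,
% a word (or string) over \Sigma is any finite sequence of symbols,
w = a_1 a_2 \cdots a_n, \qquad a_i \in \Sigma,
% and the set of all words, including the empty word \varepsilon, is
\Sigma^{*} = \bigcup_{n \ge 0} \Sigma^{n}.
```

On that phrasing, any finite sequence of symbols over the alphabet counts as a word, which is exactly the feature noted above.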
ETA: If there’s a moral to all this, I think it’s that the only hope ID theorists have is to focus on complexity, what Bruce puts as their number one argument. Not that that’s an easy row to hoe either, but I don’t think it NECESSARILY relies on a philosophical confusion (though its exponents may often screw up there too).
And, FWIW, I get the sense that that probably *IS* where they’re expending most of their efforts. I think that’s good, because IMO the other two arguments Bruce mentions simply cannot be put sensibly. They’re unfixable messes.
DNA is A template for chemical reactions, but DNA sequences do not represent anything abstractly.
I agree that usage of “code” is not using it to mean representation but instead to encompass encryption, compression, error-correction.
Interestingly enough, one theory of what neurons do is create a compressed version of the information perceived from the environment, one that tries to find the best tradeoff between encoding as much of that information as possible and using the fewest neurons. So, under that theory, a natural process of encoding and compression happens all the time in the brain.
I personally would still not consider “code” a word much used in mathematics; this usage to me is more of a case of mathematics applied to engineering problems. But that’s a personal quibble.
Wolfram is a very smart man who made a lot of money by inventing a system to do symbolic math on a computer. In his spare time, he re-invented science.
I think Plantinga’s EAAN depends partly on the claim that intentionality cannot be naturalized (basically because it relies on saying evolution shapes behavior, not knowledge).
His argument may be unsound, but I don’t think it is a mess.
I’ll buy that, but it might have to be added to your list. I’m not sure it fits comfortably in your 2 or 3.
You mean there would have to be a lot of work to make ID arguments into something respectable, even if wrong? Sure, that is where the fun is! Else why bother with them?
BTW, I count Eric’s latest as a win for your recent posts on that thread, which were at least in a Gandhian spirit.
I agree with that. There is a narrow part of mathematics that theorizes about error correcting codes and such. The Wolfram definition seems to come from there. It’s usually a mistake to think that the mathematical definition of a word has much to do with the ordinary meaning of that word as used in other aspects of life.
That’s surely a mistake (by Plantinga).
It is likely true that intentionality cannot be mechanized. But “naturalized” should mean something different from “mechanized”.
Incidentally, the reason that people find consciousness such a puzzle, is because they mistakenly take “naturalized” to mean “mechanized”.
I just meant that the claim that intentionality requires a designer is a different sort of claim than that DNA is intentional. Hence, a 4 is in order on your list.
I would say that the EAAN depends essentially on the claim that it is logically possible that semantic content is irrelevant to adaptive behavior.
The real problem is that Plantinga fails to start off with what a naturalistic account of semantic content might be. (He’s entitled to be skeptical that there is one, of course.) But since he doesn’t take a naturalistic account of semantic content as a starting-point for the EAAN, his entire argument fails to show that naturalism is self-refuting, as he claims it does.
Just as pumping representation into codes is an easy gig for IDers, if you let them do it, so is allowing “natural” to include consciousness. In addition, it likely results in panpsychism.
Again, if it seems easy, you’re doing it wrong.
I have always thought it was unwise to bet against something that has already happened.
petrushka,
Also, if the answer seems obvious, there’s an excellent chance you’re thinking about the wrong question.
That’s certainly not an understatement.
Wikipedia says:
From 1992 to 2002, he worked on his controversial book A New Kind of Science, which presents an empirical study of very simple computational systems. Additionally, it argues that for fundamental reasons these types of systems, rather than traditional mathematics, are needed to model and understand complexity in nature. Wolfram’s conclusion is that the universe is digital in its nature, and runs on fundamental laws which can be described as simple programs. He predicts that a realisation of this within the scientific communities will have a major and revolutionary influence on physics, chemistry and biology and the majority of the scientific areas in general, which is the reason for the book’s title. link
I’m not sure if the penny has not already dropped with scientists but the idea of simple, local rules seems fundamental to understanding all sorts of biological phenomena.
One of the implications of Wolfram’s simple local rules is that systems evolve from them that cannot be anticipated or predicted.
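To see how simple such rules can be, here is a minimal sketch (my own) of one of Wolfram’s elementary cellular automata, Rule 110, whose entire update rule fits in a single byte yet which has been proved Turing-complete:

```python
# Elementary cellular automaton: each cell's next state depends only on
# itself and its two neighbours. Rule 110 is Turing-complete.
RULE = 110

def step(cells):
    n = len(cells)
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 40 + [1] + [0] * 40   # start from a single live cell
for _ in range(20):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```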
God making a stone so heavy He can’t lift it. To be poetic about it.
I am of the opinion that digitalness is an illusion. I’m a becoming kind of guy.
Here’s a Turing machine implemented in GOL: http://rendell-attic.org/gol/tm.htm
I have to admit I was being sarcastic. Wolfram is known as an egotist.
Wolfram did much detailed work, as I understand it, but he did not invent a new way to do science.
First, I believe it does not work for most of physics.
Second, people like Fredkin have been looking in that direction for some time, though not at Wolfram’s level of detail, I suspect.
I’m not sure what you mean by asking if scientists know about simple, local rules. Lots of work in complexity theory and its applications is about how simple rules in complex environments lead to complex, emergent behavior: fish schooling, ants finding the fastest route to food, termites building nests, fireflies synching, stuff like that.
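For what it’s worth, the firefly case has a famous minimal model, the Kuramoto model. This sketch is my own, and it uses global rather than strictly local coupling, but it shows the same flavour of order emerging from a simple rule:

```python
# Kuramoto model: oscillators with random natural frequencies nudge their
# phases toward each other; above a critical coupling they synchronise.
import math, random

N, K, DT, STEPS = 50, 1.5, 0.05, 400
freqs = [random.gauss(1.0, 0.1) for _ in range(N)]
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def coherence(phases):
    """Order parameter r in [0, 1]: 0 = incoherent, 1 = fully in sync."""
    re = sum(math.cos(t) for t in phases) / len(phases)
    im = sum(math.sin(t) for t in phases) / len(phases)
    return math.hypot(re, im)

print(f"before: r = {coherence(theta):.2f}")
for _ in range(STEPS):
    theta = [
        theta[i]
        + DT * (freqs[i] + (K / N) * sum(math.sin(theta[j] - theta[i]) for j in range(N)))
        for i in range(N)
    ]
print(f"after:  r = {coherence(theta):.2f}")  # rises toward 1 as phases lock
```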
Suppose we conclude some signals from outer space represent a meaningful message. Then we could say the signals have intentionality, derived from the aliens.
Similarly, if we accept the conclusion that the genetic code has meaning, and is not just biochemistry in action, and we refuse to accept evolutionary arguments for its origin, then we might conclude an intelligent entity did it.
That’s how I see the intentionality being linked: not to a designer, but to an entity who is the source of the intentionality in the genetic code (just to emphasize, it’s the code, not the DNA, that this applies to; the DNA is just a particular implementation, just as newspapers are one place where you can find language implemented).
Right. If there’s intentionality, there’s intentionality that has to be explained. The point is that the first paragraph could be true WITHOUT the genetic code having any meaning. You can start with the Chinese language or human cognition to get that there’s intentionality in the universe. Then a Plantinganian can argue that that can’t happen without a designer. There’s no need to discuss whether DNA sequences are intrinsically meaningful or not–it’s an unimportant side issue.
As it’s a fundamentally different type of argument, even though they both rely on the existence of intentionality to prove there must be a designer, I think you should expand your taxonomy to include it as a separate item.
That’s what I was referring to. The cross-disciplinary work with field biology and computer simulations has proved really fruitful. ETA: see above.
OK, I think I see your point. There are two separate issues, which I am going to express in terms of derived and original intentionality, because I think we understand what these terms refer to, even if we may not agree that they are useful:
Issue 1: If the genetic code has meaning, it must be derived intentionality. DNA was here before us, so its intentionality must be derived from a different intelligent agent.
Issue 2: No one disagrees that we have original intentionality now. But how did we get it? Plantinga argues it could not be created naturally, therefore God must have intervened at some point to make sure our behavior and our representations aligned (or something like that), thus enabling us to have original intentionality.
I agree those are two separate arguments and that I mistakenly conflated 1 and 2 earlier.
When you said “I’m not sure if the penny has not already dropped with scientists but the idea of simple, local rules seems fundamental to understanding all sorts of biological phenomena.” I understood you to be wondering if scientists were generally familiar with this work. I understand now that is not what you meant.
I suppose my confusion may have arisen because in Canada we no longer use pennies.
My brother emigrated to Ontario in 1968. I have spent many happy times there. I am still utterly confused by the idea of the LCBO but otherwise have very fond memories. A very civilized country!
So?