We have folks on both sides of this question, so it should make for an interesting discussion.
(I’m a ‘yes’, by the way.)
empaist:
keiths:
empaist:
Very few, which is why I am at pains to stress the difference between reducibility in principle and reducibility in practice. Should we shut down the Department of Biology and send the professors over to Physics? Absolutely not. For practical reasons, most biological problems are appropriately tackled at a higher level of abstraction than pure physics.
To me, the interesting question is whether reduction is possible in principle.
Oh, the philosophical benefits of a successful reduction are tremendous. You establish that strong emergence and downward causation are absent, and you eliminate the need for a separate ontology at each level of description.
But that’s a complaint about physics, not about reductionism. You (and Dyson) are expressing disappointment that physics doesn’t go deeper than it does. A successful reduction involves reducing theory B to theory A, not extending theory A to deeper levels.
Kantian Naturalist,
There is nothing dogmatic or religious in saying that it is not Dennett’s place to decide which questions can and can’t be asked, and then to claim he has somehow solved the consciousness problem.
When you have a book called Consciousness Explained, and your explanation is that consciousness is not the problem we should be trying to solve, you are a huckster.
You apparently share his huckster tendencies. Likewise, it was YOU who claimed that creationists are unable to understand evolutionary theory because, quote, “they have deep-rooted ethico-political motivations for not wanting evolutionary theory to be correct.”
I guess you and Dennett have the “ethico-political motivations” vaccine, which makes you believe it doesn’t cloud your thinking.
It is not phoodoo’s authority to question what Dennett is permitted to say.
I doubt that Dennett is actually asserting any authority. I’m pretty sure that he sees himself as presenting a persuasive argument, and he hopes that readers will be persuaded.
keiths,
A tautology is a measurement of the average tautologies in the tautology. So the tautology in that tautology is not the tautology in the tautology of the tautology of claiming fitness is not survival.
Since you can’t understand the difference between a statistic and a tautology, you are incapable of understanding where the tautology lies.
Neil Rickert,
Yes I am sure you are right Neil. He is hoping that people will be persuaded that the answer to the difficult question of how to explain consciousness is to tell people not to ask that question. Problem solved. Consciousness Explained!!
You should buy his book.
keiths:
phoodoo:
No answer, eh, phoodoo?
Barry, Eric, William, and phoodoo: four IDers bewildered by a simple batting average.
Vjtorley is trying to save you guys some embarrassment, but you won’t listen to him.
keiths,
And Eric shot down your stupid spin quickly and succinctly:
” He suggests that we can “define fitness in terms of the average success of a genotype in a given environment,” adding that “You don’t need to exclude any cases.”
I think KeithS’s proposal merits serious consideration, as it is non-tautological.
No. It makes no difference whether we define natural selection as acting with precision in all cases on every organism or whether we define it as some kind of statistical/stochastic average. That is not the issue.
The issue is defining the resultant in terms of the antecedent.
At the risk of linking to myself again, I addressed this in detail in my essay here (see in particular the beginning 4 pages):
http://web.archive.org/web/200…..tsonNS.pdf”
phoodoo,
Could you please learn to use blockquote tags? I’m sure the folks at UD would appreciate it as much as we would here at TSZ.
Thanks.
phoodoo,
If Eric’s argument worked against fitness, it would also work against batting averages.
You guys are denying something that is obvious to Jane and Joe Sixpack: batting averages are meaningful.
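The batting-average analogy can be made concrete with a toy sketch. Everything below is invented for illustration (function names and numbers alike); the point is only that an average of past outcomes is a bona fide statistic, not a tautology:

```python
def batting_average(hits, at_bats):
    """A batting average is just a statistic: hits per at-bat."""
    return hits / at_bats

def mean_fitness(offspring_counts):
    """Fitness of a genotype as the average reproductive success of
    the individuals carrying it in a given environment."""
    return sum(offspring_counts) / len(offspring_counts)

# A .300 hitter is defined by past performance, yet the number still
# (imperfectly) predicts future performance -- no more a tautology
# than mean fitness is.
print(batting_average(30, 100))       # 0.3
print(mean_fitness([0, 1, 2, 2, 5]))  # 2.0
```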
I can somewhat see where you’re coming from here, but I find the idea also quite baffling.
If the autopoiesis of living bodies and the deontic scorekeeping of linguistic norms — to appeal simultaneously to Varela and Brandom — could be “reduced” to physics, maybe. It would depend on what the “reduced to” really amounts to — something a heck of a lot stronger than just “constrained by,” right? It’s one thing to say that organisms and languages don’t involve anything that violates the laws of physics; it’s quite another to say that everything that they do can be explained in terms of those laws!
And then there’s the question of whether or not physics is just syntax. Conceivably this could be made to work if the universe were a computer, or if the essence of the physical is computation, maybe? That seems dubious to me. I like some materiality in my physics, but maybe I’m just old-fashioned that way.
I did — many years ago (probably soon after it was published).
Neil Rickert,
David Hannum was right.
phoodoo,
Yes, and Dembski counts on it.
keiths,
Why, did someone named Dembski say that consciousness is not a hard problem because it either doesn’t exist or if it does exist you shouldn’t ask about it anyway?
KN,
Yes, it would need to be a full intertheoretic reduction in which all of the higher-level truths regarding autopoiesis and linguistic norms could be fully expressed in terms of physics. I think they can — at least in principle.
To me, the idea that autopoiesis is irreducible to physics smacks of the discredited idea of the élan vital — a separate ingredient that supposedly must be added to mere matter to give things the “spark” of life.
It’s much more plausible to me that autopoietic and living systems gain their special properties via specific arrangements of matter and not through the addition (or strong emergence) of some mysterious, irreducible element.
Keep in mind that in discussions of intentionality, “syntax” usually has a broader meaning than just the rearrangement of symbols according to formal rules. That broader meaning encompasses materiality.
Here’s how I described it to Bruce and Neil:
And here’s Dennett:
NO. Biology is of God’s breath/shadow. Life is not material in its essence, as the Bible says.
Only the material can be reduced to physics. So a dead body can be figured out in its physics, but not the same body as to why it works.
ID says biology comes down to information and not mere physics. That is why they say biology shows a creator. Yes, but even this info system is material, however complex.
So, would you agree that a dead body is lighter than the same body when alive, due to something “leaving” the body upon death?
I think she says the argument in the first chapter cannot be extended to mental events, but then she references the ’98 paper for her alternative to cover that case.
Thanks for the reference to the Horgan review, which I found quite helpful in making sure I understood her ideas, at least to a first approximation. It does seem she bases her approach on the hybrid causal account of meaning applied in her “analytical entailment” framework. So if Horgan is right that it fails due to an infinite regress, then that would be a problem for a lot of the book.
The review also mentions a problem which troubled me: at least based on the first few chapters, the book seems to be not so much about “what is” but about what we can conclude from how competent speakers use words.
Likely I have missed something about how the two are linked.
But regardless, it is still intellectually rewarding to work through the structure of the arguments.
No papers on her U of T web page and a face picture only on her facebook page. Not that that is relevant to anything in particular….
Congratulations on the publication, KN, and I will certainly be doing my share to help make it a bestseller when it is available in a couple of weeks.
Would you be open to a thread on the book with people asking questions about it for you or others to answer (or at least give hints), time permitting?
From your summary, I can see it covers a lot of ground that I am not familiar with. I also don’t see that territory being considered in the related discussions here (except by your posts!).
I do have one other question. Rather than me continuing to hurt my head trying to understand the definition of “autopoietic”, can I substitute “what the consensus of the scientific community would consider living” without losing too much? Or is that too much of a Dennettian approach to meaning?
A standard definition, for those who have not resorted to Google yet:
I always thought the essence of the system reply was that it was the virtual process/entity where meaning resided, where that virtual process/entity consists of the human, the rules, and the ongoing processing. That would seem to be above the human, at least in the sense of “above” meaning encompassed by a mechanism.
ETA: My point is that it is not the human where we find meaning, it is the mechanism the human is part of.
It would also seem to be close to saying that meanings are dependent on people; brains alone are not sufficient.
Now I don’t think the systems reply is complete — I think it needs to be extended to the full robot reply. Further, it may very well be that such a robot would need to be sophisticated enough to be considered living by the scientific community. Finally, it is an empirical question whether such a robot could be built using solely GOFAI technology based on the Fodor version of the computational theory of mind.
There used to be a few of Raffman’s papers linked there, but they’ve all been removed, likely at the behest of Wiley or Springer or some other bloodsucker that wants to be paid some huge sum if you look at more than the first couple of lines of any paper.
There is a (IMO awful) paper of hers in a book edited by Edmond Wright called “The Case for Qualia” which you can find on scribd.
I’m not recommending any of her stuff, of course. You’re fine without it.
That’s not how I understand the systems reply, fwiw. I have taken the claim to be that the human brain is more analogous to a city than a room because of all the parallel processing, and THAT’s what it’s said to take to get to semantics.
I don’t find that approach particularly compelling, myself, but…..
W
I agree that just specifying some kind of computing architecture for the brain does not do much, but I don’t think that is the systems reply. From SEP:
Keith: KN’s post also included “explained by those laws”. Even if reduction to expression in those laws were possible in principle, would the results still explain?
BTW, it is helpful to understand that by reduction you are referring to the concept as originally put forward by Nagel and others in the ’50s and ’60s. At least, that is how I understand you.
Yes, I agree with that.
That’s about the view of the “Chinese Gymnasium” interpretation. But that has to do with connectionism.
Most computer science AI folk do not believe that parallel processing adds anything other than performance (compute speed).
Thanks. I took the Systems Reply to be something like “Well, the guy in the room is only analogous to perhaps one mind/brain module, while real people are more like cities because of the massive capacity of our “processors.” That is, it’s not that something OTHER than the guy (a whole system) understands Chinese, even if HE doesn’t, it’s that the analogy of a (mere) room for the guy is bad, because it’s too feeble. (Not sure that’s going to make sense–it seems kind of confusing/confused). It seems to me from the SEP article you excerpted that some of the systems responders do seem to be taking that position (those who talk about “modules”)–but maybe they comprise but a small subset of those said to be using the “systems reply.”
It may also be true that my interpretation (which was largely based on internet confabs like this one, from a half-dozen years back, rather than upon reading any of the actual papers) doesn’t actually comport with the original Yale response that Searle dubbed “Systems.” I dunno.
ETA: Looking at the Wikipedia page on TCR, it may be that the response I was thinking about is really called the
And chemistry doesn’t add anything to the solution of protein folding (compared to computation) except speed.
Right. Thanks. I agree with such AI folk, fwiw. (You may remember that from old arguments at Analytic.)
Whereas it raises my pragmatism hackles to assert that this can be done “in principle” if we have no idea how it could be done.
Not necessarily. Even if I’m right in thinking that reduction is unlikely to be possible, that doesn’t translate into any specific ontological claims. We would want to do some ontology in order to explain why reduction is unlikely to succeed, of course.
But that could just as much be ontological claims about how the human mind represents reality as it is claims about how reality is. Specifically, one could simply hold that the sciences are not unified (or even unifiable) because of the constraints in the human mind for representing physical, biological, and psychological domains. Put otherwise, one might think that there are deep-rooted evolutionary reasons for why folk physics (causation), folk biology (teleology), and folk psychology (intentionality) are ineliminable from our cognitive architecture, no matter how much science amplifies and refines them.
Of course — if one were committed to the unity of science. But the proponent of autopoiesis certainly does not insist that living things differ from non-living things by virtue of some constituent! That’s not the view at all. Rather, the view is that the special properties, however causally realized or implemented, pose an obstacle to intertheoretic reduction, because reduction is about understanding one set of properties in terms of another, more ‘basic’ set of properties.
Maybe you also had in mind Ned Block’s Chinese Brain argument against functionalism: Excerpt Here (PDF).
I took the connectionism stuff to be about what type of computing architecture could be used in the robot reply or, what likely amounts to the same thing, what type of architecture(s) are needed to scientifically explain how our brains work. I see Barsalou’s perceptual symbol approach as an interesting third alternative (to the computation-on-abstract-symbols approach of GOFAI and to some versions of connectionism). His approach says (very roughly!) that brain representations are lossy recordings of some past perception, with the computations (e.g., concepts) being simulations involving such representations. By “computations”, I don’t necessarily mean what happens in GOFAI programs.
I’d be happy with that, if there were interest from others here. And I certainly won’t be offended if there isn’t.
That would be fine. Google’s definition works quite well for my purposes here.
keiths:
Bruce:
That’s right. The human is operating syntactically with respect to the Chinese symbols, because he doesn’t know their meanings and can’t take them into account. He can only follow the purely syntactic rules. The higher-level meanings exist at the system level and are irrelevant to the human’s syntactic processing.
Yes. The meaning of the Chinese symbols is found at the system level, not at the human level. The human is operating syntactically with respect to the Chinese meanings.
However, in reading and interpreting the rules, which are expressed in English, the human is taking meaning into account — the meaning of the English words. His processing is semantic with respect to the English words, but syntactic with respect to the Chinese symbols.
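For what it’s worth, the purely syntactic part of that picture can be sketched in a few lines. The rule table and symbols below are invented stand-ins; a real Chinese Room rulebook would be vastly larger, but the operator’s relation to it would be the same:

```python
# A stand-in rule table: uninterpreted input strings paired with
# uninterpreted output strings.
RULES = {
    "你好吗": "我很好",    # the operator never learns that these mean
    "你是谁": "我是程序",  # "how are you?", "who are you?", etc.
}

def room(input_symbols: str) -> str:
    # Pure shape-matching: look up the input string and emit the
    # paired string.  No step here consults what any symbol means.
    return RULES.get(input_symbols, "请再说一遍")

print(room("你好吗"))  # a fluent-looking reply, produced with no grasp of meaning
```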
KN:
keiths:
Bruce:
If the higher-level result were explained by the higher-level laws, then yes, the lower-level result would be explained by the lower-level laws, provided that the reduction was successful.
keiths:
KN:
But we do know how it could be done in principle. Autopoiesis is just a name for what happens when matter is placed in a self-maintaining arrangement. No new laws are needed; it’s just an aggregation of particles following the laws of physics, just as any other aggregation would. It can be studied and analyzed on that basis.
Reductionism has been fabulously successful so far, so one needs a pretty good reason to assert that it will fail when applied to teleology or intentionality.
keiths:
KN:
I guess I don’t see why that should pose an obstacle, but perhaps that’s because I don’t believe in original/intrinsic intentionality. Since you do believe in O/I intentionality, are you arguing that reduction can’t succeed because O/I intentionality cannot be reduced?
If so, that’s a double-edged sword. If O/I intentionality can’t be reduced to physics, then it can’t be built up from pure physics either. You have to explain how it arises. To merely assume that it arises nonreductively isn’t enough, because then you are essentially assuming its irreducibility instead of demonstrating it.
Reductionism implies you do not need a new category of physical law to explain emergent phenomena.
I did hear him say in a recent (2012) talk that he is now a “pragmatic realist” about the folk psychology concepts covered in “Real Patterns”. I don’t know if that means he would go along with L&R’s attempt to make his rough ideas precise.
KN:
I noticed your posts on this did not use the word “representation” for somatic intentionality (or at least I did not see it on a quick re-read). And the above quote implies (to me) that you don’t find that a useful concept with respect to that type of intentionality.
Am I reading you correctly? Should I wait for the book for the details?
Humans have capabilities that chimps do not have. Are humans categorically different from mammals? From animals? From living things?
The point of assigning categories is to make useful comparisons.
Darwin made a useful comparison between Adam Smith’s invisible hand and biological evolution. I see a useful comparison between thought and evolution. I see large differences in the substrate, but I and large numbers of computer scientists see something useful in the concept of variation and feedback.
I don’t use the term “representation” in my book because I wanted to avoid getting side-tracked in those debates.
Within cognitive science, the sort of view I endorse is usually regarded as anti-representational (for a whole host of reasons) and in philosophy of language, the sort of view I endorse is usually regarded as anti-representational (for a whole host of completely different reasons). For the cognitive science story, there’s Chemero’s Radical Embodied Cognitive Science; for the philosophy of language side, there’s Brandom’s Articulating Reasons.
At some point I will need to sit myself down and work through this stuff. I didn’t do that in the book and so I avoid the term “representation” altogether.
fwiw, I struggled with the term ‘representationalism’ when working on my book. Not only does it mean different things to different people, it can mean OPPOSITE things (like ‘virtual’). So I shifted to ‘representationism’, which I got from Block (who opposes it). Not sure it helped, though, and it’s in the freaking title!
KN,
If posting about it helps you to work through it, I’d welcome such posts.
With those views of cognitive science and philosophy of language which don’t use representations, I don’t understand what “derived intentionality” could be as something separate from one of the two types of original intentionality.
The only thing that occurred to me was to say derived intentionality was the same thing as taking the intentional stance to a non-living entity when its actions imitated those actions that living entities would do as part of exhibiting intentionality.
If it is something different from that, can you give me an example of derived intentionality and how it differs from either type of original intentionality?
You’ve mentioned Chemero before and I’d googled him. It does seem that this extreme point of view is far from the cognitive science mainstream. Now there are psychologists who support that research program (blog). Still, it seems to me that you are backing a long shot, so to speak, as a dependency in your philosophical program.
But hey, whatever it takes to sell books, right? (insert smiley here)
Keith:
“Explained” is a slippery term, so let me try to give a thought experiment as a way of understanding what you might mean by it.
Suppose evolutionary theory has been reduced to Quantum Mechanics. Consider a physicist who has a deep knowledge of QM but knows nothing of biology nor of any of the inter-theoretic translations that allow evolution to be expressed in QM.
Now consider the type of fitness measuring work Steve S has outlined in another thread: looking at genotype frequencies over time in a population to measure fitness and testing whether drift is sufficient statistically to explain them. Further, assume his research program also includes trying to understand what determines such changes if they turn out to be unexplainable by drift (eg to explain what parts of the genotype NS is operating on in that case).
Given the reduction of biology to physics, it seems to me that the original genotype frequencies would have to be specified somehow as a quantum state; there would have to be some kind of Schrödinger-equation way of specifying how that state would evolve in time; then a way of conducting a measurement which somehow yielded what a biologist (but not the physicist!) would call genotype frequencies after that time; and then some way of specifying in QM the statistical hypothesis of drift versus NS. Further, that physicist would then be able to describe a research program solely in QM which would accomplish the same goals as Steve S.
Is that what you mean by saying physics can in principle replace biology for explanation?
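To make the biology-level half of that thought experiment concrete, here is a rough sketch of the kind of simulation a drift-vs-selection research program might rest on. The model is a deliberately simplified allele-level Wright-Fisher process, and the population size, generation count, and selection coefficient are illustrative assumptions, not taken from any real study:

```python
import random

def wright_fisher(p0, n_diploid, generations, s=0.0):
    """Trajectory of an allele frequency under drift, with optional
    selection coefficient s (simplified allele-level fitness 1+s vs 1)."""
    p = p0
    traj = [p]
    for _ in range(generations):
        # Selection (if any) shifts the sampling probability...
        w_bar = p * (1 + s) + (1 - p)
        p_sel = p * (1 + s) / w_bar
        # ...then drift acts: binomial sampling of 2N gene copies.
        copies = sum(random.random() < p_sel for _ in range(2 * n_diploid))
        p = copies / (2 * n_diploid)
        traj.append(p)
    return traj

random.seed(1)
neutral = wright_fisher(0.5, n_diploid=500, generations=50)           # drift only
selected = wright_fisher(0.5, n_diploid=500, generations=50, s=0.05)  # drift + NS
# Under drift alone the per-generation variance is roughly p(1-p)/(2N);
# a sustained directional trend beyond that is the signature of selection.
```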
Physics can’t derive the properties of water from first principles.
That’s probably a wise move.
From a computer scientist’s point of view, information can only exist by virtue of the physical representations in the signals that transmit that information. So I see anti-representationalists denying that there are representations yet embracing the use of information.
I can only conclude that “representation” is a very confusing word that is used in many incompatible ways.
Well, that depends on what you allow “in principle” to mean. Possibly “in principle” physics could derive the chemical properties of H2O from first principles?
I agree it seems very unlikely. But who knows what quantum computation and hence simulation may let those wily physicists do, if quantum computers can be built with enough qubits?
Reminds me of a book I heard of by a hardware engineer denying the value of software since running software is just some hardware configuration in a state space with the appropriate dimensions.
And, as a sequel, some physicist is supposed to have pooh-poohed circuit design because, after all, hardware in operation is just the evolution of the quantum state of some localized part of the universe.
Unfortunately, I cannot seem to locate either book with Google.
A more general rule would be that you can’t derive all possible combinations from rules. Chess would be an example of a system with just a few pieces and a few rules, but it taxes our most powerful computers to their limits. Add more pieces and rules, and you would tax any possible computer.
I don’t know exactly what it is that makes protein folding such a difficult computational task, but I suspect it’s not entirely unlike chess in requirements. A computer that can win at chess is not necessarily exhausting the space of all winning games. There is no way to ensure that a winning strategy is the best possible strategy, or the only winning strategy.
With protein folding, there can be alternate solutions, and current programs do not guarantee they produce the best or only solution.
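The chess point can be put in back-of-envelope numbers. The branching factor (~35 legal moves per position) and game length (~80 plies) are the commonly quoted rough figures behind Shannon’s estimate; nothing here depends on their exact values:

```python
def tree_size(branching_factor, depth):
    """Crude estimate of positions in a game tree: b ** d."""
    return branching_factor ** depth

assert tree_size(2, 3) == 8  # sanity check on a tiny tree

# Rough chess figures: ~35 moves per position over ~80 plies.
positions = tree_size(35, 80)
print(positions > 10**120)  # True: far beyond any exhaustive search
```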
Regarding water, we are still finding new configurations.
It has been brought to my attention that the price of the book is actually not that steep, compared with what has become normal for publishers these days. (Compare various hardbacks in philosophy from different publishers and you’ll see what I mean.) And in fairness, this particular publisher has been very good to me; I appreciate what they’ve done and I’ve genuinely enjoyed working with them.
I would also like to point out (which I did not actually know, though I should have) that the Introduction and Index are available for free from the website! There is also a “recommend to your librarian” link which I warmly recommend, whether you’re an academic or not. Libraries are fantastic.
petrushka,
Right. If reductionism holds, then any emergent properties are only weakly emergent and thus explicable by the lower-level laws.
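A standard toy illustration of weak emergence is Conway’s Game of Life: a glider “moves,” but that higher-level fact is fully fixed by the cell-level update rule, with no new laws needed. A minimal sketch:

```python
from collections import Counter

def step(live):
    """One Game of Life update; live is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# The higher-level fact "the glider moved one cell down-right" is
# entirely derivable from the low-level rule.
shifted = {(x + 1, y + 1) for (x, y) in glider}
print(state == shifted)  # True
```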
For various definitions of explicable. Hindsight, for example.
petrushka,
That’s why I’m careful to separate the question of reducibility in principle from the question of reducibility in practice. The former is a question about the nature of the sciences, while the latter is primarily a question about the capabilities of brains and computers.
They’re both interesting questions, but reducibility in principle is what I had in mind when starting this thread. If in-principle reduction obtains, then any emergent phenomena are only weakly emergent.