The subject of obscure writing came up on another thread, and with Steven Pinker’s new book on writing coming out next week, now is a good time for a thread on the topic.
Obscure writing has its place (Finnegans Wake and The Sound and the Fury, for example), but it is usually annoying and often completely unnecessary. Here's a funny clip in which John Searle laments the prevalence of obscurantism among continental philosophers:
John Searle – Foucault and Bourdieu on continental obscurantism
When is obscure prose appropriate or useful? When is it annoying or harmful? Who are the worst offenders? Feel free to share examples of annoyingly obscure prose.
Why?
I’m an empiricist of sorts. That is, I agree with Locke’s view that we gain knowledge by learning. But I think the philosophy of empiricism was already going wrong by the time of Hume.
I mostly disagree with Fodor. But he is right about one thing. The philosophy of empiricism, as presented in the literature, does not adequately account for concept learning. Fodor’s solution is his nativism (innate concepts). My solution is to look for a different account of learning.
There is something arbitrary about concepts. We can see this by comparing conceptualizations from different languages. If individuals acquire concepts in a learning process, we should expect some arbitrariness. So we should not expect a common mechanism.
Based on my mathematical perspective:
A syntactic engine is what you get when you use logic.
A semantic engine is what you get when you use geometry.
With logic, you start with representations (i.e. syntax), and form different representations based only on form.
With geometry, you start with a world and come up with clever constructions that enable you to form useful representations of the world.
Incidentally, the social constructionists are right when they see scientists doing construction. They are wrong when they conclude that scientists are constructing the world. Rather, the scientists are constructing useful ways of representing the world.
With the syntactic/semantic description above, I'm inclined to say that the brain (particularly the perceptual system) mainly does the semantic work while the mind (or our thoughts) does much of the logic. If you want to insist that thought is also done by the brain, then I would put that in the motor control part of the brain rather than in the perceptual system, though of course there is a lot of interdependence between the two.
KN,
I’m surprised. You’ve expressed interest in the question of whether intentionality can be naturalized, and the issues raised by Putnam seem relevant.
I expect common mechanisms because of evolution and genetics and developmental biology. I am referring to the physical mechanisms which let people learn concepts and use/generate concepts appropriate to their situation.
The commonality in concepts arises because people have to successfully live and communicate in a society, e.g. they tend to use the same words to describe the same situations. I don't mean each person's concepts are identical, just common enough to be successful in his or her society. Of course, the most commonality would be with people who speak the same language and grew up in the same environment as you. But the fact that we can translate languages, even approximately, tells me that there is something about concepts that transcends language. I think it has to do with the common learning, perceptual, and motor mechanisms of people, and the common needs of all people in how they survive as social animals.
Fodor’s concepts may be aligned with traditional AI, but not with some of the other research paradigms for understanding concepts and the brain, like connectionism and perceptual/simulation theories. (I am not sure if dynamic system theory deals with concepts in its current form).
Your use of the word "geometry" in these exchanges is something I have not seen elsewhere, but as best as I can tell, it is similar to the perceptual-system paradigm of concepts. That school sees concepts as represented or generated using neural configurations similar to those produced by perception, and not (as for Fodor and GOFAI) by abstract (mental) symbols.
It depends on how broadly you mean “mechanism”. I normally take it to mean something fairly specific.
We all categorize. But I expect that we all categorize differently.
With that “appropriate to their situation”, you make my point. We all have different situations.
My take is that the need to perceive, and thus the need to categorize, is prior to our acquiring the ability to perceive that we are part of a community. So we must initially develop concepts and ways of conceptualizing before we join the community. Once we are part of a community, it is to our advantage that we attempt to align our concepts with those of others in the community. But the alignment is unlikely to ever be complete. There is generally no external standard to which we can calibrate our conceptualization. We have only mutual calibration between members of the community. The way in which meanings drift over time seems to support this view.
No, it doesn't. One can translate, perhaps imperfectly, between very different conceptualizations. The something that transcends language is that there is an actual reality that we can refer to in our language use. (Cue a reference to Quine's "gavagai" argument).
And, by the way, I see concepts as something analogous to coordinate axes in a coordinate system used for representing. In particular, I see concepts as part of the representation system and not themselves as representations.
Neil,
The drift of meanings over time isn’t due to a lack of external referents. Think of “fantastic”, which is in the midst of a transition from meaning “fanciful” to meaning “extremely good”. It’s not that either of those meanings lacks external referents, but the meaning has shifted nevertheless.
It’s not too hard to see how this happens. Consider that most of our vocabulary is acquired not via definitions, but rather from observing how other people use particular words.
Suppose someone sees a movie that is both fanciful and excellent. She reports that “it was fantastic”, and goes on to describe it in glowing terms. It’s easy to see how her much younger sister would get the idea that “fantastic” means “really good”, and go on to use the word that way.
Something similar happened with “awesome” and “terrific”.
Bruce:
Bruce is right. For example, the persistence of objects is a concept, but it is available to children long before they have the language in which to express it.
I’m not so sure object-persistence is a conceptual ability, though it is certainly an ability! If there’s a distinction between sensorimotor abilities generally and their culturally-specific elaboration into discursive practices, then there’s a question as to whether we classify “concepts” as falling on the sensorimotor side as well, or only on the discursive practices side, of that distinction.
One way to focus the mind on these questions is by asking, “do non-linguistic animals have thoughts and concepts?”
(This might be a bit different from asking, “are their sensorimotor abilities dynamically coupled to their organism-specific affordances?”, and that in turn might be a bit different from asking, “do their neurophysiological states function as map-like representations of their motivationally salient environments?”.)
Tomasello (2014) argues at length that great apes have concepts, that they think, and indeed can reason in causal and intentional domains. As he sees it, that’s what has to be at work already in order for language and culture to evolve. I’m thoroughly convinced that he’s right, but there could be nuanced criticisms of his work that I don’t know about.
I mentioned a lack of external standards. That is not at all the same as external referents. I also mentioned “that there is an actual reality” which you might reasonably have read as saying that there are external referents.
Yes, they do (in my opinion). I don't doubt that language adds richness to thoughts, but I also don't doubt that thought can exist without language.
I would be inclined to say that all mammals have thoughts. And probably birds too, though that is not as obvious. I really don’t know how to judge this for fish or insects.
The vision system of the brain is a standard example.
When scientists conduct neuroscience or psychological studies with many subjects, they don't report their results as "Subject 1's brain works this way, subject 2's brain works this way, …". Rather, they build models of how THE human brain works. That is why I think there is a common mechanism.
Further confirmation of common mechanisms comes from innate concepts and universality of some concepts as studied by developmental psychology and anthropology.
Well, maybe those four words taken out of context make your point, but my whole post says something different: essentially, that different individual categories do not mean disjoint categories.
Experience with feral children makes me doubt that the processes can be separated that way. Instead, I think the evidence from extreme cases like feral children, as well as from studying normal children as they mature, is that we can develop our concepts only by being part of a family and a community at the same time.
Our common genetic heritage and the social nature of humans is part of that actual reality. You have to allow for that in understanding why and how translation works.
Maybe simply mentioning Quine’s armchair approach is enough for you, but I need an actual argument. Let me try a few from how I understand him:
#1: We cannot be sure of what terms in a foreign language refer to by spending a day with a native speaker randomly pointing to features of the flora or fauna. No doubt this is true. That is why the anthropologists who do this work in the real world, and not from armchairs, spend months or even years living within a culture and becoming successful participants in that culture as part of being able to translate the language.
#2. We need a shared system of beliefs and concepts in order to make translation possible. No doubt. The fact that we can do successful translations is evidence that such a shared system exists.
#3. Even so, there may be errors or words that cannot be translated, which means concepts are not shared. Sure, all knowledge is fallible. That does not mean we cannot be reasonably certain of many things. And the existence of untranslatable concepts does not show that there are no translatable ones. Further, since there is likely a dosage effect here (the more different the cultures, the more such hard-to-translate terms there are), it gives evidence for the existence of common concepts within a culture and between similar cultures.
#4. Translation is an underdetermined process; in theory there are always alternatives, which implies that we cannot be sure we have the same shared concepts either. In theory that may be true, but in reality there are a helluva lot of translators employed at the UN, the EU, in treaty negotiations, etc. So the money says underdetermination is not a real issue. No doubt unintentional errors are made. Further, sometimes intentional ambiguities are inserted: this proves a sophisticated, simultaneous understanding of the concepts of both sides.
Of course, you are entitled to put forward your own pet theories. I personally try to stick with currently active research paradigms.
But then I don’t have a blog title to live up to.
For a fascinating series of novels on the difficulty of understanding an alien entity, I highly recommend Jeff Vandermeer’s Southern Reach trilogy published this year.
This post is very helpful and aligns with my views.
Here is an example of a cosmologist asking for that kind of help in “Philosophy of Cosmology”
No, that is not correct. Neuroscience is mostly based on intelligent design. That’s “id”, not “ID”. They design what they think would be a way of achieving vision, then they attempt to interpret the neural evidence in terms of that design. So the “mechanisms” actually come from their own designs.
As best I can tell, we do not actually know how vision works. J.J. Gibson’s theory of vision is radically different from that of David Marr.
I’m not understanding the point there. Who said anything about “disjoint”? The issue is with the commensurability of different ways of categorizing.
I don’t know how you are reading that. I see the evidence from feral children as supporting my view.
Our conceptualizations are structured. As you build concepts on top of concepts, those at the core of the structure become fixed because later concepts depend on them. What I see happening with feral children, is that they are introduced to a community only after some of their core concepts have become fixed, so have lost their plasticity. That’s why they have difficulty integrating well into communities.
I was merely trying to give you some context. I’m not a disciple of Quine (I find him hard to read). Most of my own views come from my attempts to understand the problems that a cognitive system must solve, and to find plausible mechanisms that could solve them.
Perhaps that distinction is one of the “deep conceptual muddles” that KN mentions? Teed Rockwell’s ideas (mentioned by KN) seem to be along that line. Don’t let the book summary put you off — read some of his posts. Not that I agree with some of the weirder things he says…
“A man will be imprisoned in a room with a door that’s unlocked and opens inwards; as long as it does not occur to him to pull rather than push.”
― Ludwig Wittgenstein
Sure, they have to theorize a model. Then they check it with experiments from various angles, as do other scientists. If the whole scientific process completes with a consensus in the relevant community, then it is valid to claim that the model is the way the brain works.
The vision model has moved well beyond Marr. I just gave it as an example of a specific mechanism because you seemed to not understand what I was getting at. It is specific because scientists use it to do experiments. Whether the eventual consensus model will be based on it or something else is not relevant to my point in using it.
I understood you to be saying that we first build conceptualizations, then we participate in society. If that were the case, feral children would not be different from normal children. Another way of making the point, under a plasticity approach, is that the context in which the plasticity exists matters.
I have to admit, I find it strange that you’d drop a one sentence reference to Quine into a post and then say you don’t even agree with him.
keiths:
Bruce:
It doesn’t seem like it. For instance, consider a self-driving car out by itself on a mountain road. It comes around a corner, sees a large rock that has just fallen onto the road, and brakes. It slowly drives onto the inner shoulder to get around the rock, then returns to the pavement and proceeds on its way.
I think we would agree that the self-driving car isn’t conscious. Yet there are data structures in that car’s memory that have meanings, like “there’s an object in the road at this relative position”, “the object is large”, and “the object is not moving”. No human is conscious of that rock, but the car contains structures that mean something about it.
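For concreteness, here's a toy sketch in Python of the kind of structure I have in mind. The class and field names are purely illustrative and are not taken from any real autonomous-driving software:

```python
# A hypothetical, purely illustrative structure; nothing here comes from a
# real self-driving-car codebase.
from dataclasses import dataclass

@dataclass
class PerceivedObject:
    kind: str          # what the perception system classified it as
    distance_m: float  # how far ahead of the car it is, in metres
    lateral_m: float   # offset from the lane centre, in metres
    size: str          # "small", "medium", or "large"
    moving: bool       # whether the object is being tracked as moving

# This instance "means" something about the fallen rock, even though
# nothing in the car is conscious of it.
rock = PerceivedObject(kind="rock", distance_m=42.0, lateral_m=0.3,
                       size="large", moving=False)
```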
Are you advocating truth by consensus?
There’s a reason that I prefer to view theories as neither true nor false. The experiments can, at most, show whether the theory-laden data is consistent with the theory. The question of how well it explains vision has to come from outside the theory.
My main point was that concepts are not genetic, not innate, and vary between people. And the evidence from feral children seems to support that.
That’s a bit harsh. I understand Quine to have been deeply skeptical about meaning. I don’t have the link nearby, but somewhere there’s a video where Quine explains this. He agrees that we have meaning, seen as a mass term. He is skeptical of meanings, seen as objects or things. I actually agree with a lot of that. To say that I am not a disciple of Quine is not to say that I disagree with everything that he writes.
My views on meaning do not originate with Quine’s writings. And I don’t like that Quine casts skepticism on the traditional view of meaning, but does not suggest an alternative as best I can tell.
Keith:
I posted that at the time to be provocative, although why I thought then that I'd have to be provocative to get a response from you escapes me for the moment.
I have no idea what KN actually meant. Further, I don’t really know if Rockwell’s ideas should be read to say that an agent’s consciousness and intentionality are best explained together. I was simply riffing on Rockwell’s radical version of extended mind (“the behavioral field”).
The best two reasons for me posting that are that I needed a bit of fun after my long rant in reply to Neil and I thought you might be interested in Rockwell’s ideas.
I hope KN decides to post more about what he meant at some point. Or maybe his paper will cover that.
I think the consensus of the relevant scientific community is the way to find the best theory.
Truth? That is a matter of philosophy, I think.
As a still-hoping scientific realist, I think that the consensus theory is the best way of finding out some truth about the structure and/or entities in the real world.
But a scientific anti-realist could also interpret truth as a way of describing scientists accepting the theory as the best way of explaining and predicting, and acting as if it is true (in the realist sense). I mean this as a very rough approximation of van Fraassen's much more nuanced view.
Or one could be the type of scientific anti-realist who completely avoids the issue of truth and simply says the best theory means only that it is the most useful to explain and predict.
I do think best explanation is a valid way to describe a best theory. But I agree the justification comes from outside the theory; namely from the completion of the scientific process.
OK, I meant the feral stuff to refer to my understanding of your ideas on timing of acquiring concepts and integrating them with society’s.
I agree concepts vary. But I also think there can be a common core of a concept shared between some individuals. One reason is that science says the brain-implementation of concepts plays a role in driving behavior and people must align behavior to live in a society.
I don't have a good idea of how you mean "incommensurate" (from your preceding post), but I think I've been down similar roads with you enough to know I probably am not going to understand you better by further exchanges on the topic.
Sorry if it came across as harsh. I meant that I personally sometimes find the reasoning behind your posts hard to decode.
In any event, responding helped me improve my understanding of Quine.
I believe that it is not meaning he is skeptical of, but rather he thinks that meaning is solely determined by pragmatic interpretation of public behavior. There is no further fact of the matter in a person’s head. All we know about what is in the head is knowable from behavior.
But if a scientific model links brain-based mechanisms explaining concepts to how an agent behaves, then I think there is a case to be made for same behavior implying sameness of concepts to the extent needed to generate that same behavior.
A bonus would be if science could objectively measure the sameness of concepts based solely on observation of the corresponding brain in action and dealing with the world. I think that is possible. But not soon.
I’m reading those now myself. Along the same lines, I highly recommend Mieville’s Embassytown and Watts’ Blindsight. Watts has a sequel to Blindsight called Echopraxia that I’ll read right after I finish Southern Reach.
I feel as though I was asked to elaborate on Rockwell, or my take on Rockwell, or something like that. Not sure what was asked so I’m not clear on what you’d like me to do.
I have seen an African Gray parrot that exhibited approximately as much awareness as a dog. It acknowledged a specific person as its friend (need a better word here), and would wait for him the way a dog waits. On one occasion, the bird watched from a second story window and saw its owner arrive, at night, from a block away. I don’t think this was exceptional for the species.
One can only guess at what it feels like to be a parrot.
keiths:
KN:
I think your dichotomy is too restrictive. There’s a large middle ground between sensorimotor abilities and culture-specific concepts.
Object persistence isn’t a sensorimotor ability, because our sense data by themselves can’t tell us that objects persist when they are out of view. And it isn’t culture-specific by any stretch.
Do you think you can distinguish this sort of semantics from that of a thermostat “meaning” something about the weather, and/or from the turning on of an outboard engine’s “containing structures” that result in it cranking when someone tugs on a rope? Or do you think those, too, are semantical activities?
For myself, I think I’d probably try to divide up the sorts of things “meaning” could mean.
Another approach is to expand your consciousness, or at least the definition thereof.
In Tononi's theory of consciousness, the thermostat has a level of consciousness, although a rather low one. His theory is panpsychic in a limited sense. (I may be exaggerating a bit on the thermostat example, but only a bit) *
Did I mention the theory was the complex mathematical work of a neuroscientist, and not a mystic?
—————————————————————
* That’s a joke, son. Information theory humor, that is. — F.L.
My post was actually an off-kilter response to your post about the "deep conceptual muddle in the problem being posed" as studied by Dennett, the Churchlands, Searle, and Fodor.
If it is not already covered by the paper you mentioned, and if you have time, I'd be interested in learning more about what you mean.
I would guess it has something to do with thinking about mind/body/world in the wrong way, but I’d like to understand your take.
Thanks for the book recommendations.
I think corvids (crows, etc.) are smarter than dogs, at least for tool use and problem solving. Dogs are great at looking to us to tell them what to do when given a problem to solve; those bird species figure it out on their own.
A lot of panpsychists around these days, I guess–including Galen Strawson, who, as I mentioned somewhere, doesn’t do nearly as much for me as his dad. FWIW, I had my own dabble, back when I did my Ph.D. thesis on Spinoza. Read a lot of Fechner in those days too.
That seems to be answering the question by definition: since only people are known to have language, only people can have concepts.
Yet a quick look at Wikipedia on Animal Cognition shows that there is an ongoing research program on the use of concepts by animals.
So I agree with Keith’s other post regarding the need to have a more granular definition: an either/or split into language-based concepts or no concepts is not consistent with the research program of scientists addressing the issue.
I should clarify the "limited panpsychism" by saying that only objects whose complexity meets the mathematical standard of the theory would be considered conscious. People have pointed out to Tononi that there are some objects that meet that standard but which are considered inanimate. Tononi bites the bullet and says they have a limited consciousness.
So his theory does not fit the usual definition of panpsychism which says, I believe, that everything possesses consciousness to some extent.
BTW, thermostats don’t possess the right structure or anything close.
I am not sure about driverless cars though.
I don’t see how you inferred my answer to the question based on how I posed it. In fact, I have the opposite view: I think that non-human animals and pre-sapient infants do have concepts and thoughts. In fact, in that very same post where I posed the question, I signaled my vociferous agreement with Tomasello about great apes. And I see no reason to deny that all sorts of mammals and birds have concepts. I might be inclined to draw the line at reptiles, though.
Regarding my self-driving car example, walto asks:
In one sense, no. I think that meaning is present whenever one thing stands for another, and this expansive definition encompasses phenomena as dissimilar as novels and tree rings. I don’t object when someone says something like “the sun is high in the sky; that means it’s late morning or early afternoon”, and I wouldn’t feel the slightest compulsion to put scare quotes around the word “means” in that sentence. Whether in brains, novels, thermostats, tree rings, or outboard motors, something is standing for something else, and that counts as meaning in my book.
Another commonality among those phenomena is that they build semantics out of pure syntax. Everything is physical at root, and the behavior of the elementary fields and particles is independent of any meaning that accrues at higher levels of organization. The laws of physics are syntactic, not semantic.
Having said all that, there certainly are useful distinctions to be drawn between different kinds of meaning. Conscious meaning is usefully distinguished from unconscious meaning, and the meaning of a data structure within a self-driving car is usefully distinguished from the meaning of the position of a bimetallic strip in a simple thermostat. The behavior of the thermostat is stereotyped and rigid. It boils down to something like “if (strip position means ‘too hot’), then turn on”.
The behavior of the car is far more interesting. The car isn’t just considering the meaning of one data structure. It’s looking at many such structures, building up an entire model of its surroundings, and then plotting a very non-stereotyped course of action to achieve its goals within the context of that model.
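To make that contrast concrete, here's a rough Python sketch. Everything in it is made up for illustration, not drawn from any real thermostat or vehicle code: the thermostat reduces to one rigid rule over one reading, while the car folds many perceptual structures into a crude model and picks a course of action against it.

```python
# Hypothetical sketch of the contrast; none of this is from a real system.

def thermostat_step(strip_position: float, threshold: float = 25.0) -> bool:
    # One rigid rule over one reading: "too hot" means switch on.
    return strip_position > threshold

def car_step(perceived_objects: list, lane_width: float = 3.5) -> str:
    # Fold many structures into a simple model of the road ahead,
    # then choose a non-stereotyped course of action.
    blockers = [o for o in perceived_objects
                if o["distance_m"] < 50 and abs(o["lateral_m"]) < lane_width / 2]
    if not blockers:
        return "proceed"
    shoulder_clear = all(o["lateral_m"] > -lane_width for o in perceived_objects)
    return "brake, then pass on the shoulder" if shoulder_clear else "stop and wait"

# The fallen rock from the example above, as one such structure.
rock = {"kind": "rock", "distance_m": 42.0, "lateral_m": 0.3, "moving": False}
print(thermostat_step(30.0))  # True: stereotyped and rigid
print(car_step([rock]))       # "brake, then pass on the shoulder"
```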
I’m guessing that those who don’t think that semantics is reducible to syntax are talking about meaning in the sense that tree rings would mean (or refer to) years even if no think thing ever existed.
That should read: “I’m guessing that those who don’t think that semantics is reducible to syntax are NOT talking about meaning in the sense that tree rings would mean (or refer to) years, even if no THINKING thing ever existed.”
Sorry about that.
walto,
Yes, that rewording is clearer.
The tree rings mean years (in some sense). But they mean years to us. I doubt that they mean anything to the tree. If a tree could think, it would probably think of the tree rings as just dead skeletal structure that provides mechanical support.
Those of us who think that interaction with the world is important to meaning might also suggest that tree rings are not syntax, but rather are a remnant of prior interaction.
What about this sequence:
1. A representation like tree rings and age, which is a consequence of natural law.
2. A representation implemented externally by an agent to serve some purpose, e.g. "X loves Y" carved on a tree or dogs urinating on trees.
3. A representation within the agent's body/brain which serves that agent's purposes. The driverless car might fit here.
4. The representations within an agent which must compete for survival in the world. It must feed itself as needed to keep running, repair and/or reproduce itself, and defend itself. Assume the agent also has all the sensorimotor capabilities needed to do so. For maintaining homeostasis (i.e. feeding and repair), it must (ETA: or might?) have representations of its internal state as well as of the world outside the agent.
5. The representations of an agent which can conceive of possible world situations and plan its actions to account for their advantages or disadvantages to the agent.
Would consciousness be a necessary outcome of being at one these points? Or would it depend on something else? If so, what?
ETA: I think the answer is that consciousness would be a property of 5 for sure, and possibly some cases of 4. But drawing the line would also require understanding everything needed to achieve those points. For example, does surviving in the world require global integration of multiple representations of current and past world situations from various senses and memory? Does it require focusing attention on only a subset of the sensorimotor input most relevant to the agent's survival?
Sorry for missing your point. It is true that I did not focus enough on the last sentence of your point.
When you said
One way to focus the mind on these questions is by asking, “do non-linguistic animals have thoughts and concepts?”
I took it as posing an either/or condition in the sense of "how CAN it be possible to have concepts without language?" and incorrectly read into that an implication that it was doubtful they could.
I understand now you meant it to mean “HOW can it be possible to have concepts without language”.
I am not sure if plants and microorganisms can be said to have representations of the state of the world or of their internal state, but they compete and survive. So maybe there needs to be an extra category to allow for creatures that compete and survive without internal representations.
I don’t think they could be conscious.
BruceS,
I’d think that (2) requires intentionality–or did at some point. Not sure about “consciousness”–a lot of “cogitating” “meaning” “referring”–semantics generally–seems to go on UNconsciously.
BruceS,
No worries! In any event, here’s the best answer I’ve yet been able to come up with — from a paper I published a few years ago:
———————————————————————–
It is often thought that if animals do not judge, then they lack concepts as such, and since McDowell denies non-conceptual content, he has sometimes been interpreted as having no way of understanding, or even acknowledging, animal mentality at all. Yet since McDowell regards sentient animals as genuine (and not merely "as if") semantic engines, it does not seem plausible that we can construe them as entirely shorn of conceptual capacities, since it does not seem a hopeful line of thought to stipulate that there is semantic yet non-conceptual content. At the same time, the very idea of sapience, and the distinction between sapience and sentience, requires that the idea of judging play a central role in our account of sapience. Thus sentient animals are regarded as having concepts but not as judging. Is this coherent?
I suggest that it is coherent if one entertains the following line of thought. In the language that Frege taught us to speak, and which McDowell (along with Sellars and Brandom) accepts, we distinguish between the sense and the reference of judgments. On the classical picture, sense and reference are distinguished in terms of whether truth-value is preserved under substitution of synonyms for synonyms, in order to avoid ascribing contradictory beliefs to rational beings. If we regard sentients as using concepts but not forming judgments, then the sense/reference distinction does not apply to the semantic content we ascribe to them. The concepts of animals, at least certain kinds of animals (the so-called "higher animals"), have neither senses nor referents. Yet they count as genuine concepts because they allow for certain kinds of generalizations: an animal can respond similarly to different occasions of perceptual stimuli if it has a concept for classifying those stimuli as similar.
I was thinking a bit about this whole semantic engine/syntactical engine business. It struck me that from an enactivist perspective, Dennett could well be mistaken in thinking that brains are syntactical — in which case the whole problem is the result of a muddle.
Dennett is strongly sympathetic to functionalism; as he sees it, the brain is basically a computer. And if one endorses that view, then one will have to face the syntax-to-semantics problem ("content") and the question of where awareness comes from ("consciousness"). And that's just what Dennett has addressed in Content and Consciousness, The Intentional Stance, Consciousness Explained, and since.
But, enactively construed, the brain is not a computer; it is a biological organ, and in fact it is a constituent of a second-order autopoietic system (the organism) that is comprised of first-order autopoietic systems (cells). Certain aspects of its functioning can be modeled by certain kinds of computers, but that doesn’t mean that neuron firing rates are just syntactical any more than it means that convection cells are just syntactical because we can build a computer model of the storm.
So where is the semantics? Enactively construed, semantic content is not located in the brain but is realized through and across brain-body-environment interactions. And the brain is a part of the semantic engine, not a syntactic one at all. So there is no problem of how to get from syntax to semantics if we reject functionalism for enactivism.
Functionalism is also the background theory for Chalmers; Chalmers’ account is that functionalism can account for everything except qualia, whereas Dennett’s account is that if you can’t specify what would verify the occurrence of something, then you’re not really talking about anything real.
(In other words, the heart of the Chalmers-Dennett debate is the semantics of epistemology — what's the right way to pose questions of meaningfulness that are relevant for knowledge? Chalmers's post-Kripkean two-dimensional semantics leads in a completely different direction than Dennett's post-Quinean verificationism.)
Rockwell has a really excellent (I thought) assessment of the Chalmers-Dennett debate and proposes his own Deweyan/Sellarsian alternative (which is also quite closely aligned with enactivism, though Rockwell doesn't use that term).
I can’t figure out what that means. I find it not unlike the claim that DNA is just like computer code. More specifically, it’s a kind of un-useful reductionism. It’s a bit like saying that water is just like hydrogen and oxygen.
Brains are not like any kind of computer we have built, and we find it difficult to emulate any but the simplest brains. Function is emergent. Brains are like computers in the same way that chemical compounds are just like their constituents.
I find both the brain-as-computer and DNA-as-code metaphors deeply misleading and unhelpful. But it is worth taking a moment to realize why the brain-as-computer metaphor was a significant breakthrough at the time.
Before the cognitivist revolution in the 1960s, the dominant paradigm in empirical psychology was behaviorism. This treated the brain as a black box; nothing going on inside could be posited. This was perhaps an over-reaction to the introspectionism of the 19th century (think of William James). The introspectionists talked about mental contents, but only from the first-person perspective. The behaviorists objected that science deals with what is verifiable from the third-person or impersonal standpoint, and so said that we can't talk about mental contents at all.
The computer revolution made it safe for cognitivists to talk about mental contents, because we could now talk about them from the third-person standpoint. After all, the reasoning went, we talk about internal representations in computers in the third-person standpoint! So treating minds as computers made talk of “inner representations” scientifically respectable.
The heyday of cognitivism was the functionalist paradigm, according to which the mind is basically the “software” and the brain is the “hardware”. This solved (or seemed to solve) the mind/body problem that had perplexed everyone since Descartes invented it in the mid 17th century. It also meant that, just as we can study the software itself without worrying too much about the hardware — since the same programs can run on different machines — we could make psychology a respectable science without reducing psychology to biology. That’s the heart of the cognitive science revolution — the early Putnam, Fodor, and many others.
It was correct to treat the brain as a black box at a time when it was in fact a black box. Richard Feynman has described some kinds of science as akin to trying to discern the rules of a game by watching the play. When all you have is observed interactions, you build and test models.
The mind is still a black box, even though we have computers and MRIs and such. We cannot say the mind is a computer until we can build a computer that behaves like a brain. The usual description of the Turing Test is rather naive, but the principle is correct. We can say we understand minds when we can build something having behavior indistinguishable from human behavior.
My gut feeling, based only on intuition, is that even then we will not understand, because the only way we will build an artificial mind is to evolve one in silicon, and while we can observe and record the evolution, it will not explain what is going on and why it is a mind.
I just want to add the observation that recent developments in computer science are all geared toward building computers that can learn. The goal is to build learning hardware rather than to emulate learning on traditional computer hardware. IBM has a lot invested in this paradigm.
The problem at the moment is how you get a general-purpose learning device booted. It strikes me as the same kind of problem as the origin of life.
But machine learning doesn’t work very well.
So Dembski takes research from machine-learning theorists to show that evolution doesn’t work very well (and therefore ID). But evolution does work.
So there’s a problem with machine learning. The underlying problem is that epistemology does not work very well. Yet science works in spite of that, probably for the same reason that evolution works.
walto,
It seems to me that meaning is an evolutionary prerequisite for thinking, rather than vice-versa. Meaningless thoughts confer no selective advantage.
That’s pretty much the argument against computationalism.
Neil,
How so?