There’s a nice little discussion going on at Uncommon Descent (see here) about whether concepts are consistent with naturalism (broadly conceived). Here I want to say a bit about what theories of concepts seem to me to be most promising, and to what extent (if any) they are compatible with naturalism (broadly conceived).
The dominant position in philosophy of language treats concepts as representations: I have a concept of *dog* insofar as I am able to correctly represent all dogs as dogs. It is crucial that concepts have the right kind of generality — that I am able to classify all particular dogs as exemplifying the same general property — in order to properly credit me with having the concept. (If I only applied the term “dog” to my dog, it would be right to say that I don’t really have the concept *dog*.)
On the representationalist paradigm, rational thought has a bottom-up structure: terms are applied to particulars, terms are combined to form judgments about particulars, and judgments are combined to form arguments, explanations, and other forms of reasoning.
The contrary position — a fairly minor one, but with prominent and forceful advocates — is an inferentialist semantics: a concept is a node in an inferential nexus. My grasp of the concept *dog* consists in two distinct abilities: (1) being able to make correct inferences: “that is a dog, therefore it is not a cat; that is a dog, therefore it is an animal”, and so on, and (2) being able to correctly apply the concept to sensed particulars. But the conditions of correct application aren’t constitutive of the very meaning of *dog*: it is the inferential role that constitutes the sense of the concept.
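For concreteness, here is a minimal sketch of that picture in code. This is my own toy illustration, not anything from Sellars or Brandom: the `Concept` class, the "barks" test, and the sample links are all made-up assumptions, meant only to show a concept's content living in its inferential links plus a condition of application.

```python
# A toy model of the inferentialist picture (illustrative only):
# a concept is a node whose content is fixed by its inferential links
# to other concepts, plus (for empirical concepts) a condition of
# application to sensed particulars.

from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Concept:
    name: str
    entails: set = field(default_factory=set)    # e.g. "dog" -> {"animal", "mammal"}
    excludes: set = field(default_factory=set)   # e.g. "dog" -> {"cat"}
    applies_to: Callable[[Any], bool] = lambda x: False  # condition of application

# The concept's sense lives in these links, not in a word-to-object mapping.
dog = Concept(
    name="dog",
    entails={"animal", "mammal"},
    excludes={"cat"},
    applies_to=lambda x: getattr(x, "barks", False),  # made-up perceptual test
)

def good_inference(premise: Concept, conclusion: str) -> bool:
    """'That is a dog, therefore it is an animal' comes out good;
    'that is a dog, therefore it is a cat' comes out bad."""
    return conclusion in premise.entails and conclusion not in premise.excludes

print(good_inference(dog, "animal"))  # True
print(good_inference(dog, "cat"))     # False
```

On this toy picture, nothing about the node itself "points at" dogs; what the concept *is* consists in the inferences it licenses and blocks, with the application condition doing a separate job.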
One important feature of inferential semantics is that it goes hand-in-hand with normative pragmatics: the criteria of correct inference (and correct application, for empirical concepts) are intersubjective or social. What can tell me whether I’m using the term “dog” correctly? Only my fellow language-users! The layout of reality can certainly constrain how I apply concepts, but the layout of reality cannot itself tell me whether my concepts are correctly applied to it. (Whether this kind of pragmatism is inconsistent with objectivity is hotly contested. My own view is that it is, but this is controversial.)
The inferentialist view of concepts has a number of significant and wide-ranging implications. One is that concepts turn out to be “non-relational”, in the following sense: if concepts are nodes in an inferential nexus, then there is no mapping from the concept to the things it picks out. There is no direct relation from words to the world. The inferential nexus as a whole is put into play by being used to do things with objects (properties, relations, etc.).
Another important implication is that the familiar discourse of metaphysics — particulars, universals, generals, etc. — all gets ‘linguistified’ by being treated as metalinguistic sortals. They make explicit what we are doing when we classify good and bad inferences. The categorical structure of language does not represent the categorical structure of reality, if reality has a categorical structure at all.
There is a further question lurking in the background here concerning non-linguistic concepts and non-discursive thoughts. What I’ve sketched here is a theory of language, and it is an open question what kinds of concepts are found in non-language-using animals, and also what role non-linguistic concepts play in our own rational cognition. It is almost certainly the case that focusing solely on language will not give us a firm grip on what it is for something to count as a mind, and that we should build up to an account of a rational or sapient mind from an account of non-sapient, merely sentient minds. (For more on this, see “Thinking about the mind: an anti-linguistic turn”.)
Eventually, I suspect, we will need to abandon the belief that all concepts are of one kind. Instead we will need at least two different kinds of concepts: “simple concepts”, at work in the thoughts, beliefs, and desires of animals and infants, and “complex concepts,” at work in language.
Finally, as this bears on “naturalism”: I am not interested in defending naturalism. (Recently I’ve begun to have grave doubts as to whether naturalism is defensible.) I’m interested in figuring out the best account of concepts, and on the account presented here, “simple concepts” depend upon the properly functioning brains of living animals, and “complex concepts” depend upon both the properly functioning brains of living animals that are also members of a linguistic community. So I don’t think there is any route from our best theory of concepts to any ontological commitments to abstract objects or immaterial minds.
I guess you are referring to the UD thread that I have been commenting in.
People use the word “concept” in a variety of ways, so I’m never quite sure what they mean. In such a discussion, I attempt to use the word in the same way that the other parties are using it — if I can work out what that is. I guess I am using it in something approximating the inferentialist position.
I don’t doubt that StephenB sincerely believes that he is making a good argument. But I have been unable to find any meat on that bone.
I don’t doubt that StephenB is sincere, but I see it as one vast circular argument. He is trying to argue for the claim that minds are immaterial, but continually relies on that claim in his arguments for it.
The best I can make out, the argument seems to be something like this:
(1) we use categories such as “universal”, “sort,” “general,” and “particular” in our descriptions of the world;
(2) under a representationalist theory of concepts, particular concepts represent particular objects (this dog, and that dog, and that other dog . . .), general terms represent general objects (“dog-ness”), etc.
(3) everything material is a concrete particular;
(4) if the mind were material, then it itself would be a concrete particular;
(5) but a concrete particular could not even represent generals or abstracta.
(6) hence, since we do represent generals and abstracta, the mind cannot be material.
I don’t understand what the argument is that gets us to (5). It just seems to function in his argument as a “self-evident truth” (and you know what I think of “self-evident truths”). I also think there’s a bit of equivocation on (4) — from the fact that all material entities are concrete particulars, it doesn’t follow that all concrete particulars are material entities. The mind could be an immaterial concrete particular.
But rather than criticize him there, I’m more inclined to attack (2) — the representationalist theory of concepts. On the inferentialist view, the categories (“universal”, “particular”, “sort”, “kind”) are metalinguistic sortals — they are ways of saying what it is we are doing when we make inferences. So there’s no direct mapping from the categorical structure of language and thought to the categorical structure of reality (if reality has a categorical structure).
Indeed, at least if people are not able to present coherent definitions. What is a dog? What is a quale? What is a sortal? 😉
““simple concepts” depend upon the properly functioning brains of living animals, and “complex concepts” depend upon both the properly functioning brains of living animals that are also members of a linguistic community. So I don’t think there is any route from our best theory of concepts to any ontological commitments to abstract objects or immaterial minds.”
Yes, as far as we know the only beings that use concepts have functioning brains and live in a society that shares the concepts. That seems necessary, but you didn’t make the case that those are sufficient conditions for building a concept. Describing the theories, you said:
“On the representationalist paradigm, rational thought has a bottom-up structure: terms are applied to particulars, terms are combined to form judgments about particulars, and judgments are combined to form arguments, explanations, and other forms of reasoning.”
But there is no reference to how we get the terms to apply to any particular.
And also in the other theory:
“(1) being able to make correct inferences: “that is a dog, therefore it is not a cat; that is a dog, therefore it is an animal”, and so on, and (2) being able to correctly apply the concept to sensed particulars.”
How can I make the inference “that is a dog”?
Another point of conflict I see is: “What can tell me whether I’m using the term “dog” correctly? Only my fellow language-users!” Then the only way to know whether a concept is correct is by sharing it? I can’t find out by myself whether my concept is correct? What if 30% of the people I share a concept with say it is wrong, 30% say it is correct, and 40% do not know?
The only problem with doing philosophy by definitions is that a definition of a term won’t add to one’s understanding. It’ll only convey the understanding that one already has.
As I see it, the main problem with StephenB’s line of reasoning is that he doesn’t appreciate how categories are constructed. (Think of how neural nets work.) He wants categories to be permanent, absolute, invariant, unchanging. There doesn’t seem any room in his thinking for a pragmatist theory of categories (which needn’t be inferentialist) according to which categories are themselves mutable, fluid, and creative. And so he doesn’t appreciate how the categories are themselves revised over the course of inquiry. Although we bring our categories with us to the act of inquiry, they are not insulated from its consequences.
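To make the constructed-ness point concrete, here is a toy sketch of my own (a crude stand-in for the neural-net example, not anyone’s actual model): a nearest-centroid “category” whose boundary shifts as new exemplars arrive, so that the very same item can be reclassified as inquiry goes on. The data and names are invented for illustration.

```python
# Toy illustration (invented data): categories as revisable constructs.
# Each category is just a set of exemplars; its boundary is wherever the
# centroids happen to fall, and it moves as new exemplars come in.

class Category:
    def __init__(self, name):
        self.name = name
        self.exemplars = []

    def add(self, x):
        """Inquiry revises the category: each new exemplar shifts the centroid."""
        self.exemplars.append(x)

    def centroid(self):
        return sum(self.exemplars) / len(self.exemplars)

def classify(x, categories):
    """Assign x to whichever category's centroid it is nearest to."""
    return min(categories, key=lambda c: abs(x - c.centroid())).name

small, large = Category("small"), Category("large")
for v in (1.0, 2.0):
    small.add(v)
for v in (7.0, 9.0):
    large.add(v)

print(classify(5.0, [small, large]))  # "large": 5.0 is nearer 8.0 than 1.5

# More experience changes the boundary: the very same item gets reclassified.
small.add(4.0)
small.add(4.5)
print(classify(5.0, [small, large]))  # now "small"
```

Nothing here is permanent, absolute, or invariant: the category is whatever the accumulated exemplars make it, which is roughly the point I take StephenB to be missing.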
Ernst Mayr might have said the problem is essentialism.
The statement, “if it rains, the streets will be wet” is an interesting example of what I have in mind here.
The form of inference here is what Sellars and Brandom call a “material inference”, as distinct from the purely formal inferences that we deal with in logic. In formal inferences, the goodness or badness of the inference is strictly about form and not at all about content. In material inferences, the semantic content of the inference contributes to the goodness or badness of the inference.
Now, it would be a disaster if the world got no vote at all in what we say about it. (McDowell’s lovely phrase for this is “frictionless spinning in the void”.) So the question is, how to account for “friction,” such that the world gets some vote in what we say about it? How do the causal regularities and irregularities constrain our judgments such that “if it is raining, the streets will be wet” counts as a good material inference?
The key, I think, is embodied perception: it is through our embodied perception that our cognitive capacities mesh with the causal regularities and irregularities of the world. The basic problem with StephenB’s entire approach is that he wants to explain the relation between mind and world without taking the body into account. (This is part of his Platonism.) In his account, mind and world just line up in a way that is essentially magical. (Making God the guarantor of adequacy just pushes the magic up a level.)
In various things I’ve written here, I’ve inveighed against “the Myth of the Given,” and what this means has been, perhaps, less than clear. Here’s a slogan, then: the Myth is what happens whenever adoration of a mystery replaces explanation of a fact. And that is precisely what happens in the Platonic-Aristotelian-Thomistic tradition (of which StephenB, Kairosfocus, and Barry Arrington are proponents).
This is not to say that the alternatives — empiricism, naturalism, or “materialism” — are entirely free of the Myth. I think that empiricism is, indeed, exceedingly problematic just because it is, in the classical version, an instance of the Myth. And traditional naturalism avoids the Myth only by denying the facts that need explaining: intentionality, consciousness, and rationality.
Yes, I think that’s right. But Mayr’s point isn’t that there are no essences; it’s that Darwinism entails that species aren’t essences. He’s actually very good at showing that an anti-essentialist account of biological species is precisely the deep insight of Darwinism: that a species is just a population.
On the other hand, the anti-essentialist implications of Darwinism were drawn with extraordinary insight by the classical pragmatists, esp. Dewey. (Dewey’s lecture, “The Influence of Darwinism on Philosophy” is still very much worth reading.) There are similar anti-essentialist moves in Wittgenstein’s Philosophical Investigations, though at the level of language per se.
Blas,
First, “that is a dog” isn’t an inference; it’s an assertion. As for what makes it correct or incorrect, it should be clear that there are two different conditions, individually necessary and jointly sufficient: that the world be a certain way, and that the language be a certain way. But the link between world and language is perception.
That is, to be correct in asserting “that’s a dog!”, I need both to be in the right circumstances with regard to the animal in question (good enough lighting, close enough, etc.) to make the right perceptual discriminations, and to know how to use the word “dog” correctly. And the criteria of linguistic correctness are social. If I’ve been properly enculturated, I’ll have internalized the norms, so that my assertions are intelligible to my fellows. (There is no wholly private language.)
I would argue there are no “real” essences.
There are constructs, such as mathematics and formal logic, that have essences, but not so much physical entities.
I wouldn’t even go that far!