I was asked if I have any thoughts about Michael Egnor’s article “Why Aristotle and Aquinas?”.
I think that there are some pretty serious confusions here, about the history of modern philosophy and also about the relation between science and metaphysics.
Firstly, Egnor writes as if systematic philosophy begins with Aristotle and ends with Aquinas. I disagree with both claims. There’s no reason to think that comprehensive metaphysics ends with Aquinas — at least, one would need a really good argument for why Spinoza, Leibniz, Hegel, or Whitehead don’t make the cut. (Let alone non-Western metaphysics such as Madhyamaka Buddhism, etc.)
Secondly, Egnor conflates hylomorphism and realism about universals. These are different issues, and they need to be teased apart.
Hylomorphism says that for something to be a thing is for it to be a structured stuff: there is a structure that a thing has and there’s the stuff of which it’s made. So there’s the structuring aspect, which unifies something as a thing and makes it the kind of thing that it is, and then there’s the stuff which is held together by the structure.
Realism about universals says that concepts are part of the furniture of the world: the world has its own conceptual structure, and then what we need to do is figure out what that structure is. Inquiry is ultimately about bringing our own conceptual structures into alignment with the underlying conceptual structure of the world.
What holds these two ideas together, and makes them seem so wedded to one another, is the idea of essentialism: the idea that things have essences, or structures or forms that make it the kind of thing that it is, and which are also reflected or expressed in the concept of that thing.
(As an aside, I regard essentialism as both false — there aren’t any such things as essences — and also as evil — the belief in essences has legitimized millennia of atrocities.)
Now, I am (probably) a kind of nominalist. I don’t think that the world has any underlying conceptual structure. I think that conceptual structures are part of how complex organisms navigate their environments. I don’t think that the causal relationships between phenomena have the same underlying structure as the inferential relationships between thoughts. Egnor seems to think that nominalism is the Root of All Evil, and that just seems utterly baffling to me.
I can agree with Egnor that it is a profound error of much of modern thought that mind is conceived of as being independent of the world, so that the relationship between them is then a problem to be solved. And we do find the canonical version of this problem presented in Descartes. But this situation — what Jay Rosenberg calls “the Myth of the Mind Apart” — is really quite separate from whatever one’s views are about the nature of concepts.
It may be a contingent truth of the history of philosophy that the rise of mechanistic physics and nominalism about concepts rendered Thomism untenable. I think that Egnor simplifies the history in unhelpful ways. But Egnor seems to think that Darwinism and nominalism are part of the Highway to Hell, and I think he’s completely mistaken about that.
How do you get “the set of all dogs” to define “dog” without assuming that either things are, or are not in the set?
But prime numbers would work too.
I’ll note that I am not a fan of “extension” and “intension”.
Roughly speaking, the extensional definition of dog would be to define “dog” as the set of all dogs. Except that you cannot do it that way, because “dogs” has no meaning until after you have defined it. So you need a different way of specifying that set.
It is also ambiguous. Does it mean “the set of all dogs that currently exist”, or does it mean “the set of all dogs that have ever existed or will ever exist”? It seems to me that Walto is using the first of those, while you are using the second. If you are using the second of those, it is very hard to see how you could specify that set. Using the first version, just listing all of the dogs would work as a specification.
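The extensional/intensional contrast at issue here can be sketched in code. In this toy Python sketch (the names and the criterion are mine, purely illustrative, and the “criterion” is a stand-in for a real specification), an extensional definition just enumerates members, while an intensional one supplies a criterion that can classify animals never enumerated:

```python
# Extensional "definition": just enumerate the members.
# This only works for the "currently known" reading of the set.
dogs_extensional = {"Fido", "Rex", "Lassie"}

# Intensional "definition": membership is settled by criteria,
# so it can cover dogs not yet encountered (or not yet born).
def is_dog(animal: dict) -> bool:
    """Toy criterion standing in for a real specification."""
    return animal.get("species") == "Canis familiaris"

rex = {"name": "Rex", "species": "Canis familiaris"}
unfamiliar = {"name": "Bella", "species": "Canis familiaris"}

# The extension cannot classify an animal not on the list...
print("Bella" in dogs_extensional)   # False
# ...but the intension can.
print(is_dog(unfamiliar))            # True
```

This is only meant to make the contrast vivid; nothing in the thread turns on whether real concepts actually work like either of these.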
The nice thing about intensions is that they can be vague enough that one never has to fill in the details.
I don’t understand how the starting assumption matters. I don’t see how you can define any term without assuming that some things are part of the set and some things are not.
From a practical standpoint, I personally mean it as the set of all dogs that I am familiar with. That is why I said earlier in the thread that I suspect everyone’s definition of ‘dog’ is slightly different. While the members of the set will align for the most part between us, there may/will be questions about the extent of the set and where it transitions into a different set (e.g. wolf, coyote, jackal).
Unless I am mistaken, a definition need not be fully enumerative to be considered extensional.
Before you can say that some things are part of a set, and some things are not, you must already have things.
The basic issue is where do things come from. And they cannot come from merely defining sets of things, since such defining already presupposes things.
But then nothing can be an unfamiliar dog. How could that work?
I’ll agree that everyone’s meaning of “dog” is slightly different, which is to say that meaning is unavoidably subjective. As for everyone’s definition of “dog” — I’d say that most people don’t actually have a definition of “dog” other than “I recognize one when I see it.”
How does one define things that do not yet exist? The presupposition of things existing before they are defined seems obvious and trivial to me.
How do ‘meaning’ and ‘definition’ differ in any useful way? To me the basic issue seems to be how someone recognizes one when they see it. Is it by comparing the object (in this case a quadrupedal mammal) to an intensional definition, or by similarity to other particular objects already considered to be part of the set?
I don’t think it does work. I think that we must become familiar with (or at all events encounter) any object in order to classify it.
One specifies some criteria. And then one defines “dog” as anything which matches those criteria.
How does this “compare” actually work?
As I mentioned earlier in the thread, given the diversity in many sets to which we give a designation, I don’t see how the criteria can be defined broadly enough to capture all members while remaining useful.
Not to get off on too much of a tangent, but wouldn’t this issue also be a problem for a set with an obvious intensional definition like ‘even numbers’? You can’t specify that ‘even numbers’ are those which divide by 2 to give an integer without already having the concept of ‘2’ and ‘integer’ established.
It isn’t a problem. In practice, we already know about numbers and about the number 2 before we become concerned about even numbers. If you define them in terms of criteria, those criteria can make use of what is previously established.
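Neil’s point, that defining criteria may freely use whatever is previously established, is exactly how definitions bootstrap in code. A minimal sketch (my own illustration):

```python
# "Even number" is defined by a criterion that presupposes
# previously established concepts: the integers and the number 2.
def is_even(n: int) -> bool:
    return n % 2 == 0

# The criterion classifies members that were never enumerated.
print([n for n in range(10) if is_even(n)])  # [0, 2, 4, 6, 8]
```

The definition never lists the even numbers; it inherits the integers and 2 as primitives and adds one new criterion on top of them.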
I’m puzzled by this. It seems contrary to the view that you had previously expressed.
You now seem to be saying that “dog” refers to members of an existing set, and it is only up to us to define that set. This seems to make the gods the definers of what is a dog.
I thought you had previously argued that it was man who determined what is a dog. So there isn’t any “capture all members” problem.
My view: we come up with criteria (but not explicit criteria), and a dog is anything that fits those criteria. This is a pragmatic way of identifying what is a dog. And, as pragmatists, we can adjust our criteria from time to time, based on experience. With such adjustments, the criteria can become quite complex.
In a way, this is related to Wittgenstein’s argument on the impossibility of following a rule.
If I appeared to move away from that position, it was not intentional. I do think that man determines what a dog is.
I agree that criteria are drawn up, but they appear to me to be added post-hoc after we have already populated the set. If the criteria are to be the arbiters, why would they ever need to be adjusted?
Peer pressure.
We notice what other people call “dog”, and we want to fit in with the community. So we adjust our criteria so that what we consider a dog more closely matches other members of the community.
Both in the sense of peer pressure and in child education, that sounds plausible to me. But just to be clear, do you think the peer pressure causes us to change our intensional or extensional definition of ‘dog’? If we’re ‘noticing’ what other people call dog, that sounds awfully extensional.
RoyLT,
The concept of ‘dog’ cannot be purely extensional, and a couple of thought experiments will show why.
1. You’re familiar with the common breeds of dog. Now suppose you are presented with an odd mix, or with a dog of an exotic breed you’ve never seen before. You are told that the new animal is a dog, and you are not surprised. You were thinking the same thing yourself. Why? Because the concept is not purely extensional, and you can see what the new animal has in common with the dogs you are familiar with.
2. Someone presents a turnip to you and declares that it is a dog. You are surprised and inclined to disagree. But why? No one has explicitly told you that turnips are not dogs. If the concept were purely extensional, then merely designating a turnip as a dog would be enough to settle the issue.
To be honest, I’m a skeptic of the attempt to explain meaning in terms of “extension” and “intension”. I don’t think that works.
But that’s not quite what we are doing. We are noticing the behavior of other people in using “dog”, and seeing discrepancies between that and the way that we use “dog”. So we are adjusting our behavior to reduce that discrepancy.
I’m not sure what I’m missing, but that is not at all obvious to me. What are some examples of purely extensional concepts for comparison?
keiths:
RoyLT:
That’s exactly what it means for a set to be defined extensionally. You learn which things are designated as in the set and which are not, and that’s the extent of it (so to speak). Any other properties are irrelevant.
The set {volcanologist, teaspoon, hagiography}. Easy to list the elements, but I challenge you to specify that set intensionally.
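keiths’s set is the limiting case of a purely extensional definition: membership is settled only by the list itself, and no shared property is available. In code terms (a toy illustration of mine), nothing beyond membership tests can be asked of it:

```python
# A purely extensional set: the member list is the whole "definition".
arbitrary = frozenset({"volcanologist", "teaspoon", "hagiography"})

print("teaspoon" in arbitrary)  # True
print("dog" in arbitrary)       # False
# There is no criterion function to consult; any other properties
# of the elements are irrelevant to membership.
```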
Thanks for the clarification and the example.
If ‘dog’ cannot be a purely extensional definition, do you consider it purely intensional, a hybrid of the two (along the lines of what walto suggested), or a different type that I’m not familiar with?
I’m finally catching up on this discussion after a restful few weeks of Internet-light vacation.
One big issue that I’m not entirely clear on is the relation between nominalism and extensionalism.
Does nominalism (denying that abstracts and universals exist independently of our conceptual frameworks) entail extensionalism (the ‘meaning’ of a term is just whatever actual and/or possible entities are referred to by that term)?
It’s pretty clear to me that one can be an extensionalist without being a nominalist — abstracta and universals would be among the entities that a term refers to! But can one be a nominalist without being an extensionalist?
On the face of it, I don’t see why not: one can just say that terms have intensions as well as extensions, but still think that universals don’t have any existence independent of conceptual frameworks. (Of course there are universals internal to our conceptual frameworks!)
But are there as many conceptual frameworks as there are people? Could be, if by “conceptual framework” you simply mean “individual mind”, but if you also mean that each such individual mind contains its own independent internal set of universals, then no two people are talking about the same universals, right?
Therefore it seems that by “conceptual framework” you must mean not just the concepts of universals, but the sum total of concepts, of which some, by virtue of certain properties or ways of application, are designated as universals. Now, the thing is that such a “conceptual framework” is not particular to any individual person. Rather, people share them by talking, teaching and learning. One may entertain several conceptual frameworks at any time and favor one or some over others. Since this is so, in what sense can it be said that “conceptual frameworks” and the universals in them “don’t have any existence independent of conceptual frameworks”?
Some conceptual frameworks are better than others, are they not? If yes, then they are better in relation to something else, namely in application to the outside world. Insofar as a given conceptual framework with its universals is applicable to the outside world, universals must be referring to some thing in the outside world just as particular concepts like “dog” do. Or is there some good reason why universals “don’t have any existence independent of conceptual frameworks” whereas particular concepts must have some different sort of existence?
In my view, all concepts have the same sort of existence. No exception. But they have different sorts of relations to each other and to the actual world.
In defense of modest conceptualism.
Actually, a lot of people have told me that turnips are dogs. Cousins that just haven’t seen each other for a while.
They are called evolutionists.
If each mind had its own conceptual framework, communication would be impossible.
That’s right. And I’d also urge the indispensable role of rules or norms to the process of talking, teaching, and learning.
I do think that conceptual frameworks of the sort that are expressed in languages are necessarily shared, but I would have gotten to that conclusion by appealing to Wittgenstein’s private language argument. (The key moment in the argument, as I understand it, is that norms are essentially public; they can’t be reduced to conscious mental images.) But I see no way that concepts can be shared without a public language, which has rather interesting implications for a theory of non-human animal minds.
I tend to think that terms like “universal”, “particular”, “kind”, “general”, etc. are metalinguistic sortals: they are terms of a language that is about a language, and their function is to indicate where in a conceptual framework a term is to be found.
On my version of inferentialism, the meaning of any concept consists in the inferences that it licenses. The difference between particulars and universals amounts to a difference in the scope of the inferences that are licensed by the correct use of that term.
Yes, but here it’s important to slow down and ask: “applied how? and what are the criteria we use to determine which conceptual frameworks are better than others? and do we use the same criteria for all conceptual frameworks? or are there important differences in the criteria used to evaluate different kinds of conceptual frameworks (empirical, ethical, mathematical, aesthetic, etc.)?”
It’s at this point where I’d focus on our disagreement.
Firstly, I don’t think that concepts work like tags or pointers matching up with a bit of reality. I think that concepts are understood in terms of norms of correct and incorrect use, and that our semantic vocabulary — including concepts like “means” and “refers to” — is best understood as a metavocabulary. The reference relation (“___” refers to ___) is itself a basically intralinguistic relation, not a relation between language and the world.
Secondly, I think that the world consists, when you get right down to it, of causal regularities. In light of that, the adequacy of a conceptual framework (and it must be stressed that adequacy comes in degrees!) is itself a matter of how well the organisms that use it can reliably track those causal regularities that lie in their perceptual compass.
This is not to deny that we can reflect on the similarities and differences between causal regularities — similarities and differences with respect to what organisms like us can notice as well as what we are motivated to notice given our species-specific goals and interests — and thereby come to have terms that function as particulars and as universals. Nor is it to deny that we can reflect on that language with a metalanguage that contains the concepts of “particular” and “universal” (as well as concepts like “sort”, “kind”, “resemblance”, “repeatable”). But it is to deny that terms with those linguistic functions have their meanings by virtue of picking out or referring to entities, with different kinds of entities picked out by different kinds of terms.
To round out this whole line of thought: I don’t think that semantic notions, such as “means” or “refers to”, are relations between language and the world. I think that they are metalinguistic concepts that indicate how a piece of language functions. Our explanation of the relation between language and the world is the work of cognitive science, not semantics.
(Put that way, it’s a fairly bold claim. I might need to tone it down after I’ve reflected on it for a few hours. We’ll see.)
I humbly agree, conceptually.
Just a mess that I created by conflating the terms in response to a comment by Walto.
There’s a lot to reject, but I bring up just these most obvious points.
From the fact that you don’t see a way for non-human animals to share whatever mentality they share, it does not follow that there are interesting implications for the theory of non-human animal minds. It follows that you have no theory to account for non-human mentality. It follows that you need a complete overhaul of your theory/philosophy of mind from scratch.
You are playing with terms without any clue as to their import. Very pomo of you.
Contrary to your first sentence, claiming that “means” and “refers to” are “metavocabulary” does not add anything to the understanding of “means” and “refers to”. Contrary to your second sentence, the reference relation referred to by “refer to” is precisely the relation between the language and the world, not something purely intralinguistic. The words “means” or “refer to” are intralinguistic when studied the way a grammarian studies them, but they have referential properties (referring to the world, duh) when studied from the semantic, pragmatic and cognitive points of view. So you need a complete overhaul of your philosophy of language too.
All along, Moderate Nominalism, whatever it may turn out to be, remains undefended.
This is unclear to me. Part of what led me down the extensionalist rabbit-hole was the fact that I don’t see how any two people can have the exact same conceptual framework.
The two frontal lobes of a human don’t have the same conceptual framework. Only the corpus callosum stops us from being two people! 🙂
That cannot be correct.
That may well be a requirement for strictly formal communication in an abstract world. But what makes communication possible for human minds, is that those minds share a common world. And because of that shared common world, we can get by without having identical conceptual frameworks.
Sounds like a great argument for Realism.
I’m okay with realism. That is, I’m okay until I discover what philosophers mean by “realism”.
And that is rather routinely cut, resulting in two people in one head.
I hope that everyone hasn’t totally deserted this thread for greener pastures, as I’m still trying to understand exactly how to apply some of these definitions which are still quite new to me.
In theory, according to Nominalism, wouldn’t all definitions start out as extensional (i.e. the first thing that was called ‘dog’)? When a label is given to a single item, that would seem to me to be extensional (assuming Nominalism). When the decision is made to use the term to refer to a second item, intensional criteria are added. Eventually, a definition such as ‘dog’ would seem to end up as a hybrid between intensional and ostensive definitions.
Feedback would be greatly appreciated.
But what allows us to have a shared world is that we have roughly similar conceptual frameworks, and what allows us to have roughly similar conceptual frameworks is that each cognitive system is able to constrain the other by virtue of a shared language.
My basic thought here is that public languages allow convergence between high-level generative models, which then constrain the downward-propagating predictions.
Here’s how Andy Clark puts the point. He describes a contrast between experimenters simply telling experimental subjects what to do in the Wisconsin Card Sorting Task and training a monkey to do the same thing: “In the case of the monkey, script-synchronization required the experimenters’ top-level understanding to be recreated via a truly bottom-up process of learning, whereas in the case of the human subjects, this arduous route could be avoided by the judicious use of language and pre-existing shared understandings” (Surfing Uncertainty p. 287).
What’s interesting to me is the contrast between “top-top control of action” in the human cases and top-bottom and bottom-top control of action and perception in the non-human cases.
Nonhuman animals have complex models of their environments (including, in many cases, their social environments) that generate predictions for cognitive representations at lower levels, terminating at the sensory transducers — and the sensory transducers issue prediction errors for those models. That’s the top-bottom and bottom-top directions of information transmission in a non-human cognitive system.
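The top-down/bottom-up loop described here can be caricatured numerically: a model issues a prediction, the sensory signal returns a prediction error, and the model revises toward the signal. This is only a one-number caricature of predictive processing, with the learning rate and step count chosen arbitrarily by me:

```python
# One-variable caricature of a predictive-processing loop.
def update(model: float, sensory: float, lr: float = 0.1, steps: int = 50) -> float:
    for _ in range(steps):
        prediction = model            # top-down: model -> predicted input
        error = sensory - prediction  # bottom-up: prediction error
        model = model + lr * error    # revise the model toward the signal
    return model

# The model converges on the sensory regularity it is tracking.
print(round(update(0.0, 1.0), 3))  # 0.995
```

Real predictive-processing models are hierarchical and probabilistic; the point of the toy is only the direction of the two information flows.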
What we seem to add to this is a top-top information flow, via public languages, that allow for cognitive systems to be coordinated — in exercises of joint intentionality (I-You) or collective intentionality (I-We).
But it’s not part of my model that joint or collective intentionality require identical conceptual frameworks. (That’s impossible.) What they require is that there are norms by which incompatibilities between high-level generative models can be detected and mitigated, to the extent that those incompatibilities would interfere with successful cooperation if left undetected and uncorrected.
It is far from clear what “roughly similar conceptual frameworks” even means. I don’t think we have a metric whereby we can measure similarity of conceptual frameworks.
I do agree that a shared language is important. But we don’t use language to describe our conceptual frameworks. Rather, we use it to describe our shared world. There could, quite possibly, be very different conceptual frameworks which can nevertheless come up with descriptions that we can communicate.
I don’t know the details of what he is describing. But it seems to me that there is a huge difference in motivation. The monkeys are presumably motivated by some reward, perhaps a food morsel. The humans are motivated by being part of a community and seeking consensus within the community.
I don’t think there’s any such metric, nor can I even conceive of a metric. But in lieu of that, there are rough-and-ready pragmatic criteria, such as ease of translatability and how readily disagreements can be resolved.
True, we don’t use language to describe our conceptual frameworks (unless we’re doing philosophy). But my thesis was that we use language to describe our shared world (and, in a more difficult sense, construct our world as shared and shareable) by way of using language to navigate, detect, and resolve incompatibilities in our conceptual frameworks.
(I’m taking on board here the idea that the basic function of a conceptual framework is to allow for the more-or-less skillful coping of an organism with an environment. I can elaborate on that idea if you want, since I’ve spent most of the summer trying to flesh it out, using some clues from Sellars and the recent work on predictive processing.)
I really like this point! Yes, the emergence of sapience in evolution and its acquisition in early childhood involves a profound restructuring of motivations and not just conceptual abilities.
If there can be such a thing as incompatibilities between conceptual frameworks, then it is not clear that they need to be resolved. I would guess that there are incompatibilities between the conceptual frameworks of young earth creationists and those of evolutionists. And those incompatibilities don’t seem to get resolved. Yet we can be members of the same communities, albeit with disagreements.
You seem to be arguing that a shared language together with a shared world is enough to ensure that we have roughly the same conceptual frameworks. But I think that would imply the strong form of the Sapir-Whorf thesis, which is widely regarded as being mistaken.
I can agree that there are some sorts of consistency requirements between conceptual frameworks. But I see those requirements as weaker than you seem to be suggesting.
Kantian Naturalist,
This is slightly off topic, (and I wanted to ask privately but I’m not allowed to use the message system) but I wanted to ask your opinion on the dualism of Aristotle and Aquinas. Recently it was defended by David Oderberg in his paper “Hylemorphic dualism”.
He starts by defending a “phenomenology of psychology” which is, as he defines:
“By a phenomenology of psychology I mean simply the “what it is like” of ordinary psychological operations such as judging, reasoning, and calculating. There is, I claim, even “something that it is like” to calculate that two plus two equals four. It may not be qualitatively identical for all people, but then neither is the taste of strawberry ice cream exactly the same for all people, one might suppose, while at the same time noting that our similar physiological structures imply that the individual experiences for each kind of act should be highly similar. Indeed, one might assert that these experiences contain a certain phenomenological core, and that the class of such experiences is such that its members are all more similar to each other, all things being equal, than they are to any experience of a different mental act, state, or process”.
He then defends it from some objections and argues that we have a priori reason for thinking that whatever physical model is proposed, it will never capture what we humans do while we calculate, for example.
His core argument however relies on Aristotle’s metaphysics. His central thesis is:
“(1) All substances, in other words all self-subsisting entities that are the bearers of properties and attributes but are not themselves properties or attributes of anything, are compounds of matter (hyle) and form (morphe). (2) The form is substantial since it actualizes matter and gives the substance its very essence and identity. (3) The human person, being a substance, is also a compound of matter and substantial form. (4) Since a person is defined as an individual substance of a rational nature, the substantial form of the person is the rational nature of the person. (5) The exercise of rationality, however, is an essentially immaterial operation. (6) Hence, human nature itself is essentially immaterial. (7) But since it is immaterial, it does not depend for its existence on being united to matter. (8) So a person is capable of existing, by means of his rational nature, which is traditionally called the soul, independently of the existence of his body. (9) Hence, human beings are immortal; but their identity and individuality does require that they be united to a body at some time in their existence”.
http://www.newdualism.org/papers/D.Oderberg/HylemorphicDualism2.htm
felippenarciso,
Very odd that you couldn’t message me!
Oderberg’s essay is, to be polite, not the kind of philosophy I enjoy reading, so I only skimmed it and didn’t read it carefully.
I think that hylemorphism is useful as an explication of how we usually experience the world around us, but that it can’t be true. My most basic objection to it is that hylemorphism holds that “the form” of a thing explains both the unity of the thing itself and the unity of the concept which refers to the thing.
Here’s why I think that’s a profound error.
Firstly, the unity of the things themselves, insofar as things are unified at all, is explained in terms of covalent bonds between atoms. When you get right down to the quantum mechanical level, there really aren’t any “things” at all. There are fields, processes, and structures — insofar as the metaphysics of quantum physics allows us to say anything at all about that level of reality. And while we certainly encounter things as unified, that’s partly because of how our sensory receptors, conceptual classifications, and behavioral responses are ‘geared into’ how things are — with respect to us. What we call “things” that seem to be “unified” is at best an emergent, relational, contingent, and temporary property that holds across a multiplicity of processes: quantum mechanical, chemical, neurophysiological, psychological, etc. In short, our phenomenology is generated by a whole host of processes that are not themselves immediately accessible from within the phenomenological standpoint.
The situation is just as muddled when it comes to “concepts.” I understand concepts as in the first instance nodes in an inferential nexus; e.g. one understands the concept of parrot if one understands that “if that’s a parrot, then it’s a bird” is a good inference and also that “nothing can be both a parrot and a pet” is a bad inference. Inferences like this are called material inferences, because the validity of the inference depends on semantics and not just on logical form or syntax.
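One crude way to picture such an inferential nexus is as a directed graph of licensed material inferences. The following toy sketch is my own illustration, not a model the discussion commits anyone to; the graph and the function names are invented for the example:

```python
# Toy inferential nexus: concepts as nodes, licensed material
# inferences as directed edges ("if x is a parrot, x is a bird").
licenses = {
    "parrot": {"bird"},
    "bird": {"animal"},
    "dog": {"animal"},
}

def licensed(premise: str, conclusion: str, seen=None) -> bool:
    """Is the inference from `premise` to `conclusion` licensed,
    directly or via a chain of licensed inferences?"""
    seen = set() if seen is None else seen
    if premise in seen:          # guard against cycles in the graph
        return False
    seen.add(premise)
    direct = licenses.get(premise, set())
    return conclusion in direct or any(
        licensed(mid, conclusion, seen) for mid in direct
    )

print(licensed("parrot", "animal"))  # True: parrot -> bird -> animal
print(licensed("parrot", "dog"))     # False: no licensing chain
```

On this picture, grasping the concept “parrot” just is being able to navigate such a nexus correctly, with no appeal to an immaterial entity that the concept names.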
If one embraces an inferentialist account of concepts (as I do), then there’s no reason to think that concepts are in any sense “immaterial”, and one will approach the whole “problem of universals” in terms of the rules that govern human linguistic behavior rather than in terms of adding non-material things to the metaphysics.
Aristotle’s metaphysics is a nice attempt to make coherent and systematic the underlying intuitions of someone who was excellent at describing the world as he encountered it but had no awareness of modern science and no awareness of human cultural and conceptual diversity. It’s a cool system to play with, and I enjoy teaching it, but I see no reason to think it’s better than Spinoza’s rationalistic monism, Madhyamaka Buddhist metaphysics, Aztec metaphysics, or any metaphysics informed by contemporary science.