Is a dog with three legs a bad dog? Is a triangle with two sides still a triangle or is it a defective triangle? Perhaps if we just expand the definition of triangle a bit we can have square triangles.
There is a point of view that holds that to define something we must say something definitive about it and that to say that we are expanding or changing a definition makes no sense if we don’t know what it is that is being changed.
It is of the essence or nature of a Euclidean triangle to be a closed plane figure with three straight sides, and anything with this essence must have a number of properties, such as having angles that add up to 180 degrees. These are objective facts that we discover rather than invent; certainly it is notoriously difficult to make the opposite opinion at all plausible. Nevertheless, there are obviously triangles that fail to live up to this definition. A triangle drawn hastily on the cracked plastic sheet of a moving bus might fail to be completely closed or to have perfectly straight sides, and thus its angles will add up to something other than 180 degrees. Even a triangle drawn slowly and carefully on paper with an art pen and a ruler will have subtle flaws. Still, the latter will far more closely approximate the essence of triangularity than the former will. It will accordingly be a better triangle than the former. Indeed, we would naturally describe the latter as a good triangle and the former as a bad one. This judgment would be completely objective; it would be silly to suggest that we were merely expressing a personal preference for straightness or for angles that add up to 180 degrees. The judgment simply follows from the objective facts about the nature of triangles. This example illustrates how an entity can count as an instance of a certain type of thing even if it fails perfectly to instantiate the essence of that type of thing; a badly drawn triangle is not a non-triangle, but rather a defective triangle. And it illustrates at the same time how there can be a completely objective, factual standard of goodness and badness, better and worse. To be sure, the standard in question in this example is not a moral standard. But from the A-T point of view, it illustrates a general notion of goodness of which moral goodness is a special case.
And while it might be suggested that even this general standard of goodness will lack a foundation if one denies, as nominalists and other anti-realists do, the objectivity of geometry and mathematics in general, it is (as I have said) notoriously very difficult to defend such a denial.
– Edward Feser. Being, the Good, and the Guise of the Good
This raises a number of interesting questions, by no means limited to the following:
What the fact/value distinction is.
Whether values can be objective.
The relationship between objective goodness and moral goodness.
And of course, whether a three-legged dog is still a dog.
Meanwhile:
Imposter?
The closest anyone has come so far is to just deny the reality of society. I mean seriously, using that logic we could just say there really is no family. And of course, we can reduce that to there is no marriage.
But then we’re right back where we started. If there is no marriage [marriage is not real], then what are you celebrating?
Marriage is just an idea?
In that case, it seems as though you and I have nothing more to discuss as far as this goes. That said, I do enjoy your challenges to my way of thinking and I look forward to continued debate on other topics.
hotshoe_,
You’re so cold hearted that you don’t feel Erik’s pain.
A cold hearted _____ with a hot____. ok, I should shut up now
More precisely, pragmatism (beginning with Peirce) does not take Cartesian skepticism as a starting-point for philosophical activity. This is a prominent theme that runs from Peirce’s early essay “Some Consequences of Four Incapacities” through Dewey’s The Quest For Certainty (spoiler alert: he’s against it) to contemporary work in pragmatist epistemology by philosophers like Hilary Putnam, Michael Williams, and Susan Haack.
I don’t reject universals! Wherever did you get that idea?
It’s quite obvious that the ability to generalize and predict is absolutely central to human (and indeed to primate) cognition. In that sense, a conceptual category for universals is indispensable to any cognitive system that can adapt creatively to novel situations. What I reject is the idea that our cognition of universals (or of any other category, including particulars!) is a mere beholding of the structure of reality.
That said, I do think that since abstraction is also a loss of information, the more abstracted from particulars, the more vague the resulting concept and so more difficult to determine its conditions of application and its inferential consequences. That loss of information does, of course, come with the benefit of our being able to imagine new conditions of application and new inferential consequences.
Do you have a point, Mung?
Guano.
Fer chrissake, Gregory.
The last point flatly contradicts your first sentence (which is getting sort of common with you). So, may I ask what you really think?
You also say,
If a conceptual category of universals lets us creatively adapt to novel situations, then in what sense is there, as you claim next, “loss of information” in universals? Can you describe the process by which generalisation loses information?
I think it all hinges on your theory of mind, which is most likely of the mind-brain identity variety. This thesis normally goes along with valuing particulars over universals, feeling insecure about generalising, and either failing to distinguish relevant attributes from irrelevant or rejecting the distinction altogether.
I don’t see the contradiction, but let me rephrase the basic idea and see if this gets around your objection.
On the view I accept, conceptual content — what the concept is about — is determined by two things: the norms of applying the concept in perceptual experience and the norms of inferring further consequences, including actions, from judgments in which the concept is embedded. For example, part of what it is to have the concept *red* is to be able to distinguish between things that are red and things that are not, and also to be able to tell that “if X is red, then it is colored” is a good inference, and so on.
(This is called inferentialist semantics — conceptual content is determined by inferential role. In contrast, representational semantics holds that conceptual content is determined by what the concept represents. Inferentialism is a pretty new idea, and a lot of philosophers think it can’t be right, but I personally find it much more promising than the alternatives.)
In “abstraction”, we do a couple of different things. One thing we do is we remove specific, actual information about the conditions of application and inferential consequences. The concept is, so to speak, “de-actualized” (or “virtualized”?). In doing so, we construct new conditions of application and new inferential consequences. The concept becomes less determined in order to be determinable in new ways.
In other words, I’m only appealing to the actual/possible distinction — actual and possible determinations of conceptual content. Granted, I’m sketching out the account in pretty broad strokes — this is the Internet, after all! — but I’m not clear on what the contradiction is.
I should have been more specific — there is a loss of information about the environment (when we “de-actualize” conditions of application) and a loss of information about the actual norms of correct and incorrect use (when we “de-actualize” inferential consequences). I didn’t mean to deny that imaginative activity supplies its own kind of information.
In philosophy of mind, I’m a lot more clear on what I think is wrong than I am on what I think is right! Substance dualism is incoherent, and so is mind-brain identity. The main problem with mind-brain identity is that it ignores the role of bodies and environments in cognitive and affective experience. I think that something like enactivism is on the right track. My current research is an attempt to integrate inferentialism about higher-order conceptual content with enactivism about cognition generally.
Two other minor points:
(1) I don’t value particulars over universals. Heck, I don’t even know what that would mean — clearly both particulars and universals will be categories in any conceptual framework complex enough to generalize, predict, explain, anticipate, invent, and adapt. Many mammals and all primates display those cognitive activities.
There’s a traditional philosophical picture of non-human animals as only being able to cognize particulars, and cognition of universals is supposedly unique to human beings. I don’t doubt that human cognition is unique, but I’m not sure that the particular/universal distinction bears on that uniqueness.
(2) My suspicion of the distinction between essential and accidental properties doesn’t mean that I’m unwilling or unable to distinguish between relevant and irrelevant features of a situation; it means that what is relevant and irrelevant is always constrained by context and by the question one is asking or the problem one is trying to solve. The essence/accident distinction would have us believe we can specify the relevant and irrelevant aspects of a situation regardless of context and inquiry, and that is what I deny.
From a paper by Douglas Windblad called “SCEPTICISM, SCIENCE, QUINE AND WITTGENSTEIN” in the anthology Wittgenstein and Quine (Arrington & Glock, eds.)
walto,
Yes, nicely put. Dennett makes a similar point against Hacker and Bennett in Neuroscience and Philosophy, when he’s responding to their accusation that philosophy of neuroscience is prone to “the mereological fallacy”. As he puts it, there’s nothing suspect about either introducing new terms in specific explanatory contexts if they get the job done or about taking old words and giving them a new, analogical sense — again, if it gets the explanatory job done.
Much as I love both phenomenology and ordinary-language philosophy, they are, at the end of the day, quite conservative projects — and clinging to them in the face of science (esp. cognitive science) is downright reactionary.
Maybe you don’t see the contradiction because it’s diametrical? The bit which says “since abstraction is also a loss of information, the more abstracted from particulars, the more vague the resulting concept and so more difficult to determine its conditions of application” is the direct opposite of “loss of information does, of course, come with the benefit of our being able to imagine new conditions of application”. So the reader, such as myself, gets no answer as to whether you think abstraction helps or hinders to determine the “conditions of application”. The first part says it hinders because of “loss of information”. The second part says it helps because of “the benefit of our being able to imagine new conditions of application”.
Kind of limited (my view of semantics appears to be a third thing altogether), but let’s go with this for now.
Whoah. If I understand you rightly, then the ordinary view is precisely the opposite. It’s the “specific, actual information” which lacks any determinacy in the sense of general applicability. For example a storm is an occurrence or an event or a phenomenon, not a law of nature. Generalised climatological regularities are a law of nature that has general applicability: anywhere in the atmosphere you look, it’s applicable.
Or I’m not getting your point because your premises are vague. Your premises are necessarily vague because you refuse to rely on universals. Universals are indispensable.
I have hopefully clarified well enough where the contradiction lies. And besides the actual/possible distinction here, you have earlier appealed to the concept of necessary and sufficient conditions, which is basically the same as essential, or at least relevant, conditions. So you are alternately denying and relying on universals and essentialism.
But maybe, instead of “de-actualize” we actually “de-contextualize” the particular case in the process of abstraction. In abstraction it’s discovered, by comparative means (i.e. comparing with other comparable cases and by determining what cases are not comparable and why), how representative the particular case is and, more importantly, representative of what. This process does not lose any information (unless you forget things fast and you cannot follow continuity). It provides a broader perspective.
From what you write next, I see what you mean. I can tell you that enactivism is also a bad choice.
Should the distinction of particulars and universals be unique to humans in order to be properly valuable?
You are so totally wrong to assume that I required you to give up the inquiry by requiring you to determine essential and accidental properties of whatever was under debate (yeah, I remember – marriage…). Determining the essential and accidental, relevant and irrelevant properties is the inquiry, the process by which we arrive at the conclusion about what the thing is.
Erik,
If the distinction between relevant and irrelevant properties is context-dependent, and the distinction between essential and accidental properties is context-independent, then one can maintain the former distinction without being committed to the latter.
“Decontextualization” works as well, if not better, than my “de-actualization” — though I don’t think that all universals are constructed by induction over particulars, the way that Locke and Hume (for example) seem to have thought.
I’d also like to stress the difference — which I think of as being quite significant — between (1) maintaining that categories like generals, sorts, kinds, resemblances, and particulars are indispensable for many kinds of cognition, including rational cognition and (2) maintaining that categories like generals, sorts, kinds, resemblances, and particulars are themselves structures of reality.
It was suggested above that I might be a nominalist of some sort. Sometimes nominalists are taken to hold that only concrete particulars are real. But that seems problematic to me; I would be more inclined to think that reality has no categorical structure at all, and particulars are no different from resemblances, sorts, kinds, and generals — they are all categories of our animal cognition. If I am a nominalist, it is in this respect: I do not think that reality has categorical structure.
What, after all, is cognitive activity? On my view, the most basic kind of cognitive activity consists of producing rich, fluid, real-time, context-sensitive adaptive bodily responses to ongoing motivationally salient sensory stimuli. This means that cognition is for successful coping in an environment, not for a contemplative beholding of the structure of reality. More sophisticated kinds of cognitive activity, such as rational cognition, should be understood as modifications of this more basic kind. But even rational thought is a distinct kind of successful coping in an environment — namely, in the kind of social environment that depends on massive degrees of coordination, cooperation, and collaboration quite unlike the social environments of other primates.
On this view, then, all of the categories — particulars, sorts, resemblances, and kinds — turn out to be metalinguistic concepts. They make explicit the inferential roles implicit in our discursive practices. From the fact that they are indispensable for cognition, it does not follow that they mirror or copy the fundamental structure of reality, because cognition — including rational cognition — is for successful coping (mediating between perception and action), not for mirroring the world.
And what does it mean to be context-dependent versus context-independent? Aren’t you, again, assuming rigid immutable quasi-material “essences” around which everything else changes, as Aristotelianism suggests, and imputing the same view to me? Have I not explained enough?
Let’s suppose so. Does your alleged “loss of information” in the process of abstraction also fit in here somewhere?
Granted. And you impute the latter on me, right? And you dismiss it without scrutiny, assuming it to be an Aristotelian error?
Let’s take the two views together. How does it compute for you that generals, sorts, kinds, resemblances, etc. are NOT structures of reality, while at the same time they are indispensable, i.e. real for practical or pragmatic purposes? If they are as good as real, then what’s the problem in saying so?
And what is human/animal cognition? Not reality and not a category of reality? Because if it were real, categories in it would be as real as cognition itself, right? So, where do you place this cognition that you keep talking about?
The paragraph where you try to say what cognitive activity is does not answer my question. You say,
I cannot help but read your “coping in an environment” as “groping in darkness – and there’s nothing else but darkness”. If you sincerely maintain that this is your idea of reality, then why should I be convinced of how you describe it? Why should I trust a blind man?
Yes, and I would add that those adaptive bodily responses include control over what sensory stimuli are encountered.
I’m sure you were including this, but it’s a point often missed – that not only are we not simply “passive observers” of the world, but that the process of active observation itself involves selecting what we observe, in an ongoing feedback loop.
How are “rigid immutable quasi-material ‘essences’ around which everything else changes” involved in either the distinction between context-dependent and context-independent descriptions, or in my view that context-independent descriptions are pretty much useless for everyday perception of and action in the world as we experience it? (They might play some important role in purely formal domains.)
I don’t know if it is your view or not, but there are philosophers who have held that view, and I regard it as mistaken. I am not entirely sure if Aristotle himself had that view or not.
That would entail giving up on metaphysical realism, which I’m not prepared to do. The interesting question is, of course, whether metaphysical realism of any sort is compatible with this view I’m proposing of cognition.
I took it that by “reality” it was clear that I meant “reality as conceptualized independently of the cognitive structures and processes of a human or animal”. In any event, I would “place” cognition — in both humans and animals — as spatially and temporally distributed across the ongoing transactions between brain, body, and environment.
I don’t understand why you are using this metaphor.
To some extent, yes — there’s actually a debate going on here between those who say that perception is action or essentially involves action, and those who say that perception is for guiding action. I don’t have a view there, though I suspect that those who take the first position are making a very subtle mistake: they are confusing the fact that perception is a kind of activity with the claim that perception is an action.
Now, “making an observation” is an action, because it is the kind of thing that can be done intentionally and voluntarily. And one could not observe if one could not perceive, just as one could not perceive if one could not sense (i.e. have sensations). But I think there are important distinctions here, and the philosopher’s urge to make things simpler than they really are must be resisted!
Well, I’d be in the “perception involves action” camp 🙂 Not sure if that means I am making a subtle mistake! But perception is pretty limited without action.
I’d say that perception guides action, and that action brings into play new stimuli that feed into perception. And by action I mean the actions involved in orienting towards a stimulus and exploring it, including eye movements, head movements, touching, feeling, weighing, even ear movements; in other words, constructing a flow of feed-forward models that are adjusted by feedback.
Without the ability to test the feed-forward model (i.e. without any action), I propose that our perceptual capacity would be severely curtailed, and probably our ability to parse the world into objects would be impossible.
Elizabeth,
As the old saying goes, “what we have here is a failure to communicate”. The problem is that
cannot be actions, if (a) actions are intentional and (b) an action is performed by an agent. Turning my head in the direction of a sudden sound isn’t something that I’ve chosen to do — it’s something that my body does. It is, perhaps we might say, an activity of the organism that I am rather than (as driving to the market would be) an action of the agent that I am.
I don’t want to resurrect substance dualism, but rather to point out that there is something right about substance dualism — namely, that as rational animals, we human beings can experience fissures between our culturally-constituted, symbolically-mediated rationality and our “merely” biological animality.
(The fact that we can experience these fissures at all is probably best explained by the hypothesis that there is some modularity in the brain, esp. some degree of functional independence of subcortical processes from cortical processes.)
I’m inclined to think that “conceptualized independently of cognitive structures” doesn’t make sense.
I know you think that, Neil. That’s why you’re not a metaphysical realist.
I used to think it was incoherent, too. Then I discovered scientific realism. Now I’m even more confused than ever.
I’m closer to the first of those positions. No, I don’t say that perception is action, but I do say that perception is behavior and involves actions.
I disagree with the last bit of that. I’m inclined to agree with Gibson’s view that perception is prior to sensation. The sensations are our experience of the perception, not a component of that perception.
We could not have perception without sensory receptors. But having sensory receptors is far short of having sensations.
The eye moves in saccades. As the eye moves, the path to a particular retinal receptor sweeps across the visual field. This results in sharp signal transitions as the path crosses an edge. My view is that the perceptual system uses these transitions to locate features in the visual field. I don’t see how vision would be possible without that. The designers of bar code scanners use the same idea to locate bar codes.
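The picture can be sketched in a few lines of code (my own toy illustration, not anything from an actual scanner or visual system): treat the swept signal as a 1D array of samples, and flag an edge wherever consecutive samples differ sharply.

```python
# Toy model of the sweep idea: a moving path (a saccade, or a bar code
# scanner's beam) converts spatial variation into a temporal signal, and
# edges show up as sharp transitions between consecutive samples.
def find_transitions(signal, threshold=0.5):
    """Return indices where consecutive samples differ by more than threshold."""
    return [i for i in range(1, len(signal))
            if abs(signal[i] - signal[i - 1]) > threshold]

# A sweep across a dark bar on a light background: 1.0 = light, 0.0 = dark.
sweep = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0]
print(find_transitions(sweep))  # the bar's two boundaries: [3, 6]
```

The threshold value and the binary light/dark levels are arbitrary choices for the sketch; the point is only that edges are located by large changes over time as the path moves.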
I’m not seeing how that could be other than an action (or actions).
And whether metaphysical realism is compatible with your view depends on what you take metaphysical realism to be. What do you take it to be?
And you conceptualise independently of your cognitive structures and processes exactly how?
“Across”? I.e. brain, body, and environment, that’s the range of cognition? One might wonder, if that’s the range of cognition (which is not reality, because reality is, according to you, “conceptualized independently” of cognition), then what’s the place left for your supposed reality?
Hopefully it’s clearer now.
I disagree. Sure, some actions are entirely involuntary – entirely stimulus driven, e.g. blinking at a loud sound. But many others are under inhibitory control. We can choose not to respond. Sure, the default is to orientate to an exogenously salient stimulus – a loud sound, a moving object, a flash of light. But we can, and do, exercise an over-ride, all the time. Not being able to is a problem, and, as it happens, is the focus of my research (as it’s key to ADHD, and schizophrenia).
Yes indeed. But I don’t think that negates my point about perception and action. Just because much of the action involved in perception happens below the level of conscious detailed control (though I’d say it is mostly still under the control of executive inhibitory processes) doesn’t mean that it is separate from action. I just don’t think you can separate the two – perception involves the binding of attributes of an object, and I would be very surprised to learn that much binding can go on in the absence of some kind of motor behaviour. And what does, probably still uses motor programs.
hmm. Depends what you mean by independence. If you mean we can separate exogenous from endogenous salience, I would agree. That’s what our current experimental paradigms are designed to pull apart. But there doesn’t seem to be much cortical-subcortical splitting going on. It looks more like temporal extent of the cascade, and the degree to which multiple brain regions are recruited.
But I could be wrong!
Yes, except I don’t think you need a saccade to detect an edge. Edges are typically detected at the fovea, which is specialised for high spatial frequencies like edges. But we do need saccades to detect low spatial frequencies, and many objects are incomprehensible without those low frequencies.
And you probably know that there is, broadly, a “where” and a “what” visual pathway. Peripheral vision is specialised for “where” and foveal for “what” – but you can’t get the what without getting the where, so one function of the where pathways is to elicit saccades to the object stimulating it, so it can be foveated.
Neil,
As Lizzie notes, saccades aren’t necessary for edge detection.
Remember that the retina is spread out, so it can be “wired” to detect spatial, as well as temporal, variations. The sensor in a bar code scanner isn’t spread out, so the beam must sweep over the code, thus converting spatial variation into temporal variation.
Yabbut Neil is basically right. You still need saccades for visual perception, it’s just that edge detection is a poor example.
Lizzie,
Nah. Neil’s argument is that vision requires saccades because without saccades, you don’t get the temporal variations required for feature detection. That’s not right — the retina can and does exploit spatial variation as well as temporal variation. The visual system is not like a bar code scanner, where spatial variation must be converted into temporal variation.
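A toy sketch of that counterpoint (my own illustration, not a model of the actual retina): an array of receptors spread over space can locate an edge within a single instantaneous frame, by differencing neighbouring receptors, with no sweep at all.

```python
# Counterpoint sketch: a spread-out sensor array can detect edges by
# differencing neighbouring receptors in one instantaneous frame --
# no sweep is needed to turn spatial variation into temporal variation.
def spatial_edges(frame, threshold=0.5):
    """Return indices where adjacent receptors in a single frame differ sharply."""
    return [i for i in range(1, len(frame))
            if abs(frame[i] - frame[i - 1]) > threshold]

snapshot = [1.0, 1.0, 0.0, 0.0, 1.0, 1.0]  # one instantaneous retinal frame
print(spatial_edges(snapshot))  # edges between receptors 1-2 and 3-4: [2, 4]
```

The arithmetic is the same differencing either way; what changes is whether the difference is taken across time (one moving receptor) or across space (many fixed receptors), which is exactly the point at issue.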
The “standard” definition of metaphysical realism is that reality has features that can be characterized independently of how any cognitive agent takes it to be.
This is the right question to ask, no doubt about it, and it’s an extremely difficult one. But here’s the view that I’m currently exploring — that there are grades of objective cognition.
Grade-1 objective cognition: an animal has grade-1 objective cognition (or “proto-objectivity”) if it can reliably track the persistence of an object over time and attribute more than one property to an object (e.g. through multi-modal perceptual awareness). Both of those capacities are dependent on sensorimotor coordination. This can be thought of as egocentric multiperspectivalism — there are multiple perspectives, but all tied to the egocentrism of the cognitive agent.
Grade-2 objective cognition: an animal has grade-2 objective cognition if it can reliably track both its own egocentric multimodal perspectives and the allocentric multimodal perspectives of another animal. (This is the origin of objectivity strictly speaking, if one wants to call grade-1 objectivity “proto-objectivity”.) The animal’s perceptual awareness of the object is mediated by its awareness that the perceptual features occurrent in its sensory consciousness differ from those occurrent to another animal, and by its awareness of the other animal as encountering occurrent sensible features that differ from its own. We can call this “I-You objectivity”. (We need to flesh this out a bit more to accommodate non-perceptual properties, such as causal and dispositional properties.)
Grade-3 objective cognition: an animal has grade-3 objective cognition if it can reliably track the perceptual and non-perceptual discrepancies between its own encounters with objects and those of anyone in the community of which it is a part. This almost certainly requires language, whereas grade-2 objectivity probably doesn’t. (We can call this “I-We objectivity”.) Under specific circumstances, grade-3 objective cognition also makes possible:
Grade-4 objective cognition: an animal has grade-4 objective cognition if it and the other members of its discursive community can systematically and intentionally manipulate its environment in order to distinguish between which observable regularities and irregularities are due to real patterns (patterns that are there anyway) and which are due to the constraints of biology and culture. (Grade-4 objective cognition is science.)
In other words, I’m trying to do the very thing that Neil Rickert (and many others!) thinks can’t be done: chart a naturalistically plausible pathway from direct realism (about perception) to scientific realism.
If my account of the grades of objective cognition is adequate, then we can systematically exploit our own cognitive structures and processes in order to disclose the underlying real patterns — in fact, if I’m right, that’s exactly what science does. (That’s also part of the paradox of science — it depends on empirical verification and yet often yields counter-intuitive results.)
Eh, somewhat. I understand, I think, why one might think that I’m appealing to a “leaping out of one’s own skin” kind of move. And I agree that that would be bullshit (to use the English term). But I don’t think that that’s what I’m doing.
I’d put it rather that it’s the claim that there is a world both external to and independent of all cognizers. I think “can be characterized independently” might suggest more than that. And that more seems to me to go beyond what I take “metaphysical realism” to mean to most philosophers–whether or not it’s true.
I see what you mean, but there’s a reason I phrased it as I did. If MR were simply the claim that there is a world both external to and independent of all cognizers, then it would be utterly vacuous if we could say nothing at all about it. I put the stress on characterizability to make MR non-vacuous.
But it surely isn’t.
Your grades all talk of tracking objects. But on what basis are there objects independent of animal cognition?
I last visited Australia (where I grew up) 25 years ago. And, of course, I chatted with my brother. If I were to visit again, I would again want to see him. But it is likely that every material atom in his body has been replaced since then. And the configuration of atoms changes from sitting down to standing up, without even mentioning the effects of aging. So in what sense should I count him as an object, other than that he is an object of our conception derived from how our cognitive systems function?
Our whole cognitive apparatus depends on treating different things as the same thing. That, roughly, is what categorization does for us.
Maybe this seems too fussy. And it probably is unnecessarily fussy for ordinary epistemology. But my interest is in cognitive systems, and that’s why I see it as important.
People have trouble understanding cognition, because they don’t understand what cognitive systems do. And the reason that they don’t understand what cognitive systems do, is that much of the very important things they do are instead credited to metaphysics. And until that changes, they will never be able to understand cognition.
Yes, that seems right to me. I’m a realist in the sense of your first sentence. I am skeptical of the “characterized independently” part.
By contrast, I think it should be vacuous.
From my point of view, the world in itself consists of stuff — of undifferentiated stuff. Cognizers such as ourselves make distinctions and thus differentiate different kinds of stuff. But what constitutes a kind of stuff is going to depend on the cognizer.
Neil Rickert,
It’s controversial anyhow. And it’s only vacuous IF we can’t say anything about it other than that it exists. And MR proper is silent on that bit.
You know, one thing at a time.
ETA: ALMOST vacuous.
Since I’m in hearty agreement with 95% of what you say at TSZ, I’ll focus my attention on the 5% that makes a difference. For example, I completely agree with your point about the importance of understanding cognitive systems. (Why else do you think I’ve been reading so much cognitive science over the past year?)
In particular, I’m in complete agreement with this:
In fact, I would go perhaps a bit further than you do here and say that understanding cognitive systems is extremely important for any epistemology, metaphysics, or ethics that is not mired in the entrenched dogmas of traditional philosophy. (That is one of the senses in which I am a “Quinean naturalist”.)
But here’s where we come to one of the points of 5% divergence:
I have two related qualms about this.
Firstly, if the world-stuff were wholly undifferentiated, then affordances would be completely constituted by the relevant sensorimotor abilities. But then there would be no way of making sense of the coupling of abilities and affordances as adaptive for the organism, because there would be nothing for the sensorimotor abilities to “gear into” or “mesh into”. So while I grant that different organisms will categorize their environments in different ways, there must be some underlying organism-independent structure. If there weren’t, then nothing the environment does would ever push back against the organism.
Secondly, scientific practices are (as I understand them) precisely this attempt to tease apart which patterns are real (underlying, organism-independent) and which patterns are constituted by the structures and processes of the brain-body-environment cognitive system — what Lizzie above nicely called the difference between “exogenous and endogenous salience”.
Now, it might be that when we get right down to it, metaphysical reality as I’m describing it here will be deeply uninteresting. As a friend of mine put it in conversation the other day, “the more dehumanized your conception of reality, the less it’s going to matter for human life”.
Put otherwise: on your version of pragmatism, metaphysical reality is something that we should stop talking about altogether; on my version of pragmatism, we can see why metaphysical reality isn’t worth talking about (much) once we understand it correctly.
I know. Sheesh. The “Enlightenment.” What a mess that made of things in its rejection of traditional philosophy. 🙂
This might be a miscommunication.
When I say that world-stuff is undifferentiated, my point is that differentiating between different kinds of stuff is a cognitive action. I want to emphasize that cognitive systems have to do this — it is not automatic. And presumably the kind of differentiation possible will depend partly on the abilities of an organism.
I guess my criticism of philosophy is that it puts too much emphasis on the what, and not enough on the how. Instead of studying the categories, I would like to see it study the principles of categorization. Science does the actual carving up of the world (well, we all do some of it). But there are principles involved in how to do that carving. And I would like to see philosophy place more emphasis on developing those principles. Perhaps this comes from my being a mathematician. Generally speaking, mathematicians are far more interested in method than in formal facts.
You are, as I said, correct that edge-detection does not require saccades (which is why we can identify letter forms with a single fixation).
But I would strongly argue that it is the case, as Neil agrees, that saccades are necessary for visual perception. I don’t know how bar code scanners detect the location of a bar code (as opposed to reading it once it has been found), but it may well be similar to the way in which our visual system detects the features it needs to foveate.
But whether it is or it isn’t, what is pretty clear is that our visual system utilises saccades to bring to the fovea the features it needs to identify. In other words, the visual perceptual system is a motor system.
Yes, “the retina can and does exploit spatial as well as temporal variation”. But it does so via the saccadic system. Features in peripheral vision that are salient (typically, moving things, things with low spatial frequency) induce a saccade that brings that feature to the fovea, where its identifying features (colour, high-frequency patterns) can be processed. And we select, automatically, what to foveate all the time – and can choose to suppress that automatic process if we wish (e.g. suppress the drive to foveate that blinking, jiggling ad telling you that you are the millionth visitor to the site and you’ve won a laptop).
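The selection loop just described – salient peripheral features compete to drive the next saccade, subject to voluntary suppression – can be sketched as a toy program. This is a minimal illustration, not a claim about the actual neural mechanism; all names (`next_fixation`, the example locations) are hypothetical.

```python
# Hypothetical sketch of the saccade-selection loop described above:
# salient peripheral features win the competition for the next saccade,
# unless top-down control suppresses them. All names are illustrative.

def next_fixation(salience, suppressed, fovea):
    """Pick the most salient non-suppressed peripheral location."""
    candidates = [
        (s, loc) for loc, s in salience.items()
        if loc != fovea and loc not in suppressed
    ]
    if not candidates:
        return fovea  # nothing salient in the periphery: hold fixation
    return max(candidates)[1]

# Toy example: a jiggling ad is the most salient thing in the
# periphery, but we voluntarily suppress the drive to foveate it.
salience = {"ad_banner": 0.9, "face": 0.7, "text": 0.2}
print(next_fixation(salience, suppressed={"ad_banner"}, fovea="text"))  # face
```

The point of the sketch is only that foveation is the outcome of a selection process, which is what makes the perceptual system a motor system.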
One of the things I am working on just now is actually a gaze-control training system, designed to help people with poor control over where they look (and thus what they attend to) to improve their selection of what they foveate. It’s a computer game where you use an eye tracker to control the game play. It’s neat.
Yes, I saw that! Someone just walked on Thatcher’s grave….
Lizzie,
I’m not denying the importance of saccades — just pointing out the errors in Neil’s statement:
He has it backwards. Feature detection doesn’t happen during saccades. In fact, the visual system actually suppresses input during saccades to avoid blurring — hence our inability to see our own saccades when looking in a mirror.
Far from depending on sharp temporal transitions during saccades, as Neil claims, our visual system actively suppresses retinal information during those times.
Yes, you are correct. As I said, his example is not correct, but his general point about perception and action is, IMO, correct.
Interestingly, what appears to happen is that the receptive fields of many visual system neurons shift in advance of the saccade so as to become sensitive to the location that the stimulus of interest will occupy after the saccade.
If that were not the case, we’d never see anything for “camera shake”, even with saccadic suppression.
ETA: and we also require saccades to foveate the feature-of-interest. And those saccades are induced by stimulation of the periphery.
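The “predictive remapping” described above can be put in a toy geometric form: before the eye moves, a neuron’s receptive field shifts toward the retinal location its stimulus will occupy once the saccade lands. The sketch below is purely illustrative, assuming simple 2-D retinal coordinates and a hypothetical `remapped_receptive_field` helper.

```python
# Hypothetical sketch of predictive remapping: before a saccade, a
# receptive field shifts to the retinal position the stimulus will
# occupy after the eye movement (the field moves opposite the eye).

def remapped_receptive_field(current_field, planned_saccade):
    """Shift the field by the inverse of the planned eye movement."""
    fx, fy = current_field
    sx, sy = planned_saccade
    return (fx - sx, fy - sy)

# A stimulus at retinal position (5, 0); a rightward saccade of (3, 0)
# will place it at (2, 0), so the field shifts there in advance.
print(remapped_receptive_field((5, 0), (3, 0)))  # (2, 0)
```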
The Predictive Coding stuff as outlined in this Clark paper says both action and perception are tied in a feedback loop, although the starting point for the loop differs between perceiving and acting. (Possibly, this is not quite your sense of “acting”, however.)
Perceiving involves bringing our prior models of what we expect to sense in line with our current sensory input, and would include acting (like saccades or bigger body movements) in order to gather more input to test the model.
Motion involves setting a target model of (mostly proprioceptive) perception, then continually adjusting movement so that the actual perception matches the target.
In both cases, there is a multilayer neural model, of increasing abstraction by layer, for the expected sensory response.
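The two loops above differ only in which side of the prediction error gets adjusted: in perception the model is nudged toward the input, while in action the body state is nudged toward a target model. A minimal scalar sketch of that idea, with illustrative names and a deliberately simple update rule (not taken from Clark’s paper):

```python
# Minimal, hypothetical sketch of prediction-error minimisation.
# Perception: revise the model to match fixed sensory input.
# Action: move the body so (proprioceptive) input matches a fixed target.

def perceive(model, sensory_input, rate=0.5, steps=10):
    """Reduce prediction error by revising the model."""
    for _ in range(steps):
        error = sensory_input - model
        model += rate * error
    return model

def act(target_model, body_state, rate=0.5, steps=10):
    """Reduce prediction error by changing the world (the body)."""
    for _ in range(steps):
        error = target_model - body_state
        body_state += rate * error
    return body_state

print(round(perceive(model=0.0, sensory_input=1.0), 3))  # converges toward 1.0
print(round(act(target_model=1.0, body_state=0.0), 3))   # converges toward 1.0
```

The symmetry of the two functions is the point: the same error-minimising loop does duty for both perceiving and acting, which is what ties them together in the predictive coding picture.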
Radical enactivists can reply that such brain models are not needed; the world itself can serve as the model. Clark questions the viability of this approach given all that cognition does.
Such an enactivist claim reminds me of the error the original GOFAI workers made: from some initial success with simple symbol manipulation, e.g. checker-playing programs and proving simple geometric theorems, they extrapolated that such abstract symbol models underlay all human cognition. Analogously, I see some enactivists as extrapolating from the success of some robots in dealing with simple worlds without using internal representations, and from DST models of simple animal actions, to the theory that no representations are needed for any form of cognition.
Extra added bonus paper based on discussion at end of thread: Transfer of predictive signals across saccades. As I understand the abstract, it shows there is time for a presumed prediction hierarchy to be updated between saccades. And its authors are from Elizabeth’s neck of the woods too! (At least until the next referendum.)
BruceS,
Thank you for alerting me to the Clark paper. I’ve just downloaded it now.
I’m inclined to agree with the thought that enactivism goes too far — or maybe I’m just impressionable and I’ve been reading too many criticisms of enactivism lately? There are three main sources of anti-representationalism in enactivism, and none of them is strong enough to do the work.
Firstly, you have the appeal to the phenomenology of Heidegger and Merleau-Ponty in Dreyfus, Varela, Thompson, and a few others. Chemero takes his inspiration from Gibson, who is part of the American tradition that goes back to William James. (However, Mark Rowlands in The New Science of the Mind argues that Gibsonian ecological psychology is consistent with representationalism, because he defined representations strictly in terms of information-bearing to the animal.) Rorty’s anti-representationalism is also an influence on Varela and Thompson, but Rorty’s anti-representationalism is mostly drawn from Davidson and is a piece of epistemology, not cognitive science.
In any event: no appeal to agential or person-level descriptions can show us that we don’t need some concept at the level of subagential or subpersonal descriptions and explanations. Granted, there might be something inelegant or ungainly about explaining Heideggerian/Dreyfusian skilled coping in terms of a Language of Thought (rule-based manipulation of context-independent symbols), but there’s nothing deeply incoherent about it.
Secondly, you have the appeal to Brooks’s early work in robotics. His remark, “the world is its own best model”, is widely cited by enactivists. But Brooks’s robots were extremely simple, and more complicated robots that can carry out a wider array of tasks certainly seem to have something like “representations”.
Thirdly, and most importantly, it seems pretty clear to me that the enactivists assume that the only conception of representations on offer is, roughly, Fodorian: rule-based manipulation of context-independent symbols. At any rate, the enactivist criticisms of representationalism are all directed against that specific conception of what representations are. And yet this just isn’t so — Millikan, Clark, Rick Grush, and Michael Wheeler have all developed new accounts of what representations can be that are action-guided, dynamically coupled to bodies and to environments, and everything else that fans of Dewey and Merleau-Ponty could want.
Lizzie,
I would never treat the Iron Lady so disrespectfully!
Lizzie:
Bruce:
All of which is further bad news for Neil and his fellow direct perceptionists, since they insist that perception does not involve models.
I also insist that perception involves models.
Elizabeth,
Certainly any animal that displays a wide repertoire of behaviors in response to a variety of sensory stimuli will have perceptually-guided mental models for regulating action.
KN,
We’re talking about models within the perceptual process rather than models guided by perception.
Direct perceptionists deny the former, but my impression is that most of them accept the latter. How else to explain our ability to evaluate what-if scenarios?
Neil may be an exception. He claims that the brain doesn’t do computation, whereas to me it is obvious that our ability to evaluate what-if scenarios depends on computation. We model a situation and update the model as the scenario unfolds in our imagination. What is the process of updating a model, if not a form of computation?
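The point that updating a model just is a form of computation can be made concrete with a toy what-if scenario. The sketch below is entirely illustrative (the “physics” of a ball rolling off a table is a placeholder): evaluating the scenario amounts to repeatedly updating a model state, i.e. computing.

```python
# Hypothetical sketch: evaluating a what-if scenario by stepping a
# model forward. The toy dynamics (a ball rolling toward a table edge)
# stand in for whatever the imagined scenario actually involves.

def what_if_ball_rolls_off(x, v, table_edge=1.0, dt=0.1, steps=50):
    """Simulate 'what if the ball keeps rolling?' and report the outcome."""
    for _ in range(steps):
        x += v * dt            # update the model by one imagined step
        if x > table_edge:
            return "falls"     # the imagined scenario ends here
    return "stays"

print(what_if_ball_rolls_off(x=0.0, v=0.3))  # prints "falls"
```

Whatever the brain’s actual implementation looks like, the update loop is the computational core of the ability being described: a model of the situation, revised step by step as the scenario unfolds.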
(Neil’s ideas on perception and cognition don’t make a lot of sense to me.)