In the ‘Moderation’ thread, William J Murray tried to make a case for ideological bias among evolutionary scientists by referencing a 2006 Gil Dodgen post, in which numerous authors emphasise the lack of teleology within the evolutionary process. I thought this might merit its own OP.
I disagree that the authors are showing a metaphysical bias by arguing against teleology. I wrote:
Evolutionary processes, conventionally defined (ie, variations and their changes in frequency due to differential survival and reproduction), do not have goals. If there IS an entity with goals that is also directing, that’s as may be, but the processes of evolution carry on regardless when it isn’t. It is important to erase the notion of teleology from a student’s mind in respect of evolutionary mechanisms of adaptation, and most of those quotes appear to have that aim. Organisms don’t, on the best evidence available, direct their own evolution.
To which WJM made the somewhat surprising rejoinder: “how do you know this?” Of course the simple answer is that I qualified my statement ‘on the best evidence available’ – I didn’t claim to know it. But there is a broader question. Is there any sense in which evolutionary processes could, even in principle, be teleological? I’d say not. You have a disparate collection of competing entities. Regardless of whether there is a supervening entity doing some directing, the process of differential survival/reproduction/migration cannot itself have goals.
An example of evolution in action: the Chemostat.
The operator of a chemostat has a goal – often, to create a pure cell line. The process by which this is achieved is by simultaneous addition and removal of medium, which causes purification by random sampling, which is evolution (a form of genetic drift). How can that process have a goal? There is no collusion between the cells in the original medium to vote one to be the sole ancestor of all survivors. How do I know this? That would be a pretty daft question. I think it would be incumbent on the proponent to rule it in, rather than for me to rule it out.
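The purification-by-random-sampling process described above is easy to simulate. Here is a minimal sketch, assuming a drastically simplified chemostat in which each round of dilution-plus-regrowth is modelled as resampling the population with replacement (Wright-Fisher sampling); the function name and parameters are illustrative, not from any real chemostat model:

```python
import random

def chemostat_drift(n_cells=100, seed=1):
    """Simulate neutral drift in an idealised chemostat.

    Each 'generation', dilution removes cells at random and growth
    replaces them, modelled here as resampling the population with
    replacement. No cell has a goal, and no lineage has any fitness
    advantage; one lineage nevertheless ends up as the sole ancestor.
    """
    rng = random.Random(seed)
    # Every founding cell gets a unique lineage label.
    population = list(range(n_cells))
    generations = 0
    while len(set(population)) > 1:
        # Pure random sampling: survival is luck, not direction.
        population = [rng.choice(population) for _ in range(n_cells)]
        generations += 1
    return population[0], generations

lineage, gens = chemostat_drift()
print(f"Lineage {lineage} fixed after {gens} generations")
```

Which lineage fixes is arbitrary (it depends only on the random seed), making the point that the process reaches the operator’s goal of a pure cell line without the process itself having one.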
walto,
If, as many neo-Aristotelians (Flanagan, Foot, Nussbaum) have suggested, moral values are ultimately grounded in facts about what is and is not conducive to human flourishing, then moral values are both objective (since the goodness of the values is not dependent on the beliefs and desires of any particular person or group of people) and also human-dependent (since if there were no human beings, there would be nothing quite like moral values — though dolphins, elephants, and great apes certainly seem to have something very similar).
It’s not easy to make the necessary distinctions to avoid confusion!
I think I largely agree with that bunch, KN. The only amendment I’d make is that even if values are “dependent on the….desires of a particular person or group of people” (you’ll see I left out “beliefs” there), I think they could be objective anyhow. That could happen, e.g., if desires are precisely what create values.
And it is their directedness towards an end that makes them teleological.
walto,
You’re right that “human-dependent” doesn’t automatically mean “subjective”. The assertion “Mormons regard Joseph Smith as a prophet of God” is objectively true but highly human-dependent, hinging as it does on what Mormons have chosen to believe. (Their belief that Smith is a prophet of God is of course subjective.)
The situation with values is similar. It’s objectively true that orthodox Jews value circumcision, that Christians value baptism, and that Muslims value obedience to God’s will as expressed in the Qur’an — but none of those things is objectively valuable, full stop — they are all valuable to specific valuers.
Likewise with the philosophers that KN mentioned. It’s objectively true that they regard human flourishing as the foundation of objective morality, but their belief is itself highly subjective. It clashes with the Muslim’s belief that absolute submission to God is the sole standard of objective morality, for example.
The Guardian comments drily on Wickramasinghe:
I think you need more than the fact that two sets of views clash to infer that they are subjective. Maybe one set is just wrong. After all the heliocentric and geocentric models clash too.
If you two are going to re-fight this battle, perhaps it would be helpful to agree on whether you are fighting over science or ethics?
I think the sentence “x is valued by members of culture y” is a scientific statement whose truth would be established by an anthropological study of culture y, just as solar-system motions are a subject for astronomy.
The sentence “x is good” on the other hand, assuming good is trying to express ethically good, could have its truth value looked at in at least these ways:
1. It is true if uttered by a member of culture y but may not be true if uttered by a member of another culture.
2. Its truth does not depend on which culture the utterer belongs to (or on the utterer’s personal beliefs).
3. The sentence is not a proposition, ie it is not something which is true or false. Rather, it expresses an emotion (“Yay for x”) or a command (“Do x”).
4. “x is good” iff x is good; that is, a general notion of truth adds nothing. Simply address the merits of whether “x is good” by rational argument on a case-by-case basis for each x.
Those are my understandings of the metaethical positions of cultural relativism, non-relativism, non-cognitivism, and quasi-realism (Blackburn).
Of course, I am assuming the participants in the discussion do not enjoy arguing at cross-purposes.
That is an assumption about values that I have made which may not be correct.
< /lurk back on>
Speaking of language, this popped up in my Feedly today (from The Browser site). It is a quick and fun read, especially for the sample conversations from the experiment: A Neural Conversational Model (pdf). The experimenters train recurrent neural networks on human-human conversations and give examples of the trained network conversing with humans on various topics.
One topic is a conversation at a help desk where the machine converses with a human having computer issues. The machine seems to do well at “understanding” and solving the issue.
There is also a conversation on the nature of morality.
It goes nowhere fast.
I take that as another success for machines imitating real human conversations.
Hi, Bruce. Thanks for your summary and list–it’s pretty slick, but, in the immortal words of Skip James when asked to comment on The Cream’s cover of his “I’m So Glad”–it still needs a little more grease.
What I am suggesting is something like–but a little different from–your (2) above. Like in that it makes value judgments objective, but a little different because I don’t claim that such judgments are ‘non-relative’. I claim, that is, that they are both objective AND relative, like my judgment that the computer mouse is on my right.
Also, I don’t think this view has been stated here before, by me or anybody else, so I don’t think it can really constitute a replay of any prior argument on this board. (I absolutely agree, though, that its suggestion will end in a disagreement that nobody will budge on. C’est la vie philosophique!)
Now, unlurk!
I’ve long been a defender of the view that all judgments are objective and relative.
If they weren’t objective, then they couldn’t count as judgments (ok, there’s a long argument cribbed from Kant and from Davidson as to why that’s so, but basically the idea is that to judge is to take up a position on how things are, yadda yadda yadda).
And judgments can’t be absolute, because they are made by critters with finite bodies and contingent histories. Judgments are at least relative with respect to the kinds of perspectives that are species-typical for us (“species-relativism”), not to mention the other kinds.
So while I agree that values are (or at least can be) objective and relative, so too are all judgments — factual, normative, etc.
KN, I think that (interesting) suggestion is somewhat more radical than my own much more meager claim. Probably deeper too.
I don’t doubt that you can defend it ably, I just don’t want to be put in the position of having to!
walto,
Oh, yes, I wasn’t intending to saddle you with my more radical (and completely unoriginal) claim. The thought that judgments can be both objective and relative is, of course, nothing other than Kant’s Copernican Revolution. William James and John Dewey take this insight, de-transcendentalize it, and naturalize it.
Yes, I guess that’s so! That’s a good point that I hadn’t considered.
I can only say in my defense that, on purpose, I made no attempt to say what could make a non-relativist’s statement about moral values true.
Now I need to think more about the exchange between you and KN.
walto,
Yes, it isn’t the clash per se that renders the issue subjective — it’s that we have no means of resolving the clash.
Suppose that God exists and that he demands obedience, even in cases where obedience is injurious to human flourishing. Do we have an objective moral obligation to obey him? To defy him? Neither?
How would one make an objective determination? I don’t see how it could be done, even in principle.
Now contrast that with the geo/helio debate, where there is a means of making an objective determination. It boils down to this:
walto:
A statement like “the Pope thinks contraception is wrong” is objectively true, but what people really want to know is whether “contraception is wrong” is objectively true.
I claim that the latter question cannot be properly answered, and that anyone who purports to answer it is relying — whether knowingly or not — on a subjective judgment such as “human flourishing is the ultimate good” or “God’s will trumps all else”.
Ok, now we’re at the stage that I think we’ll end up having to agree to disagree. I am fine with about 90% of that post, but there is a 10% subtext that I can’t accept. That is the subtext that, while we reach a point in value theory where we have to make a kind of leap of faith (say, that “human flourishing is the ultimate good”), nothing like that happens at the ultimate stages in the rest of philosophy.
So, for example, take
What we see are physical objects.
What we see are retinal nerve endings.
What we see are sense-data.
What we see are ideas in our minds.
These I take also not to be amenable to proof.
Now, someone might say, “Well, it depends on what you mean by “see.” And it sort of does, and sort of doesn’t. None of those is analytic, by any ordinary understanding of ‘see.’ And the same thing goes for the value claim: somebody might say “Well, if I MEAN human flourishing by ‘the ultimate good’ it MUST be true.” But Moore’s open question argument put the kibosh on that: it doesn’t seem to be a contradiction to claim that some item might be conducive to human flourishing and still not good. The point is that all deductive systems must come to an end somewhere.
But does this mean that every ultimate philosophical position is just as good as any other? I would say NO to that. The better ones comport more closely with ordinary language and common sense, are simpler, fit more nicely with the findings of modern science, etc. But none of those are proofs–either in the case of values or in the case of facts. In my view, that’s just life in the philosophy world.
So getting back to the case at hand, one may take as an ultimate axiom–on faith as it were–that, e.g., human flourishing is the ultimate good or that desires create values or that there are nothing but human emotions and no values, and see where it takes one, just as Carnap experimented with both a sense-data or physical object approach to the “external world.” I myself am a common-sense realist and a value objectivist, not because I think I can prove them, but because they seem to me to comport best with common sense, etc. But I don’t claim that values are in any sense independent of human desires or non-relative just because I take them to be objective. Anyhow, other folks’ mileages will vary, and the vehicles will be tested by their handling, longevity, comfort on long trips, etc.
The point is that, IMO, just as ultimate values aren’t provable, ultimate _______ {fill in the blank} aren’t either. But on this, Bruce is right when he says that we’ve been all over this before….
Since we talking about some philosophy of mind in this thread not long ago, this seems as good a place as any to post a thought I had earlier today:
The past few months I’ve been wondering, off and on, about whether the concept of representation is important for cognitive science — especially if one thinks that the cognitive system is the whole animal in its ongoing transactions with its environments.
It now seems reasonable to say that the structural coupling of sensorimotor abilities and environmental affordances might be causally mediated (at the subpersonal level of description and explanation) by representations qua relatively stable patterns of neuronal activity (modeled as attractors in state-space) aimed at successful coping. If so, the enactivist challenge to cognitivism is aimed only at rejecting any attempt to model those representations as tokens of a formal language (as Fodor and Pylyshyn do).
Strangely enough, in the last half hour I read this passage with the same sentiment from How the body shapes the way we think:
That’s another thing about some enactivist writing that’s bothered me: the implication that if it involves representation and computation, then it must be Fodor’s version of symbolic computation over amodal (ie non-sensory-based, abstract) representations.
But neural networks compute and their activation patterns can be interpreted as representations.
There is also Barsalou’s work, which attempts to ground symbols in stored sensorimotor (that is, modal) memories, and concepts as simulating-type computations over those modal symbols.
Or the predictive approaches, which I need to read more on, but which I believe some think use representations of the probability functions involved in Bayesian analysis.
Then there is the view that we didn’t develop language because our brains already worked that way (a la Fodor), but that our brains are able to think discursively because we developed language for other reasons. I think Dennett may have been one of the originators of that with his “virtual machine” concept for rational thought running on a neural substrate. Virtual machines: another idea he adapted to philosophy from the work of computer scientists.
walto,
It isn’t that we don’t take those leaps elsewhere — it’s just that the chasms aren’t so wide and people tend to agree on the leaps that need to be made.
To conclude that the solar system is heliocentric, for example, we need to (provisionally) assume that we aren’t being fooled by a Cartesian demon who is manipulating our sensory experiences in order to create a heliocentric illusion.
Most of us are willing to make that leap, and justifiably so in my opinion.
The other leaps are similarly justifiable, with the result that scientists and scientifically literate folks embrace heliocentrism almost unanimously, leaving geocentrism to the crackpots.
I don’t see similarly compelling justifications for the assumption that human flourishing is an objective moral good. It’s important to many of us, obviously, but what about that race of intelligent cockroaches from Beta Aquarii?
I suppose it depends on what one means by “representation.”
Count me as skeptical that dynamical systems and attractors are going to be of much use here.
The aspects of cognitive systems that seem particularly important to us involve information. I’m not a computationalist, so I don’t see the brain as doing computation. But I do see it as very much engaged with information. And information, at least as I use the term, can only exist in the form of representations. However, I disagree with the Fodor-Pylyshyn view of the nature of those representations.
Neil,
You accused me of misrepresenting your views, but you haven’t backed up your claim. Do so or withdraw the accusation.
keiths,
Keith, I agree that the question of whether there are objective values isn’t like the question of whether the planets revolve around the sun. The former is an ultimate question in philosophy, the latter is not.
I gave an example of a non-value ultimate question (about what it is that we see). Here are a couple more: Are there universals (like properties) in the world, or only individuals? Are mental events caused by neurophysiological events, or identical to them? Are any human actions free or are they all entirely determined by prior events? All are almost–but not quite–definitional.
(You can almost delimit them by noticing whether they are found fascinating by our anti-philosophy friend Neil and our anti-metaphysical friend KN.) 🙂
They’re what has been called ‘categorial’ since they involve our most basic categories. And, as I discussed above, their being first principles or axioms means they aren’t subject to traditional proof–either deductive, like a mathematical theorem, or empirical, like heliocentrism or special relativity.
They’re stinkin’ perennials, which is part of what makes them endlessly interesting to me (and annoying to Joe F.)
walto,
But the latter can be demonstrated objectively, while the former cannot — and this is exactly the issue we’ve been discussing.
If they can’t be demonstrated deductively or empirically, then what qualifies them as objective? You mentioned common sense earlier, but much of common sense is just informal empirical and/or deductive reasoning.
keiths,
Well, you could say (and many have) that ALL of those so-called ‘ultimate’ questions of philosophy, in not being either like heliocentrism (empirical) or mathematical (deductive) statements, are for that reason nonsense. Smart people, like Carnap and Schlick, have taken just that position. Most philosophers have disagreed, however, but have conceded they’re importantly DIFFERENT from questions of science or math. If you want to restrict the use of ‘objective’ to the more regular sort of propositions, you can do that, but that too seems categorial, since it doesn’t seem to follow strictly from the MEANING of ‘objective’ that there be only empirical and deductive paths to knowledge. If that’s true, it too seems another sort of truth.
But all that’s neither here nor there at present. If you want to make the positivist claim that there are NO philosophical truths that can be known, OK. My point was only that value claims are no different from other philosophical claims in that regard. If the others may be known (and maybe they can’t!) then the value ones may be as well. That is, the status of being a value statement doesn’t automatically make a proposition subjective, if ANY philosophical statements can be objective.
Epistemology != metaphysics/ontology
walto,
I’m not arguing for positivism or claiming that the only paths to objective truth are empirical and deductive. I’m asking what the ‘third way’ looks like:
A philosopher might say: a representation is something with semantic properties (content, reference, truth-conditions, truth-value, etc. — from SEP on Mental Representation). Or even more simply, a representation has content and content has accuracy conditions — from SEP on Contents of Perception.
So if we can specify accuracy conditions for an attractor state, then it could count as a representation.
Representations need vehicles (physical substrates) too, so there would need to be a reasonable way to interpret the attractor as having spatio-temporal causal efficacy.
The quote I gave above is using neurons and the dynamics of their electrochemical interactions to provide those vehicles, probably something to do with spiking patterns in a neural network. Each attractor in the state space would be some subspace of the state space of those spiking patterns. If we could find accuracy conditions for the region of neural spiking patterns specified by that attractor, then that might mean we could call the attractor a representation. To attribute accuracy conditions to neural patterns in that way, you’d have to observe some actual neurons reacting to stimuli.
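The idea of an attractor standing in for a stimulus can be made concrete with a toy model. Below is a minimal sketch using a Hopfield-style network (my choice of illustration, not anything from the SEP entries or the quoted book): one binary pattern is stored via a Hebbian weight matrix, and the network then recovers it from a corrupted cue. The stored pattern is a fixed point of the dynamics, and its “accuracy condition” is simply matching the stimulus it was trained on.

```python
import numpy as np

def hopfield_demo():
    """Store one binary pattern in a Hopfield network, then recall it
    from a corrupted cue. The stored pattern acts as an attractor:
    nearby states flow to it under the update dynamics."""
    rng = np.random.default_rng(0)
    n = 64
    pattern = rng.choice([-1, 1], size=n)     # the 'stimulus' to represent
    W = np.outer(pattern, pattern) / n        # Hebbian learning rule
    np.fill_diagonal(W, 0)                    # no self-connections

    cue = pattern.copy()
    flip = rng.choice(n, size=12, replace=False)
    cue[flip] *= -1                           # corrupt 12 of 64 units

    state = cue
    for _ in range(10):                       # synchronous updates
        state = np.sign(W @ state)
        state[state == 0] = 1                 # break ties consistently
    return pattern, state

pattern, recalled = hopfield_demo()
print("recovered:", np.array_equal(recalled, pattern))  # prints: recovered: True
```

With a single stored pattern and a cue that is still mostly correct, convergence back to the stored pattern is guaranteed, which is the sense in which the attractor can be said to “represent” the original stimulus.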
Categorial analysis!
Bruce, you should mention SEP Representational Theories of Consciousness–I finally got Lycan to mention Hall!
walto,
Could you demonstrate the use of categorial analysis to establish a value as objective?
KN,
I think it’s indispensable. For instance, how would planning be possible in the absence of representation?
Well, Hall wrote a big book on that (What is Value?), but you can get a smaller taste here:
http://www.jstor.org/stable/184990?origin=JSTOR-pdf&seq=1#page_scan_tab_contents
walto,
I was hoping that you could show us a specific example of categorial analysis being used to establish a value as objective.
We’ve discussed this before. The basic idea is to note the apparent intentionality of value language, and the analogousness of inferring values from emotional responses to the inferring of external physical objects from apparent perceptual experiences.
Not sure why you’d want to get into this again. You can just look up one of our prior arguments on the subject if you like.
Yeah, I see the point about the indispensability of “representation” for at least some kinds of “higher” cognition, like planning — or, more generally, for what Dewey called “inquiry”.
The sense I’m getting — both from what’s been said here and from other discussions I’ve had, plus other things I’ve read — is that the debate about “representations” in cognitive systems turns on a couple of different points.
If the really important notion is the idea of information, and esp. information available for use by the cognitive system, then we have a very liberal or broad conception of representation that should not give enactivists any pause for concern. On the other hand, if representations are construed as Fodor and Pylyshyn do — as meaningless tokens in a rule-governed formal language implemented somehow in the brain/mind — then it’s pretty clear that there aren’t any such things.
I agree with BruceS and Neil that enactivists tend to associate representations with F&P-style representations, then toss the baby out with the bathwater. Chemero argues that there’s just no baby there to begin with, and others like Teed Rockwell, Paul Churchland, and Andy Clark disagree.
For the time being, I think I’m going to adopt an official stance of neutrality as to whether enactivism can or should dispense with representations as such, and stress only the constraints that enactivism imposes on a viable concept of representation.
Thank you for the feedback — it’s been very helpful!
walto,
The analogy breaks down because the correctness of moral intuitions can’t be established the way the correctness of perceptual experiences can. But we’ve had this conversation before.
Alex Rosenberg has penned a NYT Stone article on non-cognitivism and why he thinks it is the only option.
BruceS,
Thanks for bringing that to our attention — but, man, what a terrible argument!
You’d think at least he’d be pushing a book or something, right?
More at Leiter.