In the ‘Moderation’ thread, William J Murray tried to make a case for ideological bias among evolutionary scientists by referencing a 2006 Gil Dodgen post that quotes numerous authors emphasising the lack of teleology within the evolutionary process. I thought this might merit its own OP.
I disagree that these authors are showing a metaphysical bias by arguing against teleology. I wrote:
Evolutionary processes, conventionally defined (ie, variations and their changes in frequency due to differential survival and reproduction), do not have goals. If there IS an entity with goals that is also directing, that’s as may be, but the processes of evolution carry on regardless when it isn’t. It is important to erase the notion of teleology from a student’s mind in respect of evolutionary mechanisms of adaptation, and most of those quotes appear to have that aim. Organisms don’t, on the best evidence available, direct their own evolution.
To which WJM made the somewhat surprising rejoinder: “how do you know this?” Of course the simple answer is that I qualified my statement with ‘on the best evidence available’ – I didn’t claim to know it. But there is a broader question. Is there any sense in which evolutionary processes could, even in principle, be teleological? I’d say not. You have a disparate collection of competing entities. Regardless of whether there is a supervening entity doing some directing, the process of differential survival/reproduction/migration cannot itself have goals.
An example of evolution in action: the Chemostat.
The operator of a chemostat has a goal – often, to create a pure cell line. This is achieved by simultaneous addition and removal of medium, which purifies the culture by random sampling – and that is evolution (a form of genetic drift). How can that process have a goal? There is no collusion between the cells in the original medium to elect one of their number as the sole ancestor of all survivors. How do I know this? That would be a pretty daft question. I think it would be incumbent on the proponent to rule it in, rather than for me to rule it out.
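A minimal sketch of the drift-to-fixation point, in Python (the population size, lineage labels and sampling scheme are my own simplifications, not a model of any real chemostat): repeated random sampling alone drives the culture to a single lineage, with no goal anywhere in the process.

```python
import random

def chemostat_drift(n_cells=100, seed=1):
    """Toy sketch: neutral drift to fixation under repeated random sampling.
    Each cell starts as its own lineage; each round, the culture is replaced
    by sampling (with replacement) from the current one -- a crude stand-in
    for simultaneous addition and removal of medium.  No cell 'aims' at
    becoming the sole ancestor, yet one lineage eventually fixes."""
    rng = random.Random(seed)
    population = list(range(n_cells))      # each cell is its own lineage
    rounds = 0
    while len(set(population)) > 1:        # run until a single lineage remains
        population = [rng.choice(population) for _ in range(n_cells)]
        rounds += 1
    return population[0], rounds

winner, rounds = chemostat_drift()
print(f"lineage {winner} fixed after {rounds} rounds of sampling")
```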
I think you have it backwards. There might be definitions of teleology that require an intender that are not covered by “teleology,” but I can’t think of any.
http://www.astronomycafe.net/gravity/gravity.html
http://www.scientificamerican.com/article/physicists-eye-quantum-gravity-interface/
http://www.dailygalaxy.com/my_weblog/2013/11/our-understanding-of-gravity-is-fundamentally-wrong-two-conflicting-theories-of-the-universe.html
http://m.livescience.com/1770-greatest-mysteries-gravity.html
http://helios.gsfc.nasa.gov/qa_gp_gr.html#gravcause
LoL! Thanks Robin, that was worth the price of admission.
William:
The real question isn’t whether they’ve been “sufficiently described in mechanistic terms”, but rather whether they are thus describable in principle.
Detailed mechanistic models can be impractical, as Lizzie points out, and we may choose to adopt a teleological stance to simplify things. It’s a convenience, not an in-principle necessity.
Addendum: some time ago I noted my support of Jay Rosenberg’s argument for “convergent realism” (the paper is here, but it’s on JSTOR and so probably inaccessible to most of you).
I still find convergent realism quite appealing. But on the approach I’m now considering, convergent realism is even more limited than Rosenberg admitted. He thought that CR only held for quantitative natural science, because only when you have exact quantities can you model theory change in terms of Cauchy convergence.
By contrast, if Ladyman and Ross are on the right track, convergent realism is true only of fundamental physics. There could perhaps be different convergences in the different sciences, but I’m more inclined to think that theories in the sciences are too domain-specific and context-dependent to generate the really strong inter-theoretic comparisons that convergent realism requires.
I’m not so sure about that. A teleological stance can disclose real features of a system that exhibits organizational closure and thermodynamic openness (cf. Mossio and Bich, cited above), where the systems have the right kind of dependence on, and independence from, their environments.
The only requirement that physics imposes is that our description of the difference between the linear causality of “mechanistic” systems and the circular causality of “teleological” systems does not involve backwards causation (violation of the arrow of time) or violation of the principle of causal closure of the universe.
As far as I can tell, there’s nothing in fundamental physics (QM + GR + thermodynamics) which requires that the teleological stance is a mere convenience, a second-rate bookkeeping device, more of a heuristic than the mechanistic stance, less “real,” etc.
Actually, I think it’s an in-principle necessity. For one thing, complex systems are chaotic – you can’t predict outcomes simply by knowing the starting values, even if the system is deterministic – you’ve still got to run the whole model to find the answer, which means the model is as large as the system you are trying to model. And with a non-deterministic system, you will have very little chance of getting anything like the same outcome twice. So chaotic systems are intrinsically unpredictable.
But they nonetheless have higher-level properties – we can say things like: usually when we see A emerging, B follows. Or, when A intends B, B usually occurs.
The classic example is weather patterns – we can’t make detailed forecasts of weather more than a few days ahead. But we can state general principles that allow us to say, for instance, that this summer will probably be a cool one because of the El Niño, or that the temperature in November will probably be cooler than the temperature today.
But those are system-level models, not models based on the parts, and they have much larger error bars on their predictions. That’s why statistics is so important in psychology! We can make general predictions about behaviour, but we are often wrong. We just hope that our generalisations are “statistically significant”.
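A toy illustration of both points, using the logistic map as a stand-in chaotic system (the map and parameter values are my choice, not anything from the thread): nearby starting values decorrelate quickly, so detailed prediction fails, yet the coarse long-run statistics of the two runs are far more stable – the climate-versus-weather contrast in miniature.

```python
def logistic_trajectory(x0, r=3.9, steps=200):
    """Iterate the logistic map x -> r*x*(1-x), a standard toy chaotic system."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.500000)
b = logistic_trajectory(0.500001)   # an almost identical starting value

# Detailed prediction fails: the two runs decorrelate within a few dozen steps...
print("step 50:", round(a[50], 4), "vs", round(b[50], 4))

# ...but coarse, system-level statistics are much more stable.
mean_a = sum(a[50:]) / len(a[50:])
mean_b = sum(b[50:]) / len(b[50:])
print("long-run means:", round(mean_a, 3), "vs", round(mean_b, 3))
```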
If the gravitational mass is equal to the inertial mass does it mean that they are the same thing? Do we know why the two values are equal, or is that still a mystery?
Mung:
Einstein addressed that in his famous statement of the equivalence principle:
keiths:
KN:
I’m skeptical, but let me read Mossio and Bich before I respond.
I should have said it constrains scientific theories in the special sciences. My bad.
The equivalence principle is one of the corner stones of general relativity. Now physicists have used quantum mechanics to show how it fails.
New Quantum Theory Separates Gravitational and Inertial Mass
A quick question while I think about a more substantive reply to the bulk of your posts.
What do you mean by a DST?
To me DST is just mathematics: viz, a coupled set of differential equations, usually including derivatives with respect to time (hence dynamic), along with the techniques for analyzing the trajectory of the variables through phase space.
But when you say DST may be a fundamental physical theory, that makes me think you mean something else by DST.
What’s more:
“This theory deals with the long-term qualitative behavior of dynamical systems, and studies the solutions of the equations of motion of systems that are primarily mechanical in nature”
https://en.wikipedia.org/wiki/Dynamical_systems_theory
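A minimal sketch of what that description of DST amounts to in practice: a coupled ODE system integrated numerically, with the trajectory through phase space as the output. The example system (a damped pendulum) and all parameter values are chosen purely for illustration.

```python
from math import sin

def simulate(f, state, dt=0.01, steps=2000):
    """Euler-integrate a coupled ODE system dx/dt = f(x) and return the
    trajectory of the state variables through phase space."""
    trajectory = [state]
    for _ in range(steps):
        derivs = f(state)
        state = tuple(x + dt * dx for x, dx in zip(state, derivs))
        trajectory.append(state)
    return trajectory

# Example system chosen purely for illustration: a damped pendulum,
# with state = (angle, angular velocity).
def damped_pendulum(state, g=9.8, length=1.0, damping=0.3):
    theta, omega = state
    return (omega, -(g / length) * sin(theta) - damping * omega)

traj = simulate(damped_pendulum, state=(2.0, 0.0))
print("start:", traj[0], "-> end:", tuple(round(x, 3) for x in traj[-1]))
```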
Edward Feser writes:
So no.
Mung: The problem with any computational theory is that it is necessarily based upon switching, and that ignores the actual dynamics of living organisms.
Discrete and continuous processes in computers and brains
I’m not sure what you mean by “bleed into”. So let me try my point again.
Many scientific models based on mechanisms simultaneously incorporate two sciences. For example, neuroeconomics tries to link people’s decision processes with neuroscience. The phenomena and context come from economics; the components and their causal interrelations come from neuroscience.
In your doctor scenario, an analogy would be explaining and treating a patient’s depression by drugs whose development was based on neurochemistry.
It is not that someone takes two stances at the same time to look at different aspects of a situation; rather, two stances/sciences are integrated in a single mechanism crossing both.
Again, I have no issues with rainforest realism. I just want to say science does build integrated models which mix ontologies from different sciences. So one’s approach to ontology and stances has to recognize that.
BTW, I make no claims about the success of neuroeconomic models or whether we really understand the neuroscience of depression. Only that scientists are trying to build models of that form.
On a related note, here is a skeptical article about neuroscience and politics. I thought it might interest you as it calls on the enactivists for support of its critique.
Can you summarise why you find this essay convincing? It concedes that neurons are switches. Why do you think that a “computational theory” of cognition “ignores the actual dynamics of living organisms”? What essential feature is, in your view, ignored?
I appreciate your emphasis on that point. And thank you for the article on neuroscience and politics!
I am not sure that emergentism requires that the base has ontological primacy.
In fact, despite Ladyman and Ross, there still seems to be a lot of philosophical ink continuing to be spilled on defending the ontologically independent causal powers of the entities of the special sciences from Kim’s overdetermination arguments.
For example, the Mossio and Bich paper you linked above cites an earlier paper of theirs which appears to be doing exactly that for their model of biological closure of constraints (I’ve only read the overview).
Robin,
I’ll take that to mean that you cannot answer my questions.
Perhaps not, but it does strike me that emergentism is going to require something like the claim that the emerged-from has ontological priority over the emerging/emergent. But even relative ontological priority is a slippery slope – and it is different from the mere observation that simpler systems appeared earlier in the being-becoming of the cosmos than more complex systems. Cosmological priority is not (necessarily) ontological priority.
That’s a nice observation! But if Ladyman and Ross are basically right, then this ink spillage is unnecessary — rather, under the rules of ‘rainforest realism’, all of the special sciences characterize the entities of their respective domains as having independently specifiable causal powers! This gives us a very different way around Kim’s objections.
I should note that, in what I’ve said here so far, I’ve been writing as if Ladyman and Ross’s scientific metaphysics (“the world is real patterns”) is roughly consistent with the process ontology that Evan Thompson inherits from Deleuze and Merleau-Ponty. I actually have no idea if those two approaches cohere or not, and I’m still waiting to find someone who can tell me if I’m talking sense or not.
From a recent paper by Mark Okrent, author of Rational Animals. (The full paper is a criticism of Brandom’s interpretation of Heidegger; I assume this will not be of much interest to most of you.)
——————————————————
As I see it, here are all of the five autonomous layers of intentionality. (To say that the layers are autonomous is to say that it is possible for an agent to exhibit any of the lower layers without exhibiting the higher, but that it is not possible for an agent to exhibit any level of intentionality without also exhibiting all of the levels below it.) At the lowest level is the type of goal-directed teleology that is displayed by virtually all of the animals. The actions of such agents have goals, even though the agents themselves have no states that are about or directed towards anything. At the next highest level are those agents whose behavior exhibits instrumental rationality. Such agents, including many mammals and birds, not only do things in order to accomplish ends, they also do things for reasons of their own. There is no reason that such instrumentally rational agents can’t use found tools, and many of them do, even if such agents neither intend tools as to be used in socially prescribed ways nor act to improve the tools that they use. . . . At the next level are those animals, if there are such, that display both instrumental rationality and the rudimentary form of culture that is necessary to institute tool types as to be used in various socially prescribed ways, but do not display the kinds of interpretive activity, such as improvement and repair, that Heidegger marks as the necessary condition on discourse. Even such agents, however, are significantly different from us. As Bert Dreyfus puts the point in the title of the paper he has just given, ‘skillfully coping human beings differ from animals’. I would add, however, that such animals are essentially different from human beings, because human beings are self-interpreters. One level up from the social tool users are those non-linguistic tool users, if such exist, who not only use socially instituted tools, but interpret the roles that define those tools, and thereby themselves. . . The last layer at the top of the cake is language. It tastes good, and it is good for you, but unless all of the other layers are capable of autonomous existence, the linguistic top layer is impossible.
To me this just sounds incoherent. =P
Is there such a thing as non-goal-directed teleology?
What does it mean to say that an agent has goals but no states that are about or directed towards those goals?
In context, by “state” Okrent means “mental state,” and specifically, mental states such as beliefs and desires. An animal can have goal-directed behavior without having any beliefs about the goal or desires to attain it. The main example Okrent uses in his book is the egg-laying behavior of the Sphex wasp: fully teleological but not intentional.
keiths, to William:
Lizzie:
You’re right that we can’t predict the longer-term behavior of deterministic, chaotic systems, but that is only because we don’t have the computational resources and can’t do infinite-precision math, in practice. A Laplacian demon wouldn’t share these limitations and could predict the long-term behavior of any deterministic system, chaotic or not.
The limitations are in practice, not in principle.
True, but that’s because of the non-determinism, not because of the chaos. A Laplacian demon with knowledge of the outcome of every nondeterministic event could in fact model a chaotic system perfectly.
My overall point being that we include teleology in our models for practical reasons, not because it has a genuine existence apart from mechanism.
I’ve read this several times, but I can’t parse it. Can you try again?
I think that is a distinction without a difference. Even the Laplacian demon with infinite computational resources and infinite precision maths would still have to run the entire simulation in order to find the answer.
Reciprocating Bill,
I haven’t heard about Dennett’s upcoming book. What do you know about it?
Lizzie,
Sure, but my point is that the demon’s simulation is nothing but a simulation of mechanism. It makes correct predictions without invoking teleology.
Teleology is something we must invoke in practice, but it is not needed in principle. Mechanism is enough.
But practice is all we’ve got, keiths.
From within the system, the only way of modelling what happens in the system, when it concerns intentional agents, is at the intentional level. The point about the demon is that it is outside the system.
If it wants to model a fellow demon then it’s stuck.
ETA: another way of putting this might be to say: a simulation can only be a good predictive model of a chaotic system if the simulation is not itself part of the model (and the system is deterministic, and all the starting values are known yadda yadda).
In fact, of course, simulate is exactly what we do as intentional agents – we simulate the effects of alternative courses of action, and choose the course of action whose simulated outcome we prefer. And when predicting the actions of other intentional agents, we simply model this process at one remove: “If I was that person, I would do X”.
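A toy sketch of that process (the actions, travel times and preference functions below are all invented for illustration): an agent simulates each candidate action with a forward model and picks the preferred outcome, and it predicts another agent by running the same routine with the preferences it attributes to them.

```python
def choose(actions, simulate_outcome, preference):
    """Simulate each candidate action and pick the one whose outcome we prefer."""
    return max(actions, key=lambda a: preference(simulate_outcome(a)))

# Hypothetical forward model: minutes each action takes to get me somewhere.
outcomes = {"walk": 20, "bus": 10, "drive": 12}
simulate_outcome = lambda action: outcomes[action]

my_preference = lambda minutes: -minutes               # I prefer arriving sooner
print(choose(outcomes, simulate_outcome, my_preference))        # -> 'bus'

# Predicting another agent "at one remove": same routine, but with the
# preferences we attribute to them ("if I was that person...").
their_preference = lambda minutes: -abs(minutes - 20)  # they enjoy the long walk
print(choose(outcomes, simulate_outcome, their_preference))     # -> 'walk'
```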
evolution by definition must be teleological or it runs aground extremely fast.
This is why the only way the darwinian evolution narrative survives is by co-opting 98% of what needs to be explained.
Assume OoL. Assume reproduction, Assume multi-cellularity. Assume all the mechanisms required to handle the ramifications of reproduction and multi-cellularity.
And viola, evolution works baby…….no shit, sherlock.
Can you explain what you mean by this?
Not really.
Certainly.
That’s implicit in OoL
Nope.
Nope.
I think you’ll find that Sherlock played the violin, not the viola.
I suspect L-R does not help, but to explain why I need to review the usual approach to determining whether emergent entities are real.
The philosophy on the topic that I’ve read accepts the terms of the arguments as presented by Kim in Making Sense of Emergence. Roughly: Physicalism requires supervenience on the entities of physics at a minimum. Emergentism claims that novel irreducible entities emerge while still supervening. To gain reality for the emergents, these entities are claimed to have irreducible, underivable causal powers. Kim tries to show how the assumption of supervenience contradicts that possibility, eg by over-determining causes if one allows unique causal powers for the emergents.
One counterargument to Kim is via multiple realizability. If higher-level entities can supervene on different physical substrates while retaining a single type, then that would make the causal properties of that type irreducible to any supervened-upon substrate. But MR has dubious scientific credentials, at least in issues of mental causation. Also, there are arguments that if a type is truly multiply realizable, then it cannot be legitimately considered a single type.
A more common approach is to look at the nature of the causal properties of the upper-level entity and show that they stand on their own. One way of doing this is by using a plain counterfactual approach to causation, as with early Davidson. But this approach has serious philosophical issues (eg how to derive the outcomes of the counterfactual cases without appeal to, eg, regularity principles). And Kim claims that it is unintuitive to apply counterfactual causation to mental causation in particular, since it ignores our intuition of agency, which seems to involve some kind of process/transfer of energy in causation, not just counterfactual cases.
Instead, a more scientifically friendly and sophisticated notion of causality is used to argue against Kim. Woodward’s interventionist approach seems helpful: it combines counterfactualism with the concepts of Bayesian networks, an important tool for determining causality in the special sciences. For example, his Mental Causation and Neural Mechanisms does a nice job of examining Kim’s concerns and how to counter them via interventionist models of causation.
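The interventionist idea is roughly that a variable counts as a cause when setting it directly by intervention (cutting its usual incoming arrows) changes the downstream distribution. Here is a deliberately cartoonish sketch; the three-variable chain and all the probabilities are invented for illustration and make no claim to capture Woodward’s actual formalism.

```python
import random

rng = random.Random(0)

# A toy causal chain, invented for illustration: Neural -> Mental -> Behaviour.
def sample(do_mental=None):
    neural = rng.random() < 0.5                          # exogenous neural state
    mental = neural if do_mental is None else do_mental  # do(M=m) cuts the N->M arrow
    behaviour = mental and (rng.random() < 0.9)          # behaviour mostly tracks the mental state
    return behaviour

def p_behaviour(do_mental=None, n=10000):
    """Estimate P(Behaviour) by sampling, with an optional intervention on Mental."""
    return sum(sample(do_mental) for _ in range(n)) / n

print("P(B), observational:     ", round(p_behaviour(), 2))
print("P(B | do(Mental=True)):  ", round(p_behaviour(True), 2))
print("P(B | do(Mental=False)): ", round(p_behaviour(False), 2))
```

The point of the sketch is only that the higher-level variable earns its causal status by the difference interventions on it make downstream, not by anything said about its physical substrate.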
Now why does this relate to L&R? I know they reject Kim’s metaphysics and concepts like supervenience. Instead, I understand them to be saying the real patterns detected by each special science provide all the needed ontological justification for the entities and their associated causal powers within each separate special science.
But how do you justify the reality of the patterns in those sciences? I think through the scientific disciplines each applies. And that means you need to use the scientifically accepted notions of causality in those domains. Which brings us back to the considerations raised (eg) by Woodward.
unsurprisingly, keiths is oblivious to the teleological nature of the word mechanism.
watch him wiggle free of it though, through pedantic feats of semantic wizardry.
Well, can you explain what you mean by this?
Dr. Liddle retorts “NOPE”. Gotta love the speed at which she rips apart a logical observation.
Thanks though, for catching that transposition. It’s a common enough spelling error for that particular arrangement of letters.
Voila does sound a whole lot better than viola.
Kantian Naturalist mentioned it upthread. That’s all I know about the book. I know Dennett was very favorably impressed with Millikan’s Language, Thought and Other Biological Categories, which directly deals with the nature of “function” in biology, as far back as The Intentional Stance (I think).
actually in the darwinian evolutionary narrative, reproduction is not implicit in OoL. first comes the most basic proto-cell by sheer culmination of undirected, goalless particles of matter clumped together by happenstance.
THEN, somehow basic reproduction. THEN, somehow multi-cellularity. THEN, somehow complicated mechanisms to handle the reproduction of multi-cellular organisms. THEN, somehow a digestive system (mindful of the notion that digestion presupposes the need for nutrients), THEN somehow a motility mechanism (mindful that movement presupposes a need to move). THEN somehow a sensory mechanism (mindful that senses pre-suppose a need to navigate).
The evolution narrative only gets off the ground by assuming early life miraculously avoided shipwrecks over and over again. Not just reproductive shipwrecks, but sensory wrecks, motility wrecks, digestive wrecks, defensive wrecks, all kinds of wrecks. On and on and on.
I enjoyed that paper and its new approach to Dennett.
But I am concerned that her arguments lead to “Richard Rorty’s milder-than-mild irrealism, according to which the pattern is only in the eyes of the beholders” (quoted from the concluding section of Real Patterns). Dennett says he wants to and does avoid this irrealism.
I’m thinking of statements like these from her paper:
As I understand her argument, it is saying that the objects of the intentional stance gain their reality by the success of each individual’s coping strategies. In particular, I don’t see anything specific in the paper about moving beyond the individual in an objective way. Dennett, on the other hand, talks both about the winning of bets by the objectively correct balancing of simplicity and predictive power, and also about the “statistical effect of very many concrete minutiae producing, as if by a hidden hand, an approximation of the ‘ideal’ order”, which he relates to Churchlandian connectionism and so neuroscience, I believe.
I believe you raise a similar concern when you differentiate a stance from a pose in your discussion of religion and magic. You say:
I think there is a typo there which means I am having some trouble parsing your point, but I believe you are saying that stances require objective testability, poses do not. I think that appeal to testability requires the objective disciplines of science, and it seems to me that her arguments lack access to those resources.
Unless of course you recognize “Goethean science” as a science. Modern “Science” is very well suited to the study of inorganic nature where things are reduced to their fundamental properties. But I would say if we are looking for the fundamental entity in the organic realm it would be the whole organism. When an organism is reduced to fundamental particles it is no longer a living being but becomes dead matter.
Gordon L. Miller in the introduction to The Metamorphosis of Plants writes:
From Craig Holdrege
Goethe’s empiricism goes beyond the senses but he adds nothing to the phenomena which does not already belong to the real nature of those phenomena. He did not deny teleology in nature. He just thought that there were better avenues of enquiry than to ask, “what purpose is served by some form or other”.
BruceS,
That’s an interesting objection to Kukla’s argument. In response, I would want to stress the social as well as embodied character of stances, in part because of how one learns to take up a stance, and in part because the testability of subjunctive conditionals requires shared criteria for testability.
I also suspect — though this is more tentative on my part — that one way we can retrospectively distinguish between stances and poses is based on whether different participants in the stance exhibit any convergence in their judgments over time. Convergence in judgment over time (under “ideal” communicative conditions — communication not distorted by power dynamics, all have equal access to relevant evidence, etc.) is, by pragmatist lights (ever since Peirce!) a reliable indicator of objective reality, even if the objective realities in question are only disclosed from the perspective of a particular stance.
I only heard about it from a friend of mine the other day. All I was told is that it returns to questions about teleology and function. In some recent work he’s been arguing that we can’t explain biological phenomena without talking about reasons.
CharlieM,
Nice to see someone here who admires Goethe’s holistic methodology of science!
But if we need objective, “shared” criteria to claim reality for whatever the stances “grapple with”, why wouldn’t that show that whatever they are grappling with must have been already there? Then we could use that justification of pre-existence as the basis of the target’s reality, not its detection by stances.
Which is what I take Dennett to be at least gesturing at in his spin of Churchland in Real Patterns when he tries to explain his version of realism.
Why is this “irrealism”?
I see nothing more realistic than that patterns are of our creation. What I see as unreal is the widely held view that there are human-independent patterns and that cognition works by finding them. I see this mistake as a major failure of philosophy.
Neil,
So you would say that the pattern of, say, a salt crystal is not human-independent?
Lizzie,
It’s all we’ve got, but the demon doesn’t share our limitations.
In which case a meta-demon is required. 🙂
Remember, we are responding to William’s claim that intention is ontologically real and independent of mechanism. According to William, the fact that we need to invoke teleology in our models is evidence for the reality of that mechanism-independent version of intention.
If we can predict outcomes without recourse to anything other than mechanism — even if this is possible only in principle, not in practice — then we have falsified William’s claim. We have shown that intention arises from mechanism and is not independent of it.
Separating out something and calling it a salt crystal is already human dependent.
Hi Neil:
I think we’ve been through our positions about scientific realism in past threads.
In this thread, I understand that “domains of separate sciences” is sometimes replaced by, and possibly expanded by, “stances”. I think, and I understand KN thinks, that to consider the entities of each to be real requires objectively justified explanations, where the norms/guidelines for assessing explanations as objective are themselves pragmatically justified, as I’ve outlined previously in the thread.
So it comes down to whether the success of sciences in predicting novelty and in manipulation justifies a philosophical conclusion that the entities and/or structures captured by the models of the theories can be considered objectively real.
I say they can. I understand from past discussions that you don’t see it that way. Fair enough.
(My concern with Kukla is that she seems to have omitted the objectivity that Dennett was concerned with. Of course, she may not think that is an important omission).