Let’s suppose there really is a Ghost in the Machine – a “little man” (“homunculus”) who “looks out” through our eyes, and “listens in” through our ears (interestingly, those are the two senses most usually ascribed to the floating Ghost in NDE accounts). Or, if you prefer, a Soul.
And let’s further suppose that it is reasonable to posit that the Ghost/Soul is inessential to human day-to-day function, and necessary merely for conscious experience and/or “free will”; that it is at least hypothetically possible to imagine a soulless simulacrum of a person who behaved exactly as a person would, but was in fact a mere automaton, without conscious experience – without qualia.
Thirdly, let’s suppose that there are only a handful of these Souls in the world, and the rest of the things that look and behave like human beings are Ghostless automatons – soulless simulacra. But, as in an infernal game of Mafia, none of us know which are the Simulacra, and which are the true Humans – because there is no way of telling from the outside – from an apparent person’s behaviour or social interactions, or cognitive capacities – which is which.
And finally, let’s suppose that souls can migrate at will, from body to body.
Let’s say one of these Souls starts the morning in Lizzie’s body, experiencing being Lizzie, and remembering all Lizzie’s dreams, thinking Lizzie’s thoughts, feeling Lizzie’s need to go pee, imagining all Lizzie’s plans for the day, hearing Lizzie’s alarm clock, seeing Lizzie’s watch, noting that the sky is a normal blue between the clouds through the skylight.
Somewhere an empty simulacrum of Barry Arrington is still asleep (even automatons “sleep” while their brains do what brains have to do to do what brains have to do). But as the day wears on, the Soul in Lizzie’s body decides to go for a wander. It leaves Lizzie to get on with stuff, as her body is perfectly capable of doing; she just won’t be “experiencing” what she does (and, conceivably, she might make some choices that she wouldn’t otherwise make, but she’s an extremely well-designed automaton, with broadly altruistic defaults for her decision-trees).
The Soul sees that Barry is about to wake up as the sun rises over Colorado, and so decides to spend a few hours in Barry’s body. And thus experiences being Barry waking up, probably needing a pee as well, making Barry’s plans, checking Barry’s watch, remembering what Barry did yesterday (because even though Barry’s body was entirely empty of soul yesterday, of course Barry’s brain has all the requisite neural settings for the Soul to experience the full Monty of remembering being Barry yesterday, and what Barry planned to do today, even though at the time, Barry experienced none of this). The Soul also notices the sky is its usual colour, which Barry, like Lizzie, calls “blue”.
Aha. But is the Soul’s experience of Barry’s “blue” the same as the Soul’s experience of Lizzie’s “blue”? Well, the Soul has no way to tell, because even though the Soul was in Lizzie’s body that very morning, experiencing Lizzie’s “blue”, the Soul cannot remember Lizzie’s “blue” now it is in Barry’s body, because if it could, Barry’s experience would not simply be of “blue” but of “oh, that’s interesting, my blue is different to Lizzie’s blue”. And we know that not only does Barry not know what Lizzie’s blue is like when Barry experiences blue (because “blue” is an ineffable quale, right?), he doesn’t even know whether “blue” sky was even visible from Lizzie’s bedroom when Lizzie woke up that morning. Indeed, being in 40 watt Nottingham, it often isn’t.
Now the Soul decides to see how Lizzie is getting on. Back east, over the Atlantic it flits, just in time to catch Lizzie getting on her bike to ride home from work. Immediately the Soul accesses Lizzie’s day, and ponders the problems she has been wrestling with, and which, as so often, get partly solved on the bike ride home. The Soul enjoys this part. But of course it has no way of comparing this pleasure with the pleasure it took in Barry’s American breakfast, which it had also enjoyed, because that experience – those qualia – are not part of Lizzie’s experience. Lizzie has no clue what Barry had for breakfast.
Now the Soul decides to race Lizzie home and take up temporary residence in the body of Patrick, Lizzie’s son, who is becoming an excellent vegetarian cook, and is currently preparing a delicious sweet-potato and peanut butter curry. The Soul immediately experiences Patrick’s thoughts, his memory of calling Lizzie a short while earlier to check that she is about to arrive home, and indeed, his imagining of what Lizzie is anticipating coming home to, as she pedals along the riverbank in the dusk. Soul zips back to Lizzie and encounters something really very similar – although it cannot directly compare the experiences – and also experiences Lizzie’s imaginings of Patrick stirring the sweet potato stew, and adjusting the curry powder to the intensity that he prefers (but she does not).
As Baloo said to Mowgli: Am I giving you a clue?
The point I am trying to make is that the uniqueness of subjective experience is defined as much by what we don’t know as by what we do. “Consciousness” is mysterious because it is unique. The fact that we can say things like “I’m lucky I didn’t live in the days before anaesthesia” indicates a powerful intuition that there is an “I” who might have done, and thus an equally powerful sense that there is an “I” who was simply lucky enough to have landed in the body of a post-anaesthesia person.

And yet it takes only a very simple thought experiment, I suggest, to realise that this mysterious uniqueness is – or at least could be – a simple artefact of our necessarily limited PoV. And it is a simple step, I suggest, to consider that a ghostless automaton – a soulless simulacrum – is actually an incoherent concept. If my putative Soul, who flits from body to body, is capable of experiencing not only the present of any body in which it is currently resident, but also that body’s past and anticipated future, yet incapable of simultaneously experiencing anything except the present, past, and anticipated future of that body, then it becomes a redundant concept.

All we need to do is to postulate that consciousness consists of having accessible a body of knowledge available only to that organism, by simple dint of that organism being limited in space and time to a single trajectory. And if that knowledge is available to the automaton – as it clearly is – then we have no need to posit an additional Souly-thing to experience it.
What we do need to posit, however, is some kind of looping neural architecture that enables the organism to model the world as consisting of objects and agents, and to model itself – the modeler – as one of those agents. Once you have done that, consciousness is not only possible for a material organism, but inescapable. And of course looping neural architecture is exactly what we observe.
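That looping idea can be sketched in a toy way (every class and name below is an illustrative invention, not a real cognitive model): an agent whose world model contains agents, and which includes a model of itself – the modeler – among them, closing the loop.

```python
# Toy sketch only: an agent models the world as a set of agents,
# and includes a model of itself among the agents it models.

class AgentModel:
    """A stand-in for the organism's model of some agent."""
    def __init__(self, name):
        self.name = name

class SelfModellingAgent:
    def __init__(self):
        # The world model contains models of other agents...
        self.world_model = [AgentModel("Lizzie"), AgentModel("Barry")]
        # ...and a model of the modeler itself, which closes the loop:
        # the thing doing the modelling appears inside its own model.
        self.self_model = AgentModel("me")
        self.world_model.append(self.self_model)

    def agents_in_world(self):
        return [a.name for a in self.world_model]

agent = SelfModellingAgent()
print(agent.agents_in_world())                 # ['Lizzie', 'Barry', 'me']
print(agent.self_model in agent.world_model)   # True
```

The point of the toy is only structural: nothing extra has to be added for the model to contain the modeler; the loop is just an ordinary data relationship.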
I suggest that the truth is hiding in plain sight: we are conscious because when we are unconscious we can’t function. Unless the function we need to perform at the time is to let a surgeon remove some part of us, under which circumstances I’m happy to let an anaesthetist render me unconscious.
Of course, most dualists think that the mind/soul does more than merely experience. They usually ascribe memory, will and morality to it. For these folks, a soulless automaton would not be a simulacrum of a normal human, even when viewed solely from the outside.
Except that you draw no distinction between having access to that knowledge and having access to the experience of consciousness. Most of us can conceive of beings that have the former but not the latter. Daniel Dennett refers to this as “the zombic hunch” and has battled mightily to show that it is a false intuition. I’m not yet persuaded.
Lizzie, you are basically going to say Coyne is right: everything is an illusion. We act as if consciousness, free will and morality were real because it is better for us, but unfortunately we know they are an illusion.
I originally started thinking about consciousness (or, more correctly, about learning) in terms of what a homunculus would have to do. And I am pretty much at the point of concluding that there is sufficient homunculus already in the kind of homeostatic processes that we find in biological systems.
Incidentally, I have been posting a little about this at my blog (at a slow post rate).
In that case, we have unnecessary duplication. We don’t have to invoke a soul to account for memory, nor volition, nor even morality by some yardsticks (it would be easy enough to program an altruistic robot, for example).
So what can a soul remember that a body can’t? Anything? What can a soul intend to do that a body can’t? Anything? What can a soul decide that a body can’t?
Well, I agree that I did not definitively zap that bug. But I think I have shown that whenever my putative experiencing Soul arrives in a new body, it has all the information it needs to experience the whole caboodle, including memory and plans (for which we have good materialistic models), but none of the information it needs to experience anything else.
So what does it have that the body does not?
Could we say, for instance (in my scenario) that Barry has an intermittent Soul, but that because the Soul has access to all the stuff it’s missed, it’s the same as having a continuous Soul? And, if so, could we say that “consciousness” is the process of accessing that information, whether intermittently or continuously?
And, going with the intermittent idea, could we posit that this Soul (or whatever you want to call it, A Consciousness, if you like – I try not to reify Consciousness!) is something that “samples” the material processes in the brain at intervals (however short – people even talk about the “frame rate” of consciousness, which I think is ill-conceived, but I see what they are trying to get at), creating “experience” out of the currently available or active information?
And could we not go from there (if you are with me thus far) to the notion that consciousness itself is simply what we call the act of “being conscious of something” – whether it is our plans, our past, our itchy nose, our self, our mortality – not a continuous state at all?
Another not-so-abstract thought experiment I like (well, shudder) to try is the idea that anaesthetics may not actually put you “out” at all, but may give you retrospective amnesia. This is just slightly true. But let’s say it is completely true – that what happens when you undergo surgery is that the anaesthetist paralyses you so you can’t jiggle about under the knife, but you feel every cut and wrench. But afterwards, they give you a quick shot of something that erases all memory of the “experience” – re-sets your neurons, if you like, to their pre-operative condition.
Everyone would report that the “experience” of the operation had been completely painless, and recommend it to all their friends. Would they “really” have experienced the scalpel or not?
I suggest that the question is incoherent for just the same reason as “a consciousness” is incoherent – experiencing is something we do, not something we are. And, post-operatively, we are unburdened with the experience of being sliced open while fully conscious – nonetheless during the operation we were very much burdened with it. And that’s because experience and memory are intimately related, as are memory and imagination (both involve the construction of experience).
Rather than an illusion, I would argue that the problem is undefinable.
No, I am not. I don’t think they are “illusions” at all. I think they are constructions – models that, unlike “illusions” are perfectly coherent and functional.
I think our entire perceptual system consists of constructing and testing models of the world. I don’t think constructing a model of the world that consists of volitional conscious agents, of which we are an exemplar, is a poor model at all. I think it’s a very good model.
Unlike, say, the “moon illusion” in which we perceive the moon as larger at the horizon than at the zenith, which persists despite the fact that it is contradicted by any simple measurement made between finger and thumb.
There is no inherent contradiction in thinking of ourselves as volitional agents capable of rational and moral decision-making. It’s what we do.
Cool. Will check out. Why not post a link?
What would the Ghost experience if it transferred into someone under anesthesia while undergoing major heart surgery with a small likelihood of success? Would the Ghost have the consciousness and “will” to decide to transfer out before it’s too late?
And, by the way, what is the mechanism by which the Ghost “attaches” itself to and “releases” itself from the individual it inhabits? What is the Ghost’s “self”?
Clicking on my avatar should get you there.
Well, I guess my rather clumsy point is that in order to have the ghost experience what “we” experience, we have to make sure it doesn’t experience anything that “we” don’t!
So it doesn’t add anything – all the memories, plans, decision-making, sensory inputs, out of which the ghost must make “experience” are already there. It doesn’t seem to have anything left to “do”.
Indeed; your point was well-made. 🙂
I often like to demonstrate the same kinds of issues in the form of questions that require thinking through the details of how such non-physical entities interact with the physical world.
It is one thing to postulate such entities; it is a much more difficult task to explain how they would actually work.
I really like what you wrote here:
I think that last thing is really critical.
Lizzie,
Exactly. I continually ask those questions of dualists, but I never get a coherent answer. Most dualists, especially the naive ones at UD, are completely unaware of the neuroscientific evidence showing that all of these things can be impaired or even eliminated entirely by disrupting the neural mechanisms that implement them.
Even the will itself is a neural phenomenon. From a 2005 comment of mine at UD:
Yes indeed – but a point I would also make is that just as the “soul” doesn’t add anything, nor do we lose anything by removing it. Mr M has a problem, but his problem exemplifies just how free our normal decision-making is, not how unfree it is!
Simple, Lizzy: a soul can intend to stop the addictions of the body, like smoking, over-eating, drugs, etc.
The body never intends to stop smoking, over-eating, getting high. It likes those experiences. It does not intend to stop; whereas the soul can/does intend to redirect behavioral patterns back to physical and emotional health.
Addiction in a nutshell is the soul’s abdication of its superior power of intent; allowing the body free rein to indulge.
~~~The soul governs the mind, the mind governs the body. The body protests and submits in turn.~~~
Lizzie,
Nothing that I can see, which is why I am not a dualist. Arrington et al claim that a soul is necessary to explain qualia, but that just pushes the problem back a level. How does immaterial soul-stuff account for qualia? Barry certainly can’t tell us.
Well, I think it’s clear that we can’t have consciousness without that information, but it’s not so clear that having that information necessarily brings about consciousness (in the sense of subjective experience). The zombic hunch rears its head again. Just as it seems possible for a computer to process information — including information about itself and its “sensory” inputs — without having subjective experiences while doing so, it also seems intuitively possible for a biological organism to do the same.
Is there something about human-level information processing that necessitates an accompanying subjective experience? Not that I can see, although it would be great to be persuaded otherwise, since that would solve a lot of philosophical problems.
Well, consciousness certainly isn’t a binary phenomenon. As a meditator, I’m very aware (I almost said ‘conscious’) of how the object of our consciousness is constantly shifting, and how we are sometimes conscious of our own consciousness and sometimes not. So in that sense, at least, consciousness is not continuous.
And it’s possible to imagine consciousness (in the sense of subjective experience) winking on and off without our “knowledge”, as in the case where Barry remembers his consciousness as being continuous when it really wasn’t.
The important point here is that the “zombic hunch” entails epiphenomenalism. Subjective experience, per se, cannot have any causal power if the zombic hunch is correct. If it did, then the behavior of philosophical zombies would differ from that of people having subjective consciousness, thus violating the requirement that they be externally indistinguishable from normally conscious humans.
I’ve thought about that also. I think the answer has to be “yes”, unless you want to deny all subjective experience. If erasing the memory means that the experience wasn’t real, then none of our experiences are real. The memories all get erased when we die.
I think that’s just a linguistic quibble. To have consciousness is to be conscious, just as to have happiness is to be happy. We don’t believe that happiness exists in some Platonic realm, independent of the process of being happy, and we don’t think that about consciousness either.
Of course the soul adds something.
Ideas.
What is the supporting evidence for this? Find the molecule that triggers an idea!! But then you have to ask what triggered the molecule, etc. etc.
I think the problem Lizzie has with ‘soul’ is a mischaracterization of soul as being unnatural. Of course it is natural; but simply immaterial. Kinda like gravity. It too is immaterial but can only be understood from its effects.
Well, we can understand soul from its effects: ideas.
keiths,
Problem is, Keiths: if you are aware of your consciousness and then not, then the I that does the ‘aware’ thing is different than consciousness.
It is the I that is the soul. Consciousness is its current dwelling place.
Isn’t it the point of meditation to realize the I as distinct from consciousness?
Steve,
You don’t think it’s possible for consciousness to be conscious of itself? Why not?
That’s an assertion. Can you provide supporting evidence?
That’s one metaphysical interpretation of meditation, but it’s not one I share. I see meditation as the cultivation of certain kinds of consciousness, but not the cultivation of something distinct from consciousness.
In the case of Mr. M, why doesn’t his soul provide the idea “I’d better get my head above water before I die”?
In the case of people with Cotard’s syndrome, why doesn’t the soul provide the idea “I’m not really dead”?
In the case of people with severe Alzheimer’s, why doesn’t the soul provide many coherent ideas at all?
The obvious answer is that there is no soul, and all of those functions are carried out by the brain. Damage the brain in certain ways and you impair or obliterate those functions.
It isn’t a single molecule that triggers an idea. It’s a complex collection of molecules (the brain) interacting with a complex environment (the body, the senses, and the rest of the universe).
Well, possibly, but my point is that we don’t have to postulate a separate “soul” to account for ideas. Indeed it isn’t clear to me how a non-material soul would help explain ideas anyway. “Ideas” aren’t a particularly challenging problem for neuroscience, unlike “consciousness”, which some claim is. Genetic algorithms readily come up with “ideas”, which is why we use them to find solutions to tricky problems that require “lateral” thinking (and I suggest that brains use a similar system).
Could be stochastic noise, it doesn’t really matter. The point is that once you have a feedback system, you will tend to generate solutions. It’s not the trigger that matters, it’s the feedback.
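A minimal sketch of that point (a toy hill-climber with an arbitrary target standing in for a “problem”; nothing here models a real brain or a full genetic algorithm): the trigger is pure stochastic noise, and it is the feedback – keep the variants that score better – that generates the “solution”.

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

def fitness(x):
    # Toy feedback signal: how close is x to the target 42?
    # The loop never "knows" the target; it only sees the score.
    return -abs(x - 42)

# Start from an arbitrary "idea"; mutate it with stochastic noise
# and keep whichever variant the feedback favours.
current = 0
for _ in range(2000):
    variant = current + random.choice([-1, 1])  # the trigger: pure noise
    if fitness(variant) >= fitness(current):    # the feedback: selection
        current = variant

print(current)  # 42 -- found by the feedback loop, not by the trigger
```

The noise supplies nothing but undirected variation; delete the `if` line (the feedback) and the loop is just a random walk that solves nothing.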
I’d have no problem with such a conception if it was borne out by data. In fact, I don’t even have a problem with the concept of a soul – I think Hofstadter’s version “natural” soul is an excellent concept. But we have far more evidence for such a soul being the emergent property of a system of forces we know about, rather than a separate new force.
But we could be wrong. Nagel thinks so 🙂
I don’t think a soul is necessary to account for ideas. See above.
Cheers
Lizzie
Well, I don’t, and you don’t, but it’s a usage I hear (usually from atheists who don’t want to use the word “soul” but can’t shed the intuition of the little man!)
Well, it’s a neat thought, Steve, but not borne out by research.
And to extend this thought: I’m not saying that we are powerless in the face of addiction, but that the competition between our immediate desires and our more distal goals is as well (actually better, I would argue) accounted for by means of mechanisms we already know about, than by invoking an additional force.
In other words, I agree that we can conceive of a Soul in natural terms – I just don’t think we need to invoke an additional, hitherto unknown, fundamental force.
A construction made by what? By a construction? And what made the construction?
Coherent as evaluated by a construction? How do we know the construction can be evaluated as coherent? Functional for what?
Are you sure it is what we do? Or are we just a more complex Dr. Nim?
Is water more complex than hydrogen and oxygen taken separately? Is it qualitatively different? Can you predict the properties of water from the properties of hydrogen and oxygen?
It strikes me that those who question the adequacy of materialism do not understand matter and present a billiard ball caricature.
billiard?
Fat fingers and tablets don’t mix well.
My point is that if your mental model of matter doesn’t match reality, perhaps your model needs adjusting. If your model of matter doesn’t support consciousness, your model of matter is simply wrong.
I suspect this will become increasingly obvious as AI takes over more and more intellectual work. I don’t know if we will reach a tipping point in my lifetime, but it’s on the way.
For brevity’s sake, “CFW” = Conscious Free Will
The question is, is the CFW referred to by materialists the same as the CFW referred to by theists? Using the same terminology is not the same as using the same concept.
Under materialism, all experience, even that of CFW, is manufactured by biological physics. There is no abstract thought, idea, will, consideration or sense of self and other that is not manufactured, in some way, by biological physics. Whether CFW is a strong or a weak emergent property, under materialism CFW is not “autonomous”, in the sense that it cannot do or experience anything other than what biological physics dictates.
Under materialism, CFW is not “autonomous” wrt “biological physics”. Given the same input and conditions of biological physics, the organism will experience, think, and decide the same thing every single time (ruling out any random influences).
It doesn’t matter if there is a self-referential feedback loop or not; it doesn’t matter if an organism “learns” or not; it doesn’t matter if experiences (biological states) of CFW are an essential and necessary part of the computational system (in terms of what the biological physics produces via cause and effect), rendering the organism dysfunctional if the “CFW experience” module is shut down (unconscious).
Biological physics cannot produce CFW as conceptualized by idealists, which is autonomous wrt biological physics. What Liz and others here argue is entirely a straw man constructed by using the same term for something entirely different at the conceptual level.
The concepts of responsibility, morality, choice, etc. under materialism are entirely different than what those same terms mean under theism. Under materialism, everything an individual is, does, thinks, decides and believes is a computation of biological physics. Nothing more. Nothing less. Even if it generates the experience of CFW, and the experience itself becomes a necessary part of the functionality of the organism, it is still a computation that is part of a computation. Nothing more or less.
Materialists would have us believe that because biological physics computes and produces experiences, then processes those experiences with other sensory input and computes decisions, the entity can be a moral agent. Without autonomous (wrt biological physics) operational command of the decision-making process, all materialists here are doing is obfuscating the point that under materialism, people cannot be anything other than biological automatons, regardless of how complex the programming is, regardless of whether it involves self-referential feedback loops, and regardless of whether biological physics produces the experience of CFW.
(I don’t believe anyone is a biological automaton; but that is the consequence of materialism, whether or not that biological automaton experiences what it considers to be CFW.)
For the sake of argument, William, let me accept your characterisation of [ETA my model of] people as “biological automatons”, as long as you allow me to retain my conviction that these biological automatons are both conscious (which we are) and intend things (which we do). In other words, they have CW, if not CFW.
In your view, why (i.e. in what way) would my CW biological automaton be any less autonomous than your being with CFW?
And what does that F allow your CFWs to do that my CWs can’t?
(Not trick questions – I would really like to know your response.)
It doesn’t matter if your entities are conscious or not, if that consciousness is generated by biological physics.
Your CW is not autonomous wrt biological physics.
What my CWs have is Freedom wrt material causation.
What do you mean, “it doesn’t matter”? Are you agreeing that consciousness can be generated by biological physics? Or that they are not really conscious if the consciousness is generated by physics?
OK, so are you saying that my CW is not autonomous because there are factors further back in the causal chain that determine the automaton’s decision?
If so, does “autonomous” for you mean “causeless cause”?
They are causeless causes?
It doesn’t matter, because the decision it reaches is necessarily a computation of biological physics, eliminating the F aspect of CFW. So what if I’m conscious of myself as I compute outputs? That might add another input to the computation, but it doesn’t change the fact that what is going on is a computation in the first place.
I’m saying that your CW module is not autonomous because at no time can it deviate from what the biological physics involved dictates, whether that is further back up the causal chain or contemporaneous as part of outputting function, mechanisms and input.
In the sense that the premise of causeless cause is necessary to avoid being nothing more, ultimately, than a computer, even if that computer has self-aware consciousness, experiences qualia and makes what it experiences as willful decisions.
Liz,
Do you agree that under your paradigm, all choices and decisions are arrived at via the computation of biological physics, and that given the same conditions and inputs (omitting any random variations in the biological physics), a person will always think the same thoughts, experience the same qualia, and reach the same decision if the decision-making process was replayed?
William apparently thinks that if consciousness has anything to do with physics it is therefore deterministic.
That is the typical ID/creationist argument; but it is based on their usual misunderstandings of physics and determinism.
What are we to think of 50 years of these kinds of repetition? Whose thoughts are constantly repeated over and over and over?
There is a different word for this; it’s called perseveration.
To quote the Reverend Lovejoy: “Short answer: Yes-with-an-if; long answer: No-with-a-but.” 🙂
Yes-with-an-if first:
Yes, IF, as you say, “the decision-making process was replayed”. But that is one HECK of an “if”. What would that actually mean? In an identical parallel universe? In what sense would an identical parallel universe BE a different universe to the one we are in? I’d say that phrase “if the decision-making process was replayed” is borrowed from the lab, in which we repeat experiments under similar, but not identical, conditions in order to establish what is signal and what is noise. We do not repeat experiments under identical conditions (although in computer simulations we can) because we already know the answer. If conditions are truly identical, then in what meaningful sense are we doing the same thing twice, as opposed to simply, literally, “replaying” what we did the first time? So, yes, if there is some alternative universe that will replay this one at some future, or past time, or even in some parallel time now, I will do the same things, think the same thoughts etc. But in that case, if my alternative self in that identical world is truly identical, it is simply a reflection of me, Lizzie, and no threat to my autonomy. It’s no more a threat to my autonomy than is my reflection in a mirror.
So here’s my no-with-a-but (which is shorter, as it flows directly from my yes-with-an-if):
No, but that is because there is no conceivable universe that would be identical in every respect (including the history of my own decision-making) and could be considered a “replay”, except in the sense that a film is literally “replayed”. And a video is no more a threat to my autonomy than is a mirror, much though I might wish I’d done something different.
After all, I’ve got loads of CDs of me out there that are replayed the whole time, some with phrases I wish I’d played slightly differently. On the other hand, I have never, in fact, ever played two phrases identically, because every time is very slightly different. It’s what makes concert-giving so rewarding. And I’d say that’s because I am autonomous in the only sense it seems to me it’s worth being autonomous, i.e. able to react to context appropriately, rather than a-causally. Playing with an a-causal musician is no fun 🙂
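The “replay” intuition can be made concrete with a toy sketch (the seed values, context list, and “decision process” are entirely made up for illustration): a deterministic process re-run with the identical seed and inputs is not a second decision but the same trace twice, while even a slightly different condition can send the trajectory elsewhere.

```python
import random

def decide(seed, context):
    # Toy "decision process": fully deterministic given its seed
    # (standing in for the total state) and its inputs.
    rng = random.Random(seed)
    return [rng.choice(context) for _ in range(5)]

context = ["coffee", "tea", "practice", "lab"]

first  = decide(seed=99, context=context)
replay = decide(seed=99, context=context)    # identical conditions
varied = decide(seed=100, context=context)   # conditions nudged slightly

print(first == replay)  # True: an identical replay is the same trace, not a new decision
print(first == varied)  # almost certainly False: a slightly different universe, a different trajectory
```

The sketch mirrors the argument above: the only way to get a guaranteed repetition is to copy the state exactly, at which point “replaying the decision” and “playing back a recording of it” are the same operation.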
William and his compatriots appeal to a physics that hasn’t been seen in a physics classroom in over a hundred years. I call it billiard ball determinism. It does not describe anything relevant to brains, learning, or evolution.
The process of learning from experience, of cut and try, trial and error, can be described without regard to consciousness.
Theists like to trade on the fact that we do not know exactly how self-awareness arises. Big deal. There are lots of things we don’t know.
We don’t know how to anticipate the properties of water from the properties of hydrogen and oxygen taken separately.
Without being able to anticipate that simple emergent property, how can one place arbitrary limits on the emergent properties of matter?
It’s just another form of placing dragons on unexplored regions of the map.
Liz,
Whether or not such a repetition of circumstances would or could ever occur is entirely irrelevant; your answer is that GIVEN the exact same state of material processes, THEN the exact same thoughts and decisions will be reached.
Which makes humans nothing more than biological automatons under materialism. Adding random variants that change the outcome doesn’t change this conclusion one bit.
Whether we can anticipate or predict outcomes or not is entirely irrelevant to the question of whether or not the properties of water are computed by the physics involved.
What do you mean, “computed”? The point is you can’t compute or anticipate emergent properties.
Just one of many ways in which physics and chemistry differ from computer programs. Reality (and material) is not comprised of deterministic billiard balls behaving in perfectly predictable ways.
So you gain nothing in your argument by false reductionism. You can’t argue from first principles what matter can and cannot do.
Where did you get the idea that the nervous system is deterministic if it is based on physics?
I don’t think it’s irrelevant at all. I think it’s absolutely crucial to the ontology of Lizzie. And it’s Lizzie we are talking about (well, we could be talking about anyone, but let’s take me as an exemplar) when we are asking whether she is autonomous. If there is an identical Lizzie in an identical universe doing identical things, then in what sense is that not plain old Lizzie? I don’t see that the possible reflection of my life in some alternative plane of existence alters the question of whether I am autonomous one jot. Or, rather, I put it to you that it does not. Whether or not a mirror reflection of me could exist is entirely irrelevant to the question as to whether I am autonomous. The more interesting question is whether a very slightly different reflection of me and my universe could exist, in which I behaved differently. To which I’d answer – sure.
Oddly enough I agree. Which is why adding causeless causes seems to me not to change the conclusion either!
What matters to me is that I make informed decisions, not that I make “free” or “causeless” ones. If that means I don’t have free will, by your definition, that’s fine. I don’t especially want too many causeless factors affecting my decision-making. I’d feel pushed around 😉
This statement right here does not make any sense to me, because the very nature of “biological physics” (to use William’s rather inaccurate term) is deviation every moment of every instant. The entire structure of biological activity as a system of study demonstrates that no two instances of the exact same inputs will yield the same outcome, simply because the emergence of the conceptual and physiological constructs is unique on a moment-by-moment basis. Like the difference between the properties of water and the properties of hydrogen and oxygen, there is no deterministic property between “biological physics” and behavior. Given that, I can’t grasp how William concludes what he does.
William, is this what you mean by your beliefs being a product of what is useful to you rather than being the product of actual evidence?
I strongly disagree. I think the question of anticipating and predicting outcomes is absolutely fundamental to any meaningful discussion of decision-making and freedom, for these reasons:
1. If a system is unpredictable, and can only be modelled by running the-thing-itself (i.e. if it is chaotic and non-linear), then whether or not it is deterministic or stochastic is irrelevant – the fact is that in both cases we don’t know what will happen
2. If our decision-making is chaotic and non-linear (and there’s every reason to think it is), whether it is deterministic or stochastic, it is still a system in which the inputs do not allow us to determine the outputs without going through that decision-making process. And so it is no more sensible (considerably less sensible) to say that the output was “determined” by the inputs, given the decision-making machinery, than to say that the decision-making machinery determined the outputs.
3. If our decision-making is chaotic and non-linear, and dependent on complex feedback loops, and involving dynamic reweighting of inputs, the seeking of further inputs, etc, as we know it to be, then we have, in a literal, mathematical, sense, a near infinite number of degrees of freedom, unlike, say, a thermostat that has two outputs, on and off.
This last is why I say we have “free will” – constrained, sure, but constrained usefully (by relevant information) and far free-er than that of plants, or bats, or moths that seem to have no choice but to fly straight into the candle-flame. Or poor Mr M.
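The unpredictability point in items 1 and 2 can be illustrated with a standard toy model (the logistic map — my choice of example, not one raised in the thread): two starting states differing by one part in a billion diverge completely after a few dozen iterations, so even a fully deterministic system can be one whose outcome you can only learn by running the-thing-itself.

```python
def logistic_map(x, r=3.9, steps=50):
    """Iterate the chaotic logistic map x -> r*x*(1-x) for a fixed number of steps."""
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = logistic_map(0.4)
b = logistic_map(0.400000001)  # initial states differ by one part in a billion
print(abs(a - b))  # after 50 steps the difference is typically of order one
```

Note the system is perfectly deterministic — the same start always yields the same end — yet the only practical way to get from input to output is to run the iteration itself.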
My main worry with “the ghost in the machine” conception is that it implies that if there is no “ghost,” then all we’re left with is “the machine”. What is necessary is to undermine the very logical coherence of the idea of the ghost in the machine. Fortunately, this has already been done in two formidable works of 20th-century philosophy: The Concept of Mind (Ryle, 1949) and The Phenomenology of Perception (Merleau-Ponty, 1945). Here’s Ryle:
Ryle then proceeds to explain what he means by a ‘category-mistake’, the idea of a ‘logical type’, and describes the logical grammar of mental-conduct terms, such as knowledge, will, emotion, sensation, self-knowledge, imagination, and intellect. And Merleau-Ponty describes in painstaking detail how to understand perception and movement as carried out by the body-subject or embodied cogito. In more recent years, there’s been a lot of neuroscientific research which explains how the embodied subject of Merleau-Ponty’s philosophical descriptions is causally implemented.
I would also recommend, for the curious, Teed Rockwell’s superb Neither Brain Nor Ghost: A Non-Dualistic Alternative to the Mind-Brain Identity Theory.
William wants to make a theological point rather than a scientific point.
I am curious how one reconciles the omniscience of the deity with free will. That’s always been a puzzle.
I am content to think the problem cannot (or has not) been formulated in a way that can lead to productive discussion.
It’s interesting that the problem has been confronted by the legal system, and while one can argue with the solution, the law provides different levels of accountability depending on whether a person is capable of acting freely. Coercion, mental deficiency and mental impairment are all taken into account.
Liz,
Whether or not we can compute the outcomes, and whether or not the process is non-linear or chaotic, and whether or not it is repeatable, and whether or not it can be computed only on the fly as the process is occurring, is completely, and totally, irrelevant to the point that the process is computed by biological physics. Every experience, thought, idea and outcome.
All of this word-wrangling is, IMO, nothing more than you (and others) trying to avoid that simple statement – that yes, under materialism, our thoughts, ideas, experience of qualia and choices are computed by biological physics and nothing more, and that given an identical run-up set of physical states and sequences “X”, Y will be the decision-outcome every single time – whether or not an identical run-up, in reality, would or could ever happen.
But physics doesn’t compute. It’s not little billiard balls.
William Murray thinks that “materialism” is committed to the 17th-century picture of “matter” (“little billiard-balls”). And of course he’s right in one important way: if one had to choose between the Hobbesian view and the Cartesian view, the latter is far more reasonable. But he doesn’t realize that this is a false dichotomy, both for empirical reasons and conceptual ones. Most everyone here is stronger on the empirical side than I am, so I’ll restrict myself to the conceptual points.
BTW,
I’m not arguing that it is a fact that we have libertarian free will; I’m not arguing that there is any discernible experiential difference between being a biological automaton (as described above) and having libertarian free will.
My argument is that under materialism, we would necessarily be biological automatons – beings that exist as the function of biological physics as it computes “what comes next”, even if that computation creates and includes experience of qualia, learning, self-referential feedback loops, and the experience of CFW.
But, that will, and that consciousness, would be the ongoing product (even if also looping input) of a computation. Given X at any point (as a comprehensive physical run-up), Y necessarily follows – every single time.
The “I”, then, including thoughts, ideas, sense of self, qualia and decisions, is nothing more than the necessary, ongoing result of the computational process. “I” am always, at every point, in every actually existent sense, a computational product of physics – under materialism.
I’m not claiming that we’re not biological automatons; I’m just arguing that we must be biological automatons, as described above, if materialism is true. We may be. But if so, that has other philosophical ramifications, most notably in the arena of morality.