“The Quale is the Difference”

Barry has graciously posted a counter-rebuttal at UD to my Zombie Fred post rebutting his own original zombie post at UD.  (This debate-at-a-distance procedure isn’t a bad way to proceed, actually!  Although as always, he is welcome to come over here in person if he would like.)

[image: red mouse mat]

Barry writes:

Over at TSZ Lizzie disagrees with me regarding my conclusions from the zombie thought experiment (see this post).  Very briefly, in the zombie post I summarized David Gelernter’s argument from the zombie thought experiment:

If a conscious person and a zombie behave exactly alike, consciousness does not confer a survival advantage on the conscious person. It follows that consciousness is invisible to natural selection, which selects for only those traits that provide a survival advantage. And from this it follows that consciousness cannot be accounted for as the product of natural selection.

Lizzie disagrees.  In her post she writes:

What is being startled if not being “conscious” of an alarming signal? What is trying to gain further information, if not a volitional act?  What is recognising that information is lacking if not a metacognitive examination of the state of self-knowledge?  What is anticipating another’s desires and needs, if not the ability to imagine “what it is like” to be that person?  What is wanting to please or help another person if not the capacity to recognise in another being, the kinds of needs (recharging? servicing?) that mandate your own actions?  In other words, what is consciousness, if not these very capacities?

Let’s answer Lizzie’s question using her first example (the reasoning applies to all of her others).  To be startled means to be agitated or disturbed suddenly.  I can be startled by an unexpected loud noise and jump out of my seat.  Zombie Fred would have the same reaction and jump right out of his chair too.  Our physical outward actions would be identical.  So what is the difference?  Simply this.  I as a conscious agent would have a subjective reaction to the experience of being startled.  I would experience a quale – the surprise of being startled.  Zombie Fred would not have a subjective reaction to the experience.

I submit that Barry has not addressed my questions at all.  He has simply repeated his assertion – that physically identical entities (Fred and Zombie Fred) would differ in some key way, namely that one would experience a quale, and the other would not. And of course, I disagree.  But let me unpack Barry’s assertion:

Let me first note that Barry refers to the “physical outward” actions of the two Freds.  I suggest that “outward” is at best unnecessary, and at worst, misleading.  In classic Philosophical Zombie thought experiments, the two Freds are physically identical, right down to the last ion channel in the last neuron.

This means that not only would Zombie Fred’s “outward” (i.e. apparent to someone meeting Zombie Fred at, say, a cocktail party) reactions be identical to Fred’s, but the cascade of biological events generating those reactions would also be identical.  Not only that, but the results of those reactions – for example, changing the direction of gaze, reaching out to touch something, grasp something, change the trajectory of an action, move to a new location – will bring in new, behaviourally relevant, information that ZF would otherwise not have gained.  This itself will impact on the results of further decisions that ZF makes, and therefore on Zombie Fred’s biological equipment, in just the same way as it would on Fred’s biological equipment.  In both cases, that equipment must enable both ZF and F to interrogate the state of their own knowledge, in order to base a decision on that knowledge. If it doesn’t in ZF’s case, ZF’s behaviour will differ from that of Fred’s.

And my question (or one of them) to Barry was: in what way does ZF’s interrogation of the state of ZF’s own knowledge differ from the meta-cognitive interrogation of our own state of knowledge that we call conscious awareness of our state of knowledge?

I will try to illustrate my point with the stupid red mouse mat illustrated at the top of this OP. My optical computer mouse reacts to a red mouse mat simply by stopping work (because it “can’t tell” whether it is moving when its red laser beam traverses a red surface).  If I put my mouse on a black mouse mat it starts work.  But I do not argue that my mouse experiences red when it meets a red mouse-mat, even though it reacts to it (by stopping work).  So I do not think my mouse experiences a quale.
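The mouse’s “stopping work” can be sketched as a purely mechanical failure of contrast, with no experience involved. This is a toy model, not a description of any real sensor firmware; the threshold, the surface model, and all function names are hypothetical:

```python
import numpy as np

CONTRAST_THRESHOLD = 0.05  # hypothetical: below this, no trackable features


def sensed_frame(surface):
    """Reflected intensity seen by a red-illuminated sensor.

    A very red surface reflects the red light almost uniformly,
    washing out the texture the correlator needs to track motion.
    """
    texture, redness = surface
    return redness + (1 - redness) * texture


def reports_motion(frame_a, frame_b):
    # If the frame has almost no contrast, the sensor "can't tell"
    # whether it is moving -- the mouse stops work.
    if frame_a.std() < CONTRAST_THRESHOLD:
        return False
    return not np.allclose(frame_a, frame_b)


rng = np.random.default_rng(0)
texture = rng.random(64)          # the mat's fine surface texture

red_mat = (texture, 0.95)         # red-on-red: texture washed out
black_mat = (texture, 0.05)       # dark surface: texture visible

a = sensed_frame(red_mat)
print(reports_motion(a, np.roll(a, 1)))   # False: no detectable features

b = sensed_frame(black_mat)
print(reports_motion(b, np.roll(b, 1)))   # True: motion detected
```

The point of the sketch is that the “reaction” is exhausted by a contrast check: there is nothing here that interrogates its own state, which is exactly the contrast being drawn with Fred.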

But I experience the quale of “red” when I see the mouse mat. Or, if you like, when I look at the picture of the red mouse mat at the top of the OP, I “experience redness”. So what do I mean by that?

I suggest that my experience of redness consists not merely of reacting to redness (my red receptors do this, but they are not me), as my mouse does (by stopping work), but also of knowing that the mouse-mat is red, and, moreover, knowing that I know that the mouse-mat is red, and being able to compare that state of knowledge with the state of not knowing that the mousemat was red, as would be the case if, for instance, I saw this picture:

[image: greyscale mouse mat]

which is a greyscale image of the same mousemat.

Aha, you say – but if I told you that the mousemat was red, you would have that knowledge – what you’d lack was the quale associated with a red mousemat.

Yes, indeed, I agree. But I suggest that that knowledge (gained from you telling me that the mousemat is red) is qualitatively different from knowing it is red by seeing that it is red in the following ways:

  • I know that my knowledge is contingent on my trust in your honesty, not my own perceptual apparatus
  • I know that I would not know that the mouse mat was red unless you told me
  • I know that if I saw a colour image of the red mousemat, as opposed to a greyscale image of it, I would know that it was red without you telling me.
  • I also know that if I really saw a red mousemat, I would quite like it, because I know I like red.
  • I also know that I would think it was a silly colour for a mouse mat, because the mouse probably won’t work on it, but maybe a red mouse could be cool.

And I suggest that all these pieces of knowledge are part of what constitutes my experience of directly seeing a red mousemat.  Moreover:

  • I also know that when I see red, and even, to some extent when I imagine red, and also to some extent, after you tell me the mousemat is red, I have an idea of what red is (or a red mouse mat would be) like, and that knowing what red is “like” is different from knowing that something is red.

And there’s the rub – what does that last thing mean?  Because now we are close to this ineffable “qualia” business.  And I suggest that the “quale” of red consists not only of all the explicit knowledge I listed first, it also includes implicit knowledge of what I feel like when I see red things, gained partly from my life experience, but partly, I suspect, bequeathed to me by evolution in the genes that constructed my infant brain.

And in the case of red, specifically, I suggest it is a slight elevation of the sympathetic nervous system, resulting from both learned and hard-wired links between things that are red, and which have in common both danger and excitement – fire and blood being the most primal, edible fruit probably as a co-evolutionary outcome, and fire engines, warning lights, stop lights, etc as learned associations.  And just as we can implicitly find ourselves feeling a touch of anxiety that we cannot pin down, triggered by a reminder of something we know we should have done, but can’t remember what, I suggest that the “quale” of red, and indeed of other colours, is, in addition to our explicit knowledge-about-knowledge, also implicit knowledge about our own internal responses, including idiosyncratic appetitive or aversive responses.

And my point is: all the mechanisms that generate that explicit and implicit knowledge in response to a red stimulus would have to be present in Zombie Fred for Zombie Fred to react to red as Fred does.  And, as a result, ZF would have just the same quale as Fred.  And we could test this: if we use a red stimulus in a priming experiment, does ZF show the same priming effects?  Does ZF show the same increased reaction time to a red stimulus as to a green one as Fred does?  Will ZF be more likely to react to a fire alarm test following a series of red stimuli than following a series of green ones?
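The proposed test amounts to comparing reaction-time distributions across prime conditions for the two Freds. A minimal sketch of that analysis, with entirely made-up numbers (no real data, and the variable names are my own):

```python
import statistics


def priming_effect(congruent_rts, incongruent_rts):
    """Mean slowdown (ms) on incongruent trials relative to congruent ones."""
    return statistics.mean(incongruent_rts) - statistics.mean(congruent_rts)


# Hypothetical reaction times (ms) following red vs. green primes
fred = {"congruent": [412, 398, 405, 420], "incongruent": [455, 447, 462, 450]}
zombie_fred = {"congruent": [410, 401, 407, 418], "incongruent": [453, 449, 460, 452]}

for name, rts in [("Fred", fred), ("Zombie Fred", zombie_fred)]:
    effect = priming_effect(rts["congruent"], rts["incongruent"])
    print(f"{name}: priming effect = {effect:.1f} ms")
```

If ZF’s priming effects match Fred’s across many such comparisons, then whatever machinery produces Fred’s effects is, on the argument above, also present and working in ZF.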

I suggest, in short, that a “quale” is a highly automated repertoire of possible action sets triggered by a certain stimulus property (classically colour), and that our knowledge that the mouse-mat is red when we see the full-colour picture boils down to the knowledge that it has activated in us a specific repertoire of action sets that we package together, for convenience, as “red”.

And that in order to behave exactly like Fred, those action sets must also be triggered in Zombie Fred by the same stimulus property.  If they aren’t, he will behave differently.  If they are, he will experience qualia, because that’s what qualia are.

Barry finishes his post:

I discussed a similar situation in this post in which I contrasted my experience of a beautiful sunset with that of a computer.  I wrote:

Consider a computer to which someone has attached a camera and a spectrometer (an instrument that measures the properties of light).  They point the camera at the western horizon and write a program that instructs the computer as follows:  “when light conditions are X print out this statement:  ‘Oh, what a beautiful sunset.’” Suppose I say “Oh, what a beautiful sunset” at the precise moment the computer is printing out the same statement according to the program.  Have the computer and I had the same experience of the sunset?  Obviously not.  The computer has had no “experience” of the sunset at all.  It has no concept of beauty.  It cannot experience qualia.  It is precisely this subjective experience of the sunset that cannot be accounted for on materialist principles.

I completely agree with Barry that the computer is not experiencing the sunset (just as I assume that he agrees that my mouse is not experiencing a red mouse mat).  But the computer is not behaving like Barry.  There is a tiny overlap in behaviour – both are outputting an English sentence that conveys the semantic information that the sunset is beautiful.  But to leap from “computer does not experience the sunset” to “this subjective experience of the sunset …cannot be accounted for on materialist principles” is a non-sequitur, because the computer does not behave like Barry.  And if we replace the computer by Zombie Barry, “Zombie Barry does not experience the sunset” is mere assertion – unlike the computer, Zombie Barry does behave exactly like Barry, not merely “outwardly” but with every molecule and ion of its being.  So why should we conclude that Zombie Barry has no qualia?

I suggest that Zombie Barry both has qualia and must have them, because those qualia are a necessary consequence of ZB’s interrogation of its own internal state, and if it can’t do that, it won’t be able to behave exactly like Barry.

120 thoughts on “The Quale is the Difference”

  1. But I experience the quale of “red” when I see the mouse mat.

    No, you don’t.

    Or, if you like, when I look at the picture of the red mouse mat at the top of the OP, I “experience redness”.

    Much better, and yes I expect you do experience redness.

    Here’s the difference. “Redness” names an objective property. The word “quale” comes from a misguided attempt to pseudo-objectivise what is subjective. That cannot be done. Words can only refer to objective entities, for otherwise there is no way that we could acquire the meaning of that word.

    The hard problem boils down to: provide a completely objective account of what is purely subjective.

    It should be obvious that it is a non-starter. We cannot give a purely objective account of anything.

    There’s a name for people who are able to give a completely objective account of their world(s). We call them “solipsists.”

  2. I completely agree with Barry that the computer is not experiencing the sunset (just I assume that he agrees that my mouse is not experiencing a red mouse mat).

    The computer does not experience the sunset. Barry does experience the sunset.

    The explanation is simple. The computer is an object, while Barry is a subject.

  3. There’s a famous incident of the artist who lost color vision due to a brain aneurysm. He lost not only the ability to see color, but also the ability to remember and understand the experience of color.

  4. I’m not sure what I can do besides repeat my objection from the last thread — even if consciousness itself makes no behavioral difference, it wouldn’t follow that consciousness can’t be selected for (or against). I say that because, even if consciousness were epiphenomenal, it could still be a spandrel — BruceS made this point earlier — and spandrels of course are selected for (or against) despite not being adaptive (or maladaptive) considered as traits in themselves.

    That said, I think it’s much more promising to deny that zombies are possible. By that, I mean that any embodied, living cognitive/affective system will necessarily have some degree of consciousness, and any system with some degree of consciousness will necessarily be an embodied, living cognitive/affective system. So consciousness and behavior cannot be disentangled as the zombie case makes them out to be.

    “But wait!” one might object, “I can conceive of zombies!” Well, perhaps one can conceive of them. I’m actually doubtful of this, but OK — philosophy is, among other things, a systematic exploration of our conceptual imagination. But one would need an argument to show that conceivability entails possibility in order to show that the conceivability of zombies entails their possibility.

    And surely we would not want to license “conceivability entails possibility” as a general principle (or would we?) since, if we license that principle, it would follow that we cannot conceive of impossibilities. And what happens then to our square circles? If we are conceiving of a square circle, and conceivability entails possibility, then square circles are possible after all. That seems a bit hard to swallow! So, if conceivability entails possibility, and square circles are impossible, then we are not really conceiving of them in the first place. That also seems a bit hard to accept — is “square circle” then a mere flatus vocis? Are we pretending to conceive of it when we really cannot? How can we tell?

    If it is unacceptable that square circles are possible, and also unacceptable that square circles are inconceivable, then we must reject the principle that conceivability entails possibility. And so the conceivability of zombies tells us nothing at all about whether zombies are possible. One can therefore maintain that there is a necessary relation between consciousness and behavior while accepting that zombies are conceivable.

  5. Well, I certainly think that there is a necessary relation between consciousness and behaviour.

    And to that extent, I see no reason in principle to think it wouldn’t have evolved.

  6. Kantian Naturalist: If it is unacceptable that square circles are possible, and also unacceptable that square circles are inconceivable, then we must reject the principle that conceivability entails possibility.

    Exactly. It is the concept of a quale that I am struggling with. I am sure if someone can explain what a quale is, I could have a go at conceptualizing it.

  7. Lizzie,

    And in the case of red, specifically, I suggest it is a slight elevation of the sympathetic nervous system…

    That can’t be the explanation, because it doesn’t account for our ability to experience red and blue in different parts of the visual field simultaneously, for example when looking at the Union Jack.

  8. keiths,

    Is this anything to do with “raw feels”? The colour information comes in as differential responses from our three sets of cones. We process this information presumably in the visual cortex to get broader colour information.

  9. keiths:
    Lizzie,

    That can’t be the explanation, because it doesn’t account for our ability to experience red and blue in different parts of the visual field simultaneously, for example when looking at the Union Jack.

    I don’t see why not, but I should have been clearer – I don’t mean you feel more excited so you know it’s red – I mean that the entire response set associated with red is what is activated by a red stimulus and bound to it.

    Originally I had a paragraph in there about binding, then I thought I’d leave it out, but the model I am envisaging is that colour is bound to a concept of the internal state induced by the colour, as it also is to the areas of the flag (in this case). So the gestalt is flag-with-red-bits, but the “red” part is bound both to the bits of the flag and internal state more generally associated with red.

    Neither are “present”, if you like – all we have are models: in this case, portions of a flag bound, on the one hand, to areas of the image and, on the other, to internal states more generally associated with “red”.

    In fact, if it’s an unfamiliar flag and it’s flapping in the breeze, you will have to mentally construct a paradigmatic rectangular flag on which you conjecture shapes in different colours.

  10. By that, I mean that any embodied, living cognitive/affective system will necessarily have some degree of consciousness, and any system with some degree of consciousness will necessarily be an embodied, living cognitive/affective system.

    How do you know that? I mean, I rather suspect that only a conscious entity will act exactly like a conscious entity, since consciousness is hardly likely to escape energy effects, but I don’t know how we could determine if a robot or an alien is conscious, and perhaps more importantly, how to know exactly what in that entity’s experience is “conscious” or not. We don’t even know with humans in many cases whether a motive or what-not is actually conscious, even though it inevitably affects consciousness at some point.

    Fine, if two things are identical, and one is conscious, so is the other–at least that’s the reasonable assumption (we assume other humans are because of similarity even without identity). If they are not identical, having very different origins, but have very similar outputs, how do we know if both are conscious just because one is?

    Glen Davidson

  11. Alan Fox:
    keiths,

    Is this anything to do with “raw feels”? The colour information comes in as differential responses from our three sets of cones. We process this information presumably in the visual cortex to get broader colour information.

    Yes. My case is that no feels are raw. They are all at least partially cooked, possibly even by evolution before we ourselves emerged into the world.

  12. I mean that the entire response set associated with red is what is activated by a red stimulus and bound to it.

    Well, why do you think so? What if I’m not even really aware of “red,” and it’s just passing by as I drive? What evidence would anyone have that the “entire response set” is thereby activated? Might response sets be unconsciously activated? And if there is an “entire response set,” how did it even become associated with “red”? Presumably most of any such “response set” would have been learned, so how would this affect, say, a baby’s sense of “red,” when it first senses it?

    I see red. I don’t think that it’s especially bound to any “response set” at all, other than what I think is likely to be more innate, the sense of it being “hotter” and “brighter” than most colors. But “blue” can be both “hot” and “bright” in a flame as well, which seems a more learned response. It’s contextual, in the end, what matters is what the “quale” means with respect to other “qualia,” or more importantly, with respect to other objects and subjects.

    Glen Davidson

  13. GlenDavidson: I don’t know how we could determine if a robot or an alien is conscious, and perhaps more importantly, to know exactly what in that entity’s experience is “conscious” or not. We don’t even know with humans in many cases whether a motive or what-not is actually conscious, even though it inevitably affects consciousness at some point.

    RDFish/aiguy (see this TSZ OP for a parallel question about what is intelligence) has convinced me there is no good definition for “consciousness” and until there is one, you can concentrate on the subunits (rather like biologists concentrating on research on living organisms without angsting about what the definition for “life” is – another analogy from aiguy) rather than worrying about defining consciousness.

  14. I pointed out several times on the previous thread that even if a zombie behaved identically to a conscious being, natural selection could distinguish between the zombie and the unzombie. Because they would be using different neural machinery to obtain the same behavior. And thus they would have different costs and different availability of genetic variation.

    If they both had the same behavior and the same nervous systems then they would also have the same consciousness or lack of it. I don’t recall anyone refuting that argument.

  15. I think myself that consciousness is a portmanteau of many “subunits”, and it’s more coherent to ask what someone (or something!) is conscious of than whether they are conscious.

    In other words, it makes more sense as a transitive verb “conching” than as an adjective.

    Some things don’t conch much, others conch a lot of things, but there’s a capacity limit to how much we can conch at a time. So we have this neat “fridge light” system, by which we can serially conch stuff as and when needed, but because we can conch whatever we want when we need it, we aren’t aware of not conching stuff, because as soon as we conch not conching it, it’s conched!

  16. I have a friend who is a commercial pilot, despite being red-green colour-blind. I remember him once trying to convey how he saw ripe fruit that I would see as red as a particular nondescript colour (he may have described it as grey) that he had learned to equate with “red”.

    How the hell someone colour-blind is allowed to pilot 747s also came up but is perhaps OT!

  17. Alan Fox: Exactly. It is the concept of a quale that I am struggling with. I am sure if someone can explain what a quale is, I could have a go at conceptualizing it.

    Well, I know a great deal about C. I. Lewis, who coined the term — but I cannot guarantee that contemporary use of the term has much to do with his use of it.

    Basically, Lewis introduces the term “qualia” to mean “the introspectively accessible non-conceptual component of perception”. It must be introspectively available — something that can be brought into view from within the first-person standpoint — and it must be non-conceptual — something that isn’t exhaustively describable in language. In Lewis’s account, qualia are introduced in order to make explicit the difference between perception and thought — qualia are present in the former and absent in the latter.

    Lewis initially just focuses on perception. There is a subtle slide towards broadening qualia-talk to include memory and imagination, but in my view, this is a slide towards utter disaster*, because Lewis introduces qualia in order to highlight what is phenomenologically distinctive about perception, and once we broaden that to include memory and imagination, the phenomenology of perception gets effaced.

    * and by “disaster,” I mean a backslide away from the insights of Kant, Hegel, and Dewey towards a Cartesian/Humean conception of the mind.

  18. RDFish/aiguy (see this TSZ OP for a parallel question about what is intelligence) has convinced me there is no good definition for “consciousness” and until there is one, you can concentrate on the subunits (rather like biologists concentrating on research on living organisms without angsting about what the definition for “life” is – another analogy from aiguy) rather than worrying about defining consciousness.

    What are the “subunits”?

    More importantly, there is no good definition for “red” or any basic qualitative experience, so I’m not sure what lacking a good definition is supposed to mean. Most strings of definition end at some illustration (even if a word illustration–looks like…) or sets of illustrations. Consciousness certainly has meaning, however, in that we may be conscious or unconscious, and motives can be unconscious, conscious, subconscious, or some mix of these.

    Glen Davidson

    There is a subtle slide towards broadening qualia-talk to include memory and imagination, but in my view, this is a slide towards utter disaster*, because Lewis introduces qualia in order to highlight what is phenomenologically distinctive about perception, and once we broaden that to include memory and imagination, the phenomenology of perception gets effaced.

    Yes, I think that “quale” is a useful enough term, but runs into problems almost immediately, as one wonders if “pitch” (what about a half-tone lower, another quale?) is a quale, if intention is a quale, if anger is a quale, etc. It seems to work well enough for colors when considered as ideals, but pretty soon becomes a small part of the whole spectrum of consciousness.

    Glen Davidson

  20. I would guess it’s acquired rather than innate 🙂

    Kantian Naturalist: Lewis initially just focuses on perception. There is a subtle slide towards broadening qualia-talk to include memory and imagination, but in my view, this is a slide towards utter disaster*, because Lewis introduces qualia in order to highlight what is phenomenologically distinctive about perception, and once we broaden that to include memory and imagination, the phenomenology of perception gets effaced.

    The problem, then, is that I’m not sure that it makes sense to separate perception from memory and imagination. The more we understand about the neuroscience of perception, the more like memory and imagination it becomes (and the more memory and imagination resemble each other). I’d say that all can be conceptualised as “model making”, and use shared neural substrates. Of course there are differences, but that has to do, I suggest, more with tagging the provenance of the data on which we are constructing our “model” than with the mechanisms of model construction.

    Imagining, seeing and remembering something certainly use highly overlapping neural networks.

  21. GlenDavidson: What are the “subunits”?

    Like the analogy with “life” and “biology”, I think that the approach to understanding consciousness will come from insights derived from studying aspects of how the brain works from biochemistry to behavioural psychology and all points in between and beyond.

    More importantly, there is no good definition for “red” or any basic qualitative experience, so I’m not sure what lacking a good definition is supposed to mean. Most strings of definition end at some illustration (even if a word illustration–looks like…) or sets of illustrations. Consciousness certainly has meaning, however, in that we may be conscious or unconscious, and motives can be unconscious, conscious, subconscious, or some mix of these.

    OK but you’re talking of everyday usage. Reification is a trap for the unwary!

  22. Joe Felsenstein: If they both had the same behavior and the same nervous systems then they would also have the same consciousness or lack of it. I don’t recall anyone refuting that argument.

    Nobody can refute “that argument” because it is not an argument. It is an assertion, presumably coming from materialist ideology. It might well be correct, but I do not find it convincing.

    For that matter, I’m not convinced that “the same consciousness” is anything more than word salad.

    Lizzie: The problem, then, is that I’m not sure that it makes sense to separate perception from memory and imagination. The more we understand about the neuroscience of perception, the more like memory and imagination it becomes (and the more memory and imagination resemble each other). I’d say that all can be conceptualised as “model making”, and use shared neural substrates. Of course there are differences, but that has to do, I suggest, more with tagging the provenance of the data on which we are constructing our “model” than with the mechanisms of model construction.

    Could you provide some articles that examine the overlap in neural networks that function as the correlates of perception, memory, and imagination? That would help me a good deal in my current research!

    The reason why I want to treat perception as distinct from memory and imagination is this: naively considered (i.e. in purely philosophical terms, without worrying about the neurology), perception is openness to the world. When I perceive that some x is a y, where x and y refer to sensible features of the world (either objects or properties of objects), then if all is going well, I am taking in the fact that x is a y.

    The neurology explains how I am taking in this fact, but we’d be off on the slippery slope to dualism/idealism/solipsism if we interpreted the causal mechanisms as epistemic barriers between worldly objects and my consciousness of them.

    (It is precisely because Kant failed to make this distinction that he took the fact of mediation between objects and consciousness to be a plank in his argument for transcendental idealism, and making this distinction is essential to reconciling Kantian insights about concepts and judgments with naturalism.)

    So I hope that, however the overlap in underlying neurology turns out, there’s just enough wiggle room in the model to show how the world exercises causal constraint on perception.

  24. Lizzie: The problem then, is that I’m not sure that it makes sense to separate perception from memory and imagination. The more we understand about the neuroscience of perception, the more like memory and imagination it becomes (and the more memory and imagination resemble each other).

    I am inclined to read that as an indictment of neuroscience.

    I’d say that all can be conceptualised as “model making”, and use shared neural substrates.

    I’d accept that as probably correct for imagination and memory, but as wrong for perception.

  25. Hi, KN

    Here’s an example of the sort of study I was thinking of:

    Where Bottom-up Meets Top-down: Neuronal Interactions during Perception and Imagery

    Memory and imagination are clearly closely related – when we “call something to mind” we are essentially “imagining” ourselves seeing it. But what is also clear is that actually seeing something, and imagining it, use common substrates. In this study, the subjects had to imagine, or were presented with images of, faces and chairs. This is neat, because faces and non-face objects involve fairly well defined brain regions in the fusiform gyrus (on the undersurface of the brain). So you can show that the face region was active whether faces were presented or imagined/remembered. And then there are differences as well.

    And that’s where I suggest that “tagging” the provenance is important – and getting it wrong leads to hallucinations. Generally speaking we can tell the difference between imagining or remembering a voice or face, and actually hearing or seeing it. But in schizophrenia, this ability to distinguish between perception and self-generated imagery seems to be broken. We have some good clues as to what might be going wrong, but it clearly isn’t that people with schizophrenia use some different area for imagining things than for seeing or hearing, compared with other people. We all use the same areas, essentially, for both. However, most of us know the difference, probably, as I said, because of fairly subtle neural circuitry for detecting whether the provenance of the percept is external or internal.

    And even non-psychotic people can be induced to see or hear things that aren’t there fairly easily by means of fairly simple but ingenious manipulations of stimuli, which is why, of course, optical illusions are so widely experienced.

  26. Neil Rickert: I am inclined to read that as an indictment of neuroscience.

    Well, I think that would be a mistake. Understanding perception in terms of modelling has been one of the big breakthroughs I’d say in cognitive psychology and neuroscience, and accounts magnificently for many otherwise puzzling illusory percepts.

    Please don’t mistake the insight that they involve greatly overlapping mechanisms for “are the same thing”. The really interesting issue (especially to me, as someone interested in understanding distortions of perception in mental illness) is how we do know what is memory, what is imagination and what is perception, and why these things might get garbled, as we see, for example, in “false memory syndrome” and “implanted memories” (imagination confused with memory) and illusions and hallucinations (imagination or memory confused with perception). There are also the limitations on visual and auditory working memory capacity (how are percepts retained in memory?).

    I’d accept that as probably correct for imagination and memory, but as wrong for perception.

    Well, the processes certainly overlap – as they must, if percepts are ever going to be remembered! And we even know quite a lot about the processes. For long-term memory, the famous mantra (attributed to Hebb) is “what fires together, wires together” – so the neural cascades initiated by a sensory stimulus, and producing a percept, are more likely to be re-initiated by a similar stimulus in future, “reminding” us of the first. In long-term memory this results from an increase in synaptic strength, probably through the actual expression of new proteins; in short-term memory, it is probably the result of the cascade continuing to cycle, maintained by excitatory-inhibitory feedback loops that produce the oscillations that we can pick up using MEG and EEG.
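    (As an aside, the Hebbian rule just described can be sketched as a toy weight update. This is purely illustrative – every name and number below is invented for the purpose, and real synaptic plasticity is of course vastly more complicated:)

```python
import numpy as np

# Toy "network": 5 neurons, all-to-all weights starting at zero.
n = 5
weights = np.zeros((n, n))
learning_rate = 0.1

def present(stimulus, weights):
    """One exposure: neurons co-active in `stimulus` strengthen their
    mutual links ("what fires together, wires together")."""
    activity = np.asarray(stimulus, dtype=float)
    # Outer product: the weight between i and j grows only when both fire.
    weights += learning_rate * np.outer(activity, activity)
    np.fill_diagonal(weights, 0.0)  # no self-connections
    return weights

def recall(cue, weights):
    """A partial cue re-initiates the cascade via the strengthened weights."""
    return (weights @ np.asarray(cue, dtype=float) > 0.05).astype(int)

# Repeatedly "perceive" a pattern in which neurons 0, 1 and 2 fire together.
pattern = [1, 1, 1, 0, 0]
for _ in range(5):
    weights = present(pattern, weights)

# A partial cue (neuron 0 alone) now "reminds" the network of the rest:
print(recall([1, 0, 0, 0, 0], weights))  # -> [0 1 1 0 0]
```

    The point of the sketch is only the last line: a stimulus that resembles part of an earlier percept re-activates the rest of the cascade, which is what “reminding” amounts to on this account.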

  27. From the abstract:

    Sensory representations of faces and objects are mediated by bottom-up mechanisms arising in early visual areas and top-down mechanisms arising in prefrontal cortex, during perception and imagery respectively.

    Bottom-up mechanisms arising in early visual areas as mediating sensory representations during perception is causal constraint enough for my purposes: to explain how “perception as openness to the world” is causally implemented.

  28. Alan Fox:
    I have a friend who is a commercial pilot, despite being red-green colour-blind. I remember him once trying to convey how he saw ripe fruit that I would see as red as a particular nondescript colour (he may have described it as grey) that he had learned to equate with “red”.

    Here is a paper about how it may be possible to generate inverted qualia by what the author claims is a physically occurring mechanism. Basically, the pigments in red and green cones are reversed.
    Pseudo normal vision
    As the author explains, this is not a problem for physicalism (since there is a physical difference) but it is a problem for a functionalist who claims that only the logical relation between brain states determines qualia, regardless of the physical realization.

    The discussion between you and your friend is similar to the spectrum inversion discussion, except that the philosopher wants to be convinced why the same situation could not exist in two people who were physically normal and behaved the same way, e.g. said the same words, when looking at the same colors.

  29. And even non-psychotic people can be induced to see or hear things that aren’t there fairly easily by means of fairly simple but ingenious manipulations of stimuli, which is why, of course, optical illusions are so widely experienced.

    Optical illusions suggest that there’s no hard and fast distinction between internal and external origination, however. “Seeing” the triangle there seems to be due to the brain adding in “lines” where none actually appear to the eyes. Science, as I recall, once showed a picture of a leaf passing behind another object (probably a leaf), and you see the outline of the leaf nonetheless where the leaf isn’t actually visible. In cases such as these, things appear with no “tagging” of internal origination, but are seen as being external, at least until closer examination shows otherwise.

    Psychedelic drugs show the problem as well, with a person often having to figure out the odds of something “being real” or not. Details of the “fakes” are usually low, but then, they’re also tending to appear at a distance (or in dark areas, something obscuring much detail, if it exists), at least if you really do wonder if it’s “real.” Dreams are another case where the fake very much seems real to people who are very sane (otherwise), but that seems to have a lot to do with areas of logic not being very active, so poorly (if at all) evaluating the non-reality of what’s “happening.”

    I think the big difference between dreams and imagination is that the latter seems a lot less “vivid,” and, even more importantly, one can just open one’s eyes to see that it’s not part of what’s actually “happening.” I never thought it was at all surprising that the same substrates are used for imagination and for perception–how could evolution “expense” separate capacities for each? I can’t see how anything is truly “tagged” as internal or external, though, except as a consequence of the overall context of the situation.

    Glen Davidson

  30. Lizzie [from OP and subsequent comment]:: I suggest that Zombie Barry both has, and must, because those qualia are a necessary consequence of ZB’s interrogation of its own internal state, and if it can’t do that, it won’t be able to behave exactly like Barry.
    […]
    The really interesting issue (especially to me, as someone interested in understanding distortions of perception in mental illness) is how we do know what is memory, what is imagination and what is perception.

    I’m not sure how to interpret your reference to “ZB’s interrogation of its own internal state” except as a different internal state which somehow leads to consciousness and (optionally?) qualia. Is that what you meant? If so, is there something about those internal states which makes them necessary and sufficient for consciousness? Or for qualia, if there can be consciousness without qualia.

    I think a similar question would apply to your second point. Is there a research direction to describe the brain (or organism) states in a way which answers the questions you pose? Or is finding a characteristic of the organism state which differentiates the cases you list not the right way of looking at the issue?

    Special request: no use of pronouns allowed in answer (since for me at least in these types of explanation personal pronouns seem to be relying on the Cartesian theater).

  31. Lizzie: The problem then, is that I’m not sure that it makes sense to separate perception from memory and imagination. The more we understand about the neuroscience of perception, the more like memory and imagination it becomes (and the more memory and imagination resemble each other). I’d say that all can be conceptualised as “model making”, and use shared neural substrates. Of course there are differences, but that has more to do, I suggest, with tagging the provenance of the data on which we are constructing our “model” than with the mechanisms of model construction.

    Much of the sense of qualia very likely comes from the high degree of parallelism in the neural system. An input of “red” to a specific part of the nervous system quickly becomes the input of many parts of the nervous system that are distinct in their responses unless further training of the nervous system connects them.

    In high speed optical sensing systems, the paths to generating system responses are limited and highly refined to make the most efficient use of incoming information. Any information that proceeds along parallel paths is either lost or is used for other purposes that don’t speed up the response of the system.

    Thus, sending input off to other parts of the system to be “mulled over” and processed for further refinement of the system response takes time; and it does nothing unless the processed data are fed into the response circuitry.

    But in the nervous systems of living organisms, the input to any sensory system is shared among huge hierarchies of memory and previously linked hierarchies; and some of those linkages produce sensations we identify as emotions. Whether those emotions are “enjoyed” or “feared” makes little difference to a systematic response to a sensory input until further linkages are provided by training.

  32. BruceS: As the author explains, this is not a problem for physicalism (since there is a physical difference) but it is a problem for a functionalist who claims that only the logical relation between brain states determines qualia, regardless of the physical realization.

    Thanks for the link. I had a look at the paper and I think I largely agree with this:

    We are thus led to the following diagnosis: The explanatory gap, as far as the explanation of specific phenomenal sensations are concerned, basically rests on the partial epistemological independence between phenomenal structure on the one hand and concrete phenomenal qualities on the other. So maybe the proponent of the explanatory gap thesis, instead of insisting on the Qualia Inversion Hypothesis should rather concentrate on this specific independence claim and its philosophical consequences.

    I suppose what I am getting at is that I don’t see the usefulness of the quale as a category, or what explanatory value it adds, in attempting to understand aspects of how the brain works.

  33. Lizzie: Well, I think that would be a mistake. Understanding perception in terms of modelling has been one of the big breakthroughs I’d say in cognitive psychology and neuroscience, and accounts magnificently for many otherwise puzzling illusory percepts.

    I see that as a big mistake. I see perception as involving interpolation, but not as modeling in any stronger sense.

    Well, the processes certainly overlap – as they must, if percepts are ever going to be remembered!

    That’s not how I look at the overlap.

    As I see it, we perceive our inner models. The perceptual judgment that we developed, so as to be finely tuned to reality, is thus also used to evaluate our internal models. Without that, internal models would not be nearly as useful. I see thought as mainly a method for accessing the knowledge that is implicitly located in our perceptual capabilities.

    We are normally quite clearly aware of whether we are perceiving reality or are perceiving our internal models.

  34. BruceS: Special request: no use of pronouns allowed in answer (since for me at least in these types of explanation personal pronouns seem to be relying on the Cartesian theater).

    Absolutely valid request!

    BruceS: I’m not sure how to interpret your reference to “ZB’s interrogation of its own internal state” except as a different internal state which somehow leads to consciousness and (optionally?) qualia. Is that what you meant? If so, is there something about those internal states which makes them necessary and sufficient for consicousness? Or for qualia, if there can be consciousness without qualia.

    And I have said that it is the interrogation of the internal state, rather than having an internal state per se, that lies at the root of consciousness. However, not all interrogations need result in what we would want to call consciousness. It can’t be just, for example, “Have I [the system of which the originator of this question is a part] been switched on for 10 minutes? If yes, activate power-saving schedule.” It needs to be an open and re-entrant loop (a Strange Loop, in fact) in which the answer to the question results in a change of state plus new information, which in turn requires interrogation in order to reach a decision, which may be rescinded if it didn’t bring about a specific goal, which itself can be set and reset depending on the state of the system!
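    (The contrast between the two kinds of self-interrogation described above can be caricatured in a few lines. This is a toy sketch only – every name and number is invented here, and nothing about real robots or brains is implied:)

```python
def closed_query(minutes_on):
    """One-shot self-interrogation: ask once, act, done. No loop back."""
    return "power-save" if minutes_on >= 10 else "stay-awake"

def reentrant_loop(readings, goal):
    """Each interrogation of the system's state changes that state and
    brings in new information, which itself demands interrogation; even
    the goal can be rescinded and reset from inside the loop."""
    state = {"estimate": 0.0, "goal": goal}
    history = []
    for reading in readings:
        # Interrogate own state against the new datum...
        error = state["goal"] - state["estimate"]
        # ...the answer changes the state...
        state["estimate"] += 0.5 * error + 0.1 * reading
        # ...and a goal that has been reached is replaced by a more distal one.
        if abs(state["goal"] - state["estimate"]) < 0.1:
            state["goal"] *= 2
        history.append((round(state["estimate"], 3), state["goal"]))
    return history

print(closed_query(12))                      # -> power-save
print(reentrant_loop([1, -1, 1, -1], 1.0))   # state and goal co-evolve
```

    The first function answers its question and stops; the second never settles, because each answer changes what the next question is about – which is the loop structure being pointed to.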

    Some of the circuitry is present in Asimo – but Asimo doesn’t reset his own distal goals at any rate. I don’t think there’s “anything it is like to be Asimo” yet, any more (or less) than there is “anything it is like to be a moth”. But it’s going the right way IMO.

    I think a similar question would apply to your second point. Is there a research direction to describe the brain (or organism) states in a way which answers the the questions you pose. Or is finding a characteristic of the organism state which differentiates the cases you list that not the right way of looking at the issue.

    I think it’s the right approach, if I’m understanding you correctly. Please don’t think I think we are anywhere near producing artificial robots that would be anything we would want to call conscious. I’m not sure we ever will be, unless we figure out how to get them to evolve. But I do think we already have a pretty good handle on how people become conscious of stuff, including their own intentions and their state of uncertainty. This is really important clinically.

    And I think the progress has been made by breaking down the question “is this person conscious and if so how does she do it?” into “what is this person conscious of now, and what will determine what she is conscious of next, and what processes control what determines what she is conscious of next” and so ad infinitum. I do think Hofstadter (and also Edelman and Tononi) have got it basically right – re-entrant loops that have enough degrees of freedom that every iteration of the loop, from the first simplest one, brings in more data, and that data influences what actions the organism takes, and therefore what new data comes in.

  35. Lizzie,

    I was simply objecting to this:

    And in the case of red, specifically, I suggest it [the quale] is a slight elevation of the sympathetic nervous system, resulting from both learned and hard-wired links between things that are red, and which have in common both danger and excitement – fire and blood being the most primary, edible fruit probably as a co-evolutionary outcome, and fire engines, warning lights, stop lights, etc as learned associations.

    This can’t be correct, because it would require the sympathetic nervous system to be slightly elevated in order for us to see red, and slightly depressed in order for us to see blue. We wouldn’t be able to see red and blue at the same time, as when viewing the Union Jack.

    The neural correlates of the ‘red’ and ‘blue’ quales must be granular and distributed across the visual field. Sympathetic nervous system activation doesn’t have the required spatial distribution.

  36. Barry Arrington at UD responds to some UD comments here:

    It is reasonable to say that consciousness confers survival advantage over unconsciousness

    This is an assertion. It is not an argument. If an organism without consciousness behaves in a way that is exactly the same as an organism that is conscious, it is not reasonable to say that consciousness confers a survival advantage. Merely asserting the contrary (which is all LP does) does not establish the contrary.

    And by extension, thought-experiment philosophical zombies who behave in exactly the same way as the real entities they are supposed to impersonate will be indistinguishable from them. Seems that, following Barry, Barry is asserting, not arguing.

  37. keiths:
    Lizzie,

    I was simply objecting to this:

    This can’t be correct, because it would require the sympathetic nervous system to be slightly elevated in order for us to see red, and slightly depressed in order for us to see blue. We wouldn’t be able to see red and blue at the same time, as when viewing the Union Jack.

    The neural correlates of the ‘red’ and ‘blue’ quales must be granular and distributed across the visual field. Sympathetic nervous system activation doesn’t have the required spatial distribution.

    No, it doesn’t. I’m not actually saying it does. I’m actually trying to say that “red” is bound to an internal model of what promotes elevated sympathetic tone – as well as a great many other models, including ripe fruit, warm hands, English soldiers etc.

  38. Very tangentially related:

    Each Thing We Know Is Changed Because We Know It

    A eucalyptus has its implications
    where I come from: it means the autumn winds
    return each year like brushfires from the desert,
    return as dry reminders of the oaks
    whose place this was: the valley oak, blue oak,
    and the oracle which thrive on little water.
    And eucalyptus means the orange groves
    once flourished here, that rivers were diverted,
    that winter was denied and smoke hung low
    over the valley the night of the first frost–
    smudgepots warming and darkening the sky.
    And eucalyptus means that people lived here;
    the flesh tones of the mottled trunk, the bark
    in strips that dry and curl and fall to the ground
    mean that my mother’s family would walk,
    through walnut orchards and orange groves, to church–
    Saturday nights, three women, three generations,
    bound for the revival meeting, each collecting
    more friends as she went further south through town.
    And eucalyptus leaves, silver, scythe-like,
    shaken down by the wind, the tree still tall,
    scented the questions that my mother asked
    when her senile grandmother would get out
    of the house, walk south through the orchards, south
    right through downtown, straight to the water tower.
    Dark letters spilled across it six foot tall–
    SANTA ANA SANTA ANA, high
    and circular, the water named the town.
    Great-grandmother would stand beneath it–no words
    she could recall to call herself–just waiting,
    for what she didn’t know, and trying on names.

    Kevin Hearle 1989

    I dunno, it seems to speak to Lizzie’s idea that consciousness is not a thing, it’s a verb “conching” …

  39. Neil Rickert: Nobody can refute “that argument” because it is not an argument. It is an assertion, presumably coming from materialist ideology. It might well be correct, but I do not find it convincing.

    For that matter, I’m not convinced that “the same consciousness” is anything more than word salad.

    I am not taking a position on what consciousness really is. It is something Barry thinks we have, but that an identically-behaving zombie would not have. Then he argues that natural selection could not select for consciousness, if its presence did not alter the behavior.

    I (and it seems KN as well) am just arguing that NS can differentiate between these two types of organisms. Our common assumption is that if “consciousness” is some real state of our minds, it is somehow reflected in the state of the neural machinery. Thus its presence or absence is correlated with different neural machinery (neurotransmitter levels, nerve connections, etc.)

    I was hoping that we would all be in agreement on that.

    If consciousness is nonexistent, then the whole issue is moot. If consciousness exists, then I think our argument has merit.

  40. Right. I was just pointing out that there are lots of ways in which consciousness could be an indirect target of selection even if it makes no difference to overt behavior. But my suspicion, rather, is that anything which can behave as a complex living animal behaves will necessarily be conscious to some degree or other.

  41. Barry responds

    Over at TSZ Lizzie insists that Zombie Fred experiences qualia even though he is not conscious. Here:
    Lizzie bases this assertion on . . . . well, exactly nothing. She merely asserts it without the slightest support. The zombie behaves the same way as the conscious person. Therefore, according to Lizzie, he simply must experience subjective qualia.

    No, I asserted no such thing. Clearly an unconscious Zombie Fred will not experience anything, never mind qualia. My contention is that if Zombie Fred behaves exactly like Fred, then Zombie Fred must be conscious (and experience qualia). And I presented an argument for this, which you have not addressed (not surprisingly, as you have totally misunderstood my contention).

    Here is a summary of our exchange.
    Barry: If an unconscious zombie and a conscious person behave exactly the same way, then consciousness cannot be selected for by natural selection.
    Lizzie: If a zombie behaves the same way as a conscious person, then there is no difference between the zombie and the conscious person.

    Well, that is my conclusion, Barry. You’ve missed out my argument. And my point wasn’t that “there is no difference” – the zombie might be very different – silicon based, for instance. But they would be alike in experiencing qualia. But again, that is my conclusion, and you’ve missed my argument. And I’m not assuming, as you seem to be, that ZF is unconscious. My claim is that if ZF behaves exactly like Fred then ZF cannot be unconscious.

    Barry: Of course there is a difference. For example, the conscious person experiences qualia.

    Well, that is the question at issue. It is merely your assertion that the zombie doesn’t. My contention is that a “zombie” who behaved exactly like Fred must be as conscious as Fred. And the argument in my OP supports this, I believe. If you disagree, please say where my argument breaks down.

    Lizzie: If the zombie behaves the same as the conscious person, then it must mean that he experiences qualia too.

    Yes, that is the conclusion from my chain of reasoning. Please tell me what is wrong with my chain of reasoning.

    At bottom, Lizzie’s strategy to prop up her materialist world is simply to deny that there is any difference between a conscious person and an unconscious zombie who behaves the same way.

    Well, for a start, this is factually wrong. It was when I realised that a “zombie” was nonsensical that I ceased to posit a non-material “conscious” added extra. In other words my “materialist world” was a conclusion, not a prior. But of course I think there is a difference between a conscious person and an unconscious zombie. That, however, is not the question at issue; the question at issue is: is there a difference between a conscious person and a biological robot who behaves exactly the same way? My argument is that if the robot behaves in exactly the same way then that robot must be conscious, because consciousness, including the experience of qualia, is required for that kind of behaviour. You, Barry, seem to be assuming your conclusion: that the robot must be unconscious, because it is a robot, even if it behaves like a conscious person.

    Thus, our exchange boils down to this:
    Barry: There is a difference between a conscious person and a zombie who behaves the same way outwardly.
    Lizzie: No there isn’t. An unconscious zombie who behaves the same way as a conscious person would be conscious.

    I said no such thing, which would clearly be ludicrous. Indeed I questioned the very word “outwardly” and said that apparent behaviour that was indistinguishable from that of a conscious person could only result from inner mechanisms that involved the very processes that result in consciousness. My argument in fact is the very opposite of the one you put in my mouth: An unconscious zombie (or person) would be unable to behave the same way as a conscious person, because consciousness, I contend, is necessary for the kind of behaviour conscious people exhibit.

    We are at an impasse, because there is no possible response to a non sequitur other than to point out that it is a non sequitur.

    Well, one response would be to read what I actually wrote, instead of putting into my mouth arguments that I did not make.

    Another response would be to clarify your own claim:

    In your view, Barry, could a physically identical being to a conscious person, in principle, not be conscious?

    If you could answer this question, I’d be clearer about what you are claiming when you seem to say that conscious people have qualia and a biological robot doesn’t.

  42. Alan Fox:

    I suppose what I am getting at is I don’t see the usefulness of quale as a category or in adding explanatory value in attempting to understand aspects of how the brain works.

    Look at it the other way. Maybe the existence of qualia is a fatal problem for trying to understand mental events by understanding only brain events. And if we cannot understand mental events by understanding only brain events, then that proves a limit to science and physicalism.

    Or at least that is what I understand philosophers are trying to prove when they propose thought experiments like zombies and qualia inversion.

  43. BruceS: Maybe the existence of qualia is a fatal problem for trying to understand mental events by understanding only brain events. And if we cannot understand mental events by understanding only brain events, then that proves a limit to science and physicalism.

    I’m not suggesting research into cognition and consciousness should be limited to studying the brain. I’m sure much can be learnt from observing the whole organism. Perhaps it is a semantic issue. The idea of qualia may be useful as a descriptor in hypothesizing about aspects of cognition. I am yet to be convinced, as it seems to me that a category which merely encompasses already well-defined sub-categories is somewhat superfluous.

    Of course human research into human cognition is limited by the level of collective human cognitive ability. I don’t assume that all that lies within our universe is necessarily amenable to scientific human endeavour, but history has shown that predictions about the future often look ridiculous in hindsight.

  44. Alan Fox:

    Barry Arrington at UD responds to some UD comments here:
    This is an assertion. It is not an argument.

    And by extension, thought-experiment philosophical zombies who behave in exactly the same way as the real entities they are supposed to impersonate will be indistinguishable from them. Seems that, following Barry, Barry is asserting, not arguing.

    Here is the logic that I think Barry is using:
    1. The zombie argument proves that consciousness is separable from brain states and hence from behavior. Barry is not trying to restate this argument but is assuming it is true based on Gelernter’s references to philosophy.

    2. If consciousness is separable from brain states and hence behavior, then the presence or absence of consciousness is invisible to natural selection as a mechanism of evolution.

    3. Hence there is no reason to think consciousness evolved through natural selection.

    4. But since we are all conscious, evolution cannot be a complete explanation.

    Now I’ve noted in other posts that this argument is not complete (e.g. there are other mechanisms of evolution than natural selection which may apply).

    But I think that you cannot dismiss this argument without addressing the zombie argument in premise 1. If you just assume that brain states must be linked to consciousness in such a way that consciousness causes behavior through this linkage, then you are not addressing the whole argument BA is making.

    I have not studied her current response, but I think Dr. Liddle’s original response, and that of some other posters in the threads, are incomplete because of this, and hence BA is correct in calling attention to this logical gap.

  45. Alan Fox: I’m not suggesting research into cognition and consciousness should be limited to studying the brain. I’m sure much can be learnt from observing the whole organism. Perhaps it is a semantic issue. The idea of qualia may be useful as a descriptor in hypothesizing about aspects of cognition. I am yet to be convinced, as it seems to me that a category which merely encompasses already well-defined sub-categories is somewhat superfluous.

    Of course human research into human cognition is limited by the level of collective human cognitive ability. I don’t assume that all that lies within our universe is necessarily amenable to scientific human endeavour, but history has shown that predictions about the future often look ridiculous in hindsight.

    You can use organism states instead of brain states without affecting the philosophical argument about the limitations of physicalism.

    I completely agree with your last paragraph. In this thread I’m playing a devil’s advocate role, I guess, to try to make sure the logic of the qualia argument is completely presented.

  46. BruceS: Look at it the other way. Maybe the existence of qualia are a fatal problem for trying to understand mental events by understanding only brain events.

    I agree with Alan. I also find the introduction of “quale” and “qualia” to be misguided, and to have set people off onto a wild goose chase.

    I see the idea of “trying to understand mental events by understanding only brain events” as equally misguided. It seems to be based on presuppositions which would imply that consciousness is useless.

    And if we cannot understand mental events by understanding only brain events, then that proves a limit to science and physicalism.

    Why do we even need to understand “mental events”? Are there any such things, other than in a metaphorical sense?

  47. BruceS: Now I’ve noted in other posts that this argument is not complete (e.g. there are other mechanisms of evolution than natural selection which may apply).

    I agree. I keep mentioning the explanatory power of sexual selection by mate choice with regard to brain evolution, especially with regard to artistic activities (non-exhaustive list: poetic language, music, visual art) where a simple survival element is difficult to postulate.

    1. The zombie argument proves that consciousness is separately from brain states and hence from behavior. Barry is not trying to restate this argument but assuming it is true based on Gelernter’s references to philosophy.

    A simple thought experiment (though it has been done – Dr Guillotine?) refutes the idea of consciousness independent of the brain; remove the brain and observe whether consciousness is affected.

    ETA poetic

  48. BruceS: In this thread I’m playing a devil’s advocate role, I guess, to try to make sure the logic of the qualia argument is completely presented.

    I’m sure Barry appreciates your efforts. 🙂 Seriously, I hope he can find time to visit.
