The problem of consciousness is, notoriously, Hard. However, it seems to me that much of the hardness is a result of the nature of the word itself. “Consciousness” is a noun, an abstract noun, derived from an adjective: “conscious”. And that adjective describes either a state (a person can be “conscious” or “unconscious” or even “barely conscious”) or a status (a person, or an animal, is said to be a “conscious” entity, unlike, for example, a rock).
But what must an entity be like to merit the adjective “conscious”? What properties must it possess? Merriam-Webster gives as its first definition: “perceiving, apprehending, or noticing with a degree of controlled thought or observation”. In other words, the properties of a conscious thing refer to what it is doing, or is capable of doing. And, indeed, the word is derived from a Latin verb: scire, to know. So “consciousness” may be better thought of as an abstract noun derived not from an adjective that describes some static attribute of an entity, but from one that implies that that entity is an agent capable of action.
And so, I will coin a new verb: to conch. And I will coin it as a transitive verb: an entity conches something, i.e. is conscious of something.
Armed with that verb, I suggest, the Hard Problem becomes tractable, and dualism no more of an issue than the dualism that distinguishes legs and running.
And it allows us, crucially, to see that an animal capable of conching itself, and conching others as other selves, is going to be something rather special. What is more, an animal capable of conching itself and others as selves will not only experience the world, but will experience itself experiencing the world.
It will, in a sense, have a soul, in Hofstadter’s sense, and even a measure of life beyond death in the conchings of other conchers.
[As has this post…]
Hi Elizabeth,
I don’t see how “to conch” renders the Hard Problem tractable any more than “to be conscious of” does. Could you elaborate?
on February 26, 2012 at 1:45 am said:
It will have, in a sense, a soul, one even capable, if not of immortality, of existence beyond death, as a concher in the conchings of other conchers.
In other parlance, if “I” is an avatar created by the brain to interact with others in a social construct, then perhaps “I” can also exist in the brains of others.
Sounds like a synonym for “perceives”. But put it in a novel and maybe you can replace “grok”.
I’m not sure we need a new word for this, since we have the term intentionality, which captures the ‘aboutness’ of experience.
The idea of “the hard problem” goes back to the mid-nineties, when David Chalmers (in The Conscious Mind) started distinguishing “easy problems” in philosophy of mind from “hard problems”. The “easy problems” are problems that can be solved once we’ve specified a mechanism that can carry out some cognitive task (memory, visual perception, navigation).
But consciousness seems to be a “hard problem” because we don’t have the slightest idea how any mechanistic explanation could account for it. That is to say: it’s not “we know what a mechanistic explanation of it would look like, but we haven’t been able to find one” but rather “we don’t know what a mechanistic explanation of it would even look like, and the very idea of a mechanistic explanation of consciousness might be fundamentally incoherent”.
There’s a corresponding hard problem of intentionality, of course. Here, the problem is, “how could intentionality be mechanistically explained?” Does it even make sense to say that neural connections could account for the aboutness of experience and of thought?
Much of philosophy of mind/philosophy of neuroscience is a tough slog, but I do recommend Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness, by Alva Noe (who has a nice, quite readable essay here).
As hard as the hard problem is, what I’ve been struck with of late is how absolutely common and ordinary consciousness/awareness is, once we have living organisms at all. The hard problem is conceptually hard for us, but it is obviously not difficult at all for the kinds of physical/biological organization that result in consciousness to emerge. In fact, again given living organisms, conscious organisms evolutionarily downstream are absolutely commonplace.
So I’m also struck by the apparent contradiction between the ease with which consciousness can be instantiated and how hard it is for us to think about it. We’re clearly missing something.
Neil Rickert on February 26, 2012 at 4:37 am said:
Carl Sachs: But consciousness seems to be a “hard problem” because we don’t have the slightest idea how any mechanistic explanation could account for it.
But why should there be a mechanistic account?
We use mechanism as a basis for control, including remote control. If I had a mechanistic account of your consciousness, that could give me access to what you are conscious of. But the whole point of your consciousness is that it gives you access, not that it gives me access.
We use feedback as a tool for self-control. So we should expect consciousness to involve feedback systems, rather than mechanisms. As best I can tell, we will never be able to design a conscious system. You can only get consciousness with self-design, which is roughly what evolution and development are about.
There’s a corresponding hard problem of intentionality, of course. Here, the problem is, “how could intentionality be mechanistically explained?”
In a way, it is really the same problem. And it’s just as unlikely that there will be a mechanistic account of intentionality. But we do solve it using feedback, and we do that in public where it can be observed. But almost nobody seems to be looking.
[Footnote: I am attempting to respond to Carl’s comment, which is stuck waiting for moderation. It was probably trapped by the spam filter (too many urls). I can see it there, which is how I can reply to it, but I am unable to do anything about it. Probably people with author privileges can see it if they know where to look.]
Thank you, Neil. I figured it was the URLs that got it caught in the spam filter. I’ll respond once it’s been released and others can take a look.
Carl
Neil Rickert: As best I can tell, we will never be able to design a conscious system. You can only get consciousness with self-design, which is roughly what evolution and development are about.
I’d say it’s too early to make either of those assertions, at least in such general terms. If consciousness is understandable in principle, then it’s very probable that eventually it will be manufacturable in practice. I don’t see any laws of physics preventing it. And I’m not too sure about the “self-design” thing, either. I wouldn’t have phrased it that way. I’m sure I didn’t design my consciousness myself. Its potential was preloaded into me, and it developed as the rest of me did. I’m not willing to predict how long it will be, but I don’t see any obstacle except our current ignorance to being able to install it into a manufactured machine.
sledgehammer on February 26, 2012 at 6:38 am said:
IANAX, just a humble physicist. I have a question for the neuros and/or psychs in attendance.
Can consciousness be described as a continuum, from self-awareness in humans and other mammals all the way down to social awareness in insects like honeybees?
Heck, even plants might be seen as having a very low-level “awareness” of the sun’s position in order to track it with their leaves, even if it’s a purely mechanical, hydraulic mechanism. And who’s to say that a computer, e.g. one connected to a web of sensors and actuators and executing an adaptive algorithm, isn’t exhibiting some form of low-level “awareness” of its environment?
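For concreteness, here is a minimal feedback-loop sketch (a Python toy; the sun model, gain, and all numbers are invented) of the kind of purely mechanical, low-level “awareness” I mean:

```python
# A toy heliotropism loop: a controller "senses" the sun's angle and
# nudges a leaf toward it. The sun model and gain are made up.

def sense_sun(hour: float) -> float:
    """Toy sun position: sweeps 0-180 degrees over a 12-hour day."""
    return max(0.0, min(180.0, (hour - 6.0) * 15.0))

leaf_angle = 90.0   # current orientation of the 'leaf'
gain = 0.5          # how aggressively the loop corrects its error

for hour in range(6, 19):
    sun = sense_sun(hour)
    error = sun - leaf_angle    # the only thing the system 'knows'
    leaf_angle += gain * error  # feedback: reduce the error a bit
    print(f"{hour:02d}:00  sun={sun:6.1f}  leaf={leaf_angle:6.1f}")

# The loop tracks the sun with no representation beyond an error
# signal -- 'awareness' in only the thinnest, mechanical sense.
```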
Are there well-defined boundaries between various levels of conch, even in humans?
If there are no clear demarcations, that helps to explain why it’s so difficult to define.
Reciprocating Bill:
So I’m also struck by the apparent contradiction between the ease with which consciousness can be instantiated and how hard it is for us to think about it. We’re clearly missing something.
Yes, we’re clearly missing something fundamental. It’s a maddening intellectual itch, and we haven’t even figured out where to scratch. I hope I live long enough to see us make some significant progress.
I will say that I think subjective awareness must be an inevitable epiphenomenon of certain kinds of information-processing systems. Inevitable because, as you point out, consciousness is commonplace, even ubiquitous among animals with sufficiently complicated nervous systems. Epiphenomenal because I cannot see how subjective awareness can have any selective value whatsoever.
Take any conscious system and imagine subtracting out the subjective awareness while the information processing remains intact. Such a system, though no longer conscious, would behave identically to the original conscious system, and so selection would be unable to distinguish between the two. If consciousness is invisible to selection yet ubiquitous, it strikes me that it must be an epiphenomenon of something that does have selective value, like a sophisticated nervous system.
Consciousness is that mental experience which we lose upon falling into a dreamless sleep, and regain when we awaken. We all know what that refers to (as opposed to the slippery notion of “intelligence”). I am unsure how framing consciousness as an activity rather than this experience gets us any farther on the hard problem.
I don’t think we’re going to get very far discussing the problem here, except maybe to summarize all the cool research going on now in the labs of folks like Christof Koch (who is progressively dissecting the neural correlates of conscious experience, and simultaneously making fine distinctions in our phenomenology, such as between consciousness and attention) or Daniel Wegner (who is continuing the empirical challenge to the notion that consciousness is causal, rather than perceptual).
What I do think we can make progress on here is to make explicit the relationship between the mind/body problem and ID. Although ID proponents are loath to admit it (and often unaware of it), ID is predicated upon a dualistic, libertarian theory of mind. Denyse O’Leary and Michael Egnor and other ID enthusiasts argue for dualism, and talk about what they believe are the moral implications at stake, but never concede that if they are wrong, the arguments for ID disintegrate. Dembski is slightly more open about this, conceding that ID requires an “expanded ontology” (i.e. dualism) and stating that “I fully grant that my theology would crumble with the advent of intelligent machines”.
I have long thought that scientists are fighting the wrong fight against ID. It is not the claim that modern biology is correct and complete as it stands that shows ID is wrong. Rather, it should be constantly pointed out that ID rests squarely on old, intuitive notions of mind that are at best unprovable metaphysical assumptions, and are (as virtually all cognitive scientists believe) likely false.
Inevitable because, as you point out, consciousness is commonplace, even ubiquitous among animals with sufficiently complicated nervous systems.
Given there is no experimental design that solves the problem of other minds, I think this is an open question for now. I am certainly compelled to treat animals as though they had conscious experience (I don’t even eat them), but the less mammal-like they are, the less I tend to sense there is someone inside.
If we learn a lot more about neural correlates of conscious experience, we may be able to assess the type of awareness experienced by other animals based on their neuroanatomy, or brainwaves, or something. Until then, can we really know what it is like to be a bat, as Prof. Nagel famously asked?
aiguy: can we really know what it is like to be a bat?
I don’t even know what it’s like to be you!
One part of this that strikes me is the deep interconnection between some aspects of my consciousness and language. Of course, I don’t always articulate experience linguistically, but very often, I do – the internal monologue and all that. Trying to decide ‘what one thinks’, one almost has a dialogue with oneself, and it is very hard to think upon many abstract notions without such a linguistically-based dialogue – a progression towards a crystallised thought that is almost like someone explaining it to you, in sentences, or rather you explaining it to yourself.
Keiths – Already, this discussion illustrates what is “hard” about the hard problem.
Keiths:
I will say that I think subjective awareness must be an inevitable epiphenomenon of certain kinds of information-processing systems.
yet –
Take any conscious system and imagine subtracting out the subjective awareness while the information processing remains intact.
I’m wary of your second statement in light of your first (with which I agree, although “epiphenomenal” is a fraught characterization). If the first is accurate, then the thought experiment in the second is impossible, as it would follow that the only way to abolish consciousness is to disrupt or remove the sort of information processing relative to which consciousness is an inevitable epiphenomenon.
Neil:
As best I can tell, we will never be able to design a conscious system.
Oddly enough, I believe the opposite: that it is possible and it may be done, perhaps by modeling the physical arrangement of our system – which you would say has the wrong kind of history – after those that do have the right sort of history. I am somewhat persuaded by the various “substitution” thought experiments (e.g. imagine that we succeed in creating an artificial neuron…we begin substituting these neurons for those in the visual cortex of someone’s brain, etc.)…
However, the hard problem is such that we could succeed, yet not be any better able to explain WHY the physical system we have created has subjectivity than we are able to explain why WE do. Nor could we ever really ascertain whether the system is actually conscious, particularly if you accept the possibility of zombies (which I don’t). All we would have is the behavior of the system, and unconscious zombies notoriously behave precisely as do their conscious counterparts, including asserting that they are conscious and discussing the hard problem of consciousness.
I agree with KeithS in that I don’t see that coining a new word for it actually makes the hard problem of consciousness any more tractable, although it’s a shorter and handier label than the alternatives.
Coming at it from a slightly different angle, though, there is obviously a time component to consciousness. To be aware is to be aware of change, from moment to moment.
Suppose our awareness has a certain ‘frame-rate’ of 25 frames per second (fps), the rate at which a movie needs to be projected for us to perceive it as continuous motion. Now suppose our frame rate were changed to something like time-lapse photography, say only one frame per week, but still perceived by us as continuous. How would our awareness of the Universe change? Do we have any reason to think our ‘frame-rate’ of awareness is privileged over any other? It provides us with a certain temporal ‘resolution’, but why not a higher frame-rate, something like that of ultra-high-speed photography, which would provide even higher resolution, enabling us to see the flight of a bullet in slow motion, for example?
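A back-of-the-envelope calculation makes the point (a Python toy; the 25 fps figure is from above, while the bullet speed and viewing width are assumed round numbers):

```python
# How many perceptual 'frames' would a bullet's flight occupy at
# different hypothesised frame-rates of awareness?

human_fps = 25.0                       # hypothesised perceptual frame rate
timelapse_fps = 1.0 / (7 * 24 * 3600)  # one frame per week

bullet_speed = 900.0   # m/s (assumed)
view_width = 1.0       # metres of visual field (assumed)
crossing_time = view_width / bullet_speed  # ~1.1 ms to cross our view

for label, fps in [("human, 25 fps", human_fps),
                   ("one frame/week", timelapse_fps)]:
    print(f"{label}: {crossing_time * fps:.9f} frames")

# At 25 fps the bullet occupies ~0.03 of a single frame -- invisible.
# Only a vastly higher 'frame rate' would resolve its flight.
```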
One of my favorite SF novels is Fred Hoyle’s October the First Is Too Late. It’s not great literature by any means, but it does grapple with these challenging ideas about our concepts of time and consciousness in an accessible way. (For those wanting to refresh their memories of what Hoyle wrote, the relevant passages are excerpted here.) What I find fascinating is that it allows not only for the possibility that our perception of the sequential flow of time could be an illusion, but also for the possibility that we could experience a million different consciousnesses and never know it.
Umm, “conch” is already taken, i.e. it has a meaning. Heck, Key West has been called the “Conch Republic”.
Allan Miller: I don’t even know what it’s like to be you!
And I don’t even know what it is like to be me.
This “what it is like to be” kind of talk is deeply misleading and confused.
If I know what it is like to be me, then I ought to be able to describe that to others. But I cannot.
If I know what it is like to be me, then I ought to be able to compare what it is like to be me when I am down with the flu, with what it is like to be me under more normal conditions. But I can make such a comparison only in very vague terms, such as “it feels awful when I am down with the flu.”
Neil,
And I don’t even know what it is like to be me. This “what it is like to be” kind of talk is deeply misleading and confused. If I know what it is like to be me, then I ought to be able to describe that to others. But I cannot.
You are using a very restricted sense of “to know” here. I know what the word “consciousness” refers to, although I cannot explain what it feels like. The fact that we know things that we cannot express directly in words is obvious (and pretty much the reason for poetry).
This relates to Allan Miller’s point about consciousness and language. Dan Dennett argues that language is essential to consciousness, as consciousness is the way we narrate our experience. This leads him to say weird things, like that an AI word processor would be more conscious than a dog. But just as it seems clear to me that we can know things non-linguistically, we can also experience them non-linguistically.
Reciprocating Bill,
There are a few reasons to question these “successive replacement” thought experiments. They seem compelling if you’ve already bought into a Dennett-esque functionalist account, where all you have to do is reproduce an information processing system of sufficient complexity. Searle’s infuriating Chinese Room questions whether this would be sufficient to produce real understanding, however, and suggests there is something very special about the biological properties of the brain that gives rise to conscious thought. Penrose and others go a level deeper, arguing that there are exotic physical processes underlying both our mental abilities and our phenomenological experience of them. If Searle or Penrose is right, then since we don’t know just what properties are necessary, we may not even be able to identify what features we need to replicate.
Carl Sachs:
But consciousness seems to be a “hard problem” because we don’t have the slightest idea how any mechanistic explanation could account for it. That is to say: it’s not “we know what a mechanistic explanation of it would look like, but we haven’t been able to find one” but rather “we don’t know what a mechanistic explanation of it would even look like, and the very idea of a mechanistic explanation of consciousness might be fundamentally incoherent”.
I think we know a lot about what a mechanistic explanation consists of, but we cannot understand it by experience. We cannot experience what happens when consciousness develops in an entity as a new property, because we can never experience what it is first to be an entity without consciousness. I see the Hard Problem as comparable to the problem presented in Frank Jackson’s Mary the color scientist: a technical description can never become an experience.
Neil, I think you are barking up the wrong tree. First, consciousness is only “the tip of the iceberg” of your mind. Second, by language you can describe only “the tip of the iceberg” of your conscious experiences.
Really, they should call this the fun problem of consciousness.
Whatever those exotic physical processes, they are also utterly common. The flattened squirrel in the road employed (or perhaps instantiated) them until just a little while ago, as do countless trillions of other living organisms at this moment. Moreover, in the instance of mammals, the material arrangement of a single fertilized egg is capable of imploding the substances carried in mother’s blood stream – oxygen, proteins, lipids, minerals, sugars etc., none in themselves mysterious – in such a way that it eventually replicates and differentiates itself into kidneys and genitalia and thymus and stomach and bones and brain, marshals those exotic processes (apparently), and thereby becomes both a living and a conscious organism.
So, unless we are willing to say that these exotic processes also account for embryological development, and perhaps the fact of life itself, and that they are incapable of being deliberately manipulated to yield life and consciousness (which sounds an awful lot like a return to vitalism), it isn’t clear to me why we too will not eventually construct similar material seeds that have the same cunning. Are not single cells – including zygotes – composed of material substances, describable in the languages of physics and chemistry, organized in a very special way?
[Note: I hope I figured out who said what in this post – if something is misattributed, please let me know! – Lizzie]
I’m just borrowing it. I’ll give it back when I’m done.
Reciprocating Bill,
I would tend to agree about squirrels, but how can we know? What of a fish? A flea? A clam?
Moreover, in the instance of mammals, the material arrangement of a single fertilized egg is capable of imploding the substances carried in mother’s blood stream – oxygen, proteins, lipids, minerals, sugars etc., none in themselves mysterious – in such a way that it eventually replicates and differentiates itself into kidneys and genitalia and thymus and stomach and bones and brain, marshals those exotic processes (apparently), and thereby becomes both a living and a conscious organism.
We understand ontogeny only a little bit better than consciousness. There was some big result very recently about the role of EM fields in directing development I recall…
So, unless we are willing to say that these exotic processes also account for embryological development, and perhaps the fact of life itself, and that they are incapable of being deliberately manipulated to yield life and consciousness (which sounds an awful lot like a return to vitalism), it isn’t clear to me why we too will not eventually construct similar material seeds that have the same cunning. Are not single cells – including zygotes – composed of material substances, describable in the languages of physics and chemistry, organized in a very special way?
First, I wasn’t suggesting that it is impossible to deliberately manipulate substances to reproduce living, conscious organisms; rather, I was saying it may be that we are unaware of the critical features, and so we might not even undertake to reproduce them.
And positing currently unknown aspects of physics (quantum gravity or whatnot) to account for our extraordinary ability to think and feel isn’t really vitalism, or dualism. I believe that simply positing some irreducible, undetectable elan vital or res cogitans doesn’t provide any understanding of anything. I’m just saying that consciousness is deeply mysterious, and physics is deeply mysterious, and that physics now seems inseparable from mind, so we shouldn’t hastily assume that mind can be reduced to (or engendered by) the sort of physical causes we currently understand.
I don’t think so.
I know what the word “consciousness” refers to, although I cannot explain what it feels like.
I don’t know what “consciousness” means. Sure, I have a vague sense of what people intend when they use that word. But I think there’s a lot of inconsistency between the use of that word by different people. So, to me, it seems like a rather vague term that probably cannot be easily refined.
I guess that I did not explain myself well enough.
The expression “what it is like” is an expression of comparison. In order to know what it is like to be me, I have to be able to compare being me with not being me. I don’t have any way of doing such a comparison.
I see. Actually, I’ve been following your blog and have been a little confused by your concept of “measurement”. This explains a lot.
That said, I don’t agree with you. “Comparison” is a logical operation and it belongs to the level of symbolic languages. You can’t compare your experiences logically, simply because they are all different. The result of every comparison would be “different”.
There are interpretations of how nature works (I mean biological systems and processes below the level of symbolic languages), and my interpretation is: nature doesn’t compare, it selects. Even brains below that level aren’t computers. Heh, after discussing this matter several times I’ve even formulated a maxim:
The biggest barrier to understanding our brains is our logical mind.
Comparison is a perceptual operation.
Yes, we use a “compare” instruction in computers, and what that does is a logical operation. But that is very different from what we mean by “compare” in ordinary life.
The thing that’s always bothered me about Searle’s thought experiment is that the Chinese room occupant has a single data stream as input, no memory, and executes a purely deterministic algorithm. It would be like a human child, born with only a sense of hearing, no sight, no touch, no taste, etc, and a severe case of Alzheimer’s, trying to decipher a spoken language.
I also think that some randomness, fuzzy logic if you will, is an essential ingredient in breaking the rigidity of a purely deterministic, rule-based inference engine, thereby allowing for the free association, trial-and-error, recursion, etc. that make conscious experience so rich.
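To illustrate what I mean (a Python toy; the rule table is invented), compare a purely deterministic rule-follower with the same machinery plus a pinch of randomness:

```python
import random

# A rigid rule table (the Chinese Room occupant) versus the same
# table with occasional random exploration -- crude 'fuzziness'
# that permits trial and error.

RULES = {"hello": "hi", "how are you": "fine", "bye": "goodbye"}

def deterministic_reply(msg: str) -> str:
    # Same input always yields the same output: no exploration.
    return RULES.get(msg, "???")

def stochastic_reply(msg: str, epsilon: float = 0.2) -> str:
    # With small probability, try a random response instead of the
    # rule-matched one, so novel pairings can be stumbled upon.
    if random.random() < epsilon:
        return random.choice(list(RULES.values()))
    return deterministic_reply(msg)

random.seed(0)
print([stochastic_reply("hello") for _ in range(8)])
```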
Really? You think that we can’t know anything that we can’t put into words? That really does seem restrictive to me. I can’t tell you what “red” looks like to me, but I do feel I know what red looks like to me. That is the whole point of the “Mary, the color scientist” thought experiment.
I don’t know what “consciousness” means. Sure, I have a vague sense of what people intend when they use that word. But I think there’s a lot of inconsistency between the use of that word by different people. So, to me, it seems like a rather vague term that probably cannot be easily refined.
When we say “I lost consciousness” and “I regained consciousness”, it doesn’t seem to be very ambiguous to me. Nobody would ask, “Whatever do you mean by that?” I agree we may learn to make finer distinctions (e.g. between consciousness and attention), but I think the term has real meaning. “Intelligence”, on the other hand, means very little indeed, especially when applied to contexts outside of human mentality.
I was confused by the OP
“Conch” as a verb is also “taken”, as everyone in the chocolate factories knows.
Doesn’t that idea end up in the chicken-egg problem? What’s the “first comparison”? You must have thought it.
It’s my bedtime now, I’ll return tomorrow.
No, I don’t think that at all.
When we know something, we are able to make distinctions. We might not be able to give a precise description of what we know, but we can say something about the distinctions we are able to make.
That really does seem restrictive to me. I can’t tell you what “red” looks like to me, but I do feel I know what red looks like to me.
I do not feel that I know what red looks like.
We say “Roses are red, violets are blue.” But that is making a distinction between roses and violets. I don’t see it as making a distinction between red and blue. Suppose I woke up tomorrow, and somebody had done surgery on my brain to switch the colors around. Would I notice? Or would I still be saying “roses are red, violets are blue.” I am not at all convinced that I would notice. For all I can tell, the color red might be identical to the taste of onion, but they seem different because my experience of the color red has it spatially distributed in a way that doesn’t happen for the taste of onion.
By the way, I assume you are aware that Frank Jackson changed his mind about that “Mary” argument.
Yes, I have thought about that. This might drift a tad off-topic. As best I can tell, comparison is something that philosophers just take for granted without attempting to explain.
My view: We make distinctions in the world. Psychologists call that “perceptual discrimination.” Roughly speaking, we make distinctions (using measurement), and we divide the world up based on those distinctions. This is roughly what Plato described as “carving nature at its joints”, except that there often are no joints other than those that we choose to identify for ourselves.
This dividing up the world gives it a hierarchical structure. We divide into parts, then we divide those parts into smaller parts. I call it partitioning, though it is often called categorizing. (I don’t like the term “categorizing” because it carries with it some assumptions that I think are mistaken).
Once we have suitably partitioned the world into a hierarchical structure, we compare things by virtue of their relative location within that hierarchy.
I see this partitioning as prerequisite to being able to perceive, and as prerequisite to being able to have objects and being able to have any symbolic representations at all.
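A crude sketch of what I mean (a Python toy, with an invented partition): comparison falls out of where two things sit in the hierarchy, not out of the things themselves.

```python
# The world divided into nested parts; two items are 'compared' by
# the depth of the deepest partition they share.

WORLD = {
    "living": {
        "plants": ["rose", "violet"],
        "animals": ["squirrel", "bat"],
    },
    "nonliving": {
        "rocks": ["granite"],
    },
}

def path_to(item, tree, trail=()):
    """Return the chain of partitions leading down to an item."""
    for name, part in tree.items():
        if isinstance(part, dict):
            found = path_to(item, part, trail + (name,))
            if found:
                return found
        elif item in part:
            return trail + (name,)
    return None

def compare(a, b):
    """Depth of the deepest partition shared by a and b."""
    shared = 0
    for x, y in zip(path_to(a, WORLD), path_to(b, WORLD)):
        if x != y:
            break
        shared += 1
    return shared

print(compare("rose", "violet"))    # 2 -- both are plants
print(compare("rose", "squirrel"))  # 1 -- both living, different parts
```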
Neil,
I disagree about this too! I can imagine a blue rose and a red violet. I could dye these flowers, and show them to you, and you would also see a blue rose and a red violet.
Suppose I woke up tomorrow, and somebody had done surgery on my brain to switch the colors around. Would I notice? Or would I still be saying “roses are red, violets are blue.”
Nobody knows. It could depend on how the colors got switched. If the wiring in the retina was switched, it could be that we awaken (regain consciousness) to find that red fire engines now look blue. But if the wiring is switched downstream, it could be that we notice no difference.
I am not at all convinced that I would notice. For all I can tell, the color red might be identical to the taste of onion, but they seem different because my experience of the color red has it spatially distributed in a way that doesn’t happen for the taste of onion.
??? This seems quite at odds with the neuroscience as well as the phenomenology.
By the way, I assume you are aware that Frank Jackson changed his mind about that “Mary” argument.
I was, and I think he’s now wrong (I do not believe that qualia make people do things).
[Again, I may have misattributed some words here – let me know and I will fix 🙂 – Lizzie]
I’m so sorry so many interesting comments to this post were lost in the crash – if anyone has any tabs still open at this page and could mail me the contents that would be fantastic. I hadn’t read them all properly, and was looking forward to it 🙁
Learned an important lesson or two, though, about my own impulsivity…
I had nothing to contribute, save to observe that the verb “to conch” is already in use in the chocolate-making industry.
I doubt, though, that many here will be bothered by that.
I knew that.
heh.
I have always thought of consciousness as the brain’s sandbox, where we can take memories (however recent) out and play with them in something like a simulation, so that we get hypothetical results which may lead to new data points that can in turn be stored as “memories”.
Being magnificently ignorant of psychology and neuroscience, I have to ask whether that is anything like what all you Clever People think it to be?
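In case it helps, here is the picture in my head as a toy Python sketch (the episodes and the ‘simulated consequence’ rule are entirely invented, loosely in the spirit of replay-style learning):

```python
import random

# Stored experiences are replayed with varied actions, and the
# imagined outcomes are written back as new pseudo-memories.

memories = [("saw_berry", "ate_it", "+energy"),
            ("saw_snake", "approached", "-health")]

def imagine(episode):
    """Replay an episode with a different action and guess the result."""
    situation, _action, outcome = episode
    new_action = random.choice(["approached", "avoided", "ate_it"])
    # Crude simulated consequence: avoiding anything is outcome-neutral.
    new_outcome = "neutral" if new_action == "avoided" else outcome
    return (situation, new_action, new_outcome)

random.seed(1)
for _ in range(3):
    hypothetical = imagine(random.choice(memories))
    memories.append(hypothetical)  # store the simulation as a 'memory'
    print(hypothetical)
```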
PS: OED, 3rd Ed. also defines ‘conch’ as “a nickname for the lower class of inhabitants of the Bahamas, Florida Keys, etc., from their use of conchs as food.”
Here is a modified version of a comment I was attempting to post when the blog went down. Keiths had remarked that consciousness may arise whenever information processing of a certain kind is present. He also remarked on the possibility of non-conscious zombies and the epiphenomenal nature of consciousness.
Keiths:
I understand the first thought experiment. My point is that in postulating “another being that receives the same stimulus and processes information in the same way, but without subjective awareness,” you have surrendered your first assertion – that consciousness inevitably arises where certain kinds/complexity of information processing are present. You can’t have that, and zombies, in the same world. Or at least not in the same thesis vis-à-vis consciousness.
Regarding the thought experiment vis-à-vis water, wetness and weight, does that work? Zombie thought experiments stipulate that zombies exhibit physical and behavioral (and hence informational) characteristics that are identical in every respect to those of their conscious twins – yet are devoid of awareness. They can be good psychotherapists, strive for sexual contact, write poetry in response to sunsets, compose music, use humor, and engage in passionate defenses of the causal efficacy of mental states, bolstered by nuanced reports of their own experience.
Your imagined dry water, to be analogous, would need to be indistinguishable in all causal respects from wet water, not just with respect to weight. It would be composed of H2O, behave as a liquid at room temperature, be useful in washing dishes, babble in brooks, possess surface tension, quench thirst, and lubricate sexual encounters between zombies in exactly the way that wet water does – yet still be dry. Upon postulating “dry water” (all else being equal) with these properties, I think our discussion would turn on, “what do you mean by ‘wet’ and ‘dry’?” Similarly, our discussion of our exquisitely sensitive zombie would turn on, “what do you mean by conscious?”
Oddly enough, the wetness of water and consciousness are often linked in other discussions of consciousness: the wetness of water as an emergent property is often cited as analogous to consciousness as an emergent property of matter/energy organized in a particular way.
OK, thanks to aiguy, this thread is sort-of restored! I’m afraid all the restored posts have my penguin (because I posted them all as me and then edited), but they should be correctly attributed and dated.
Thanks to all! Hopefully March will be less of a bumpy ride…
Reciprocating Bill,
I think you’re getting hung up on my use of counterfactuals. Using a counterfactual in an argument does not require one to assert the truth of the counterfactual. Let me rephrase my arguments in a way that makes this clear:
Argument 2a:
1. If we put a gallon of water in a bucket and weigh it on a scale, we observe a weight of about 8.35 pounds.
2. Water is wet.
3. In a world where water was dry, but everything else about water was unchanged, the weighing experiment would produce the same result.
4. Therefore, water’s wetness is irrelevant to its weight.
Note that this conclusion does not depend on the dryness of water (the counterfactual).
Argument 2b:
1. Subjective awareness seems inevitably to accompany certain kinds of information processing.
2. In a world where such information processing could occur without subjective awareness, behavior would be unchanged.
3. Therefore, subjective awareness is irrelevant to behavior (and is therefore invisible to natural selection).
This conclusion, like the previous one, does not depend on the truth of the counterfactual (that subjective awareness be separable from certain kinds of information processing).
Which is no doubt why wetness popped into my mind when I was searching for an analogy. It’s a nice illustration of the associative nature of memory: by activating the networks in my brain that deal with the concept of consciousness, I inadvertently primed the circuits associated with the concept of wetness.
I hate to jump in at this late date with a side-stream, but I just have to assert that consciousness is inextricably tied to emotion. Looking at “lower” animals, it seems obvious that emotion and the perception of pain come first, and as brain size increases, the brain supports more complex ways of avoiding pain and achieving pleasure.
My own opinion is that awareness of others precedes awareness of self, in the brain size and complexity series. The ability to see other creatures in distress seems to span quite a range of brain sizes. It has obvious adaptive value. It would seem that seeing oneself is a step up in complexity.
But I doubt if any of this would occur without emotion. I think “faking” emotion would be very difficult for a simulation or AI machine.
Another digression: people with Asperger’s seem to lack emotional response, but when they self-report, they seem to have exaggerated emotions. I personally believe this is tied to some of their feats of memory and computation. Their learning systems are jacked up to the breaking point.
I might be missing some context here.
If premise 2 is false, the argument fails. So it does depend on the truth of the counterfactual. Maybe it doesn’t require that the counterfactual be factual, and I suppose that is really what you intended.
In any case, I challenge premise 2.
Some time ago – I think it was in the 1970s – some people at Northwestern University were writing computer programs. And then they would put an AM radio near the computer while the program ran. And you would hear interesting music, due to the static picked up by the radio.
If you did exactly the same information processing on a different computer, which happened to be different in its timing and in its electrical noise emissions, you would not get the same music.
The same information processing, but different behavior.
Sure, with the same information processing, you would get the same information processing behavior. But when people talk of “behavior” without that qualification, they are usually talking of observable physical behavior.
(As a footnote – I personally think these issues of information processing are a red herring).
Neil,
I indicated in my comment what I meant by ‘counterfactual’:
And:
Regarding ‘behavior’, I’m using it in the typical sense. If asked to describe the behavior of an animal, few people would bother to characterize the electromagnetic emissions of its brain. Nor would I.
Really? How so?
Me: (As a footnote – I personally think these issues of information processing are a red herring).
I see the information processing theories of mind as mistaken.
Yes, we deal in information. But the information we deal with already informs us about the world. It does not require much processing. I see the brain as involved in gathering the information in the first place, and gathering it in a form that is already useful, so that it does not require processing.
(I’m struggling with the fact that 3 seems to be not much more than a restatement of 2, rather than something that follows from it. This seems to show that epiphenomenalism is thinkable, but not much more.)
Would it not follow, were subjective awareness invisible to selection, that, by virtue of drift and the absence of stabilizing selection, most species would come to be devoid of awareness, or would acquire bizarre forms of awareness irrelevant to their usefulness, correspondence to actual states of affairs, etc.? In particular, if awareness imposes any costs (say, extra metabolic work), then in the absence of any other relevance to selection, would we not expect to be zombies?
Assuming we are not, then why not?
Consider a simple reflex. Your body will respond without consciousness, but the conscious mind will then catch up to the reflex and own it. There are experiments which show that decisions can be made before the conscious mind is aware of the decision. Split-brain experiments show that the conscious mind will manipulate its perception of the world to fit a narrative. Or a simpler example: your body will react to pain before you are even aware of it, or sometimes even before you’ve experienced the injury.
Consciousness is the sensation of the mind.
Reciprocating Bill,
Those are very difficult points you have raised, sir. This thread is interesting; aiguy has some good stuff too. Good thing Liz recovered it all and kept it going.
I’m sorry, but I don’t understand what kind of process you have in mind. IMO we discriminate objects which get our attention – from the beginning of our ability to observe/experience – and build our worldview on those objects and their (later experienced/thought) relations.
I don’t understand how and when the prerequisite partitioning happens, if it must exist before we are able to perceive. Where have I gone astray?