The Ghost in the Machine

Let’s suppose there really is a Ghost in the Machine – a “little man” (“homunculus”) who “looks out” through our eyes, and “listens in” through our ears (interestingly, those are the two senses most usually ascribed to the floating Ghost in NDE accounts).  Or, if you prefer, a Soul.

And let’s further suppose that it is reasonable to posit that the Ghost/Soul is inessential to human day-to-day function, and necessary merely for conscious experience and/or “free will”; that it is at least hypothetically possible to imagine a soulless simulacrum of a person who behaved exactly as a person would, but was in fact a mere automaton, without conscious experience – without qualia.

Thirdly, let’s suppose that there are only a handful of these Souls in the world, and the rest of the things that look and behave like human beings are Ghostless automatons – soulless simulacra. But, as in an infernal game of Mafia, none of us knows which are the Simulacra and which are the true Humans – because there is no way of telling from the outside – from an apparent person’s behaviour, social interactions, or cognitive capacities – which is which.

And finally, let’s suppose that souls can migrate at will, from body to body.

Let’s say one of these Souls starts the morning in Lizzie’s body, experiencing being Lizzie, and remembering all Lizzie’s dreams, thinking Lizzie’s thoughts, feeling Lizzie’s need to go pee, imagining all Lizzie’s plans for the day, hearing Lizzie’s alarm clock, seeing Lizzie’s watch, noting that the sky is a normal blue between the clouds through the skylight.

Somewhere an empty simulacrum of Barry Arrington is still asleep (even automatons “sleep” while their brains do what brains have to do to do what brains have to do).  But as the day wears on, the Soul in Lizzie’s body decides to go for a wander.  It leaves Lizzie to get on with stuff, as her body is perfectly capable of doing; she just won’t be “experiencing” what she does (and, conceivably, she might make some choices that she wouldn’t otherwise make, but she’s an extremely well-designed automaton, with broadly altruistic defaults for her decision-trees).

The Soul sees that Barry is about to wake up as the sun rises over Colorado, and so decides to spend a few hours in Barry’s body.  And thus experiences being Barry waking up, probably needing a pee as well, making Barry’s plans, checking Barry’s watch, remembering what Barry did yesterday (because even though Barry’s body was entirely empty of soul yesterday, of course Barry’s brain has all the requisite neural settings for the Soul to experience the full Monty of remembering being Barry yesterday, and what Barry planned to do today, even though at the time, Barry experienced none of this).  The Soul also notices the sky is its usual colour, which Barry, like Lizzie, calls “blue”.

Aha.  But is the Soul’s experience of Barry’s “blue” the same as the Soul’s experience of Lizzie’s “blue”?  Well, the Soul has no way to tell, because even though the Soul was in Lizzie’s body that very morning, experiencing Lizzie’s “blue”, the Soul cannot remember Lizzie’s “blue” now it is in Barry’s body, because if it could, Barry’s experience would not simply be of “blue” but of “oh, that’s interesting, my blue is different to Lizzie’s blue”. And we know that not only does Barry not know what Lizzie’s blue is like when Barry experiences blue (because “blue” is an ineffable quale, right?), he doesn’t even know whether “blue” sky was visible from Lizzie’s bedroom when Lizzie woke up that morning.  Indeed, being in 40-watt Nottingham, it often isn’t.

Now the Soul decides to see how Lizzie is getting on.  Back east, over the Atlantic it flits, just in time for Lizzie to get on her bike for the ride home from work.  Immediately the Soul accesses Lizzie’s day, and ponders the problems she has been wrestling with, and which, as so often, get partly solved on the bike ride home.  The Soul enjoys this part.  But of course it has no way of comparing this pleasure with the pleasure it took in Barry’s American breakfast, which it had also enjoyed, because that experience – those qualia – are not part of Lizzie’s experience.  Lizzie has no clue what Barry had for breakfast.

Now the Soul decides to race Lizzie home and take up temporary residence in the body of Patrick, Lizzie’s son, who is becoming an excellent vegetarian cook, and is currently preparing a delicious sweet-potato and peanut butter curry.  The Soul immediately experiences Patrick’s thoughts, his memory of calling Lizzie a short while earlier to check that she is about to arrive home, and indeed, his imagining of what Lizzie is anticipating coming home to, as she pedals along the riverbank in the dusk.  The Soul zips back to Lizzie and encounters something really very similar – although it cannot directly compare the experiences – and also experiences Lizzie’s imaginings of Patrick stirring the sweet potato stew, and adjusting the curry powder to the intensity that he prefers (but she does not).

As Baloo said to Mowgli: Am I giving you a clue?

The point I am trying to make is that the uniqueness of subjective experience is defined as much by what we don’t know as by what we do.  “Consciousness” is mysterious because it is unique.  The fact that we can say things like “I’m lucky I didn’t live in the days before anaesthesia” indicates a powerful intuition that there is an “I” who might have done, and thus an equally powerful sense that there is an “I” who was simply lucky enough to have landed in the body of a post-anaesthesia person.  And yet it takes only a very simple thought experiment, I suggest, to realise that this mysterious uniqueness is – or at least could be – a simple artefact of our necessarily limited PoV.  And it is a simple step, I suggest, to consider that a ghostless automaton – a soulless simulacrum – is actually an incoherent concept.  If my putative Soul, who flits from body to body, is capable of experiencing not only the present of any body in which it is currently resident but also that body’s past and anticipated future, yet is incapable of simultaneously experiencing anything except the present, past, and anticipated future of that body, then it becomes a redundant concept.  All we need to do is to postulate that consciousness consists of having access to a body of knowledge available only to that organism, by simple dint of that organism being limited in space and time to a single trajectory.  And if that knowledge is available to the automaton – as it clearly is – then we have no need to posit an additional Souly-thing to experience it.

What we do need to posit, however, is some kind of looping neural architecture that enables the organism to model the world as consisting of objects and agents, and to model itself – the modeler – as one of those agents.  Once you have done that, consciousness is not only possible for a material organism, but inescapable. And of course looping neural architecture is exactly what we observe.

I suggest that the truth is hiding in plain sight: we are conscious because when we are unconscious we can’t function.  Unless the function we need to perform at the time is to let a surgeon remove some part of us, under which circumstances I’m happy to let an anaesthetist render me unconscious.

367 thoughts on “The Ghost in the Machine”

  1. You cannot know that. Human societies have always had a majority of people who believed they have free will and are accountable for their choices.

    Accountability has social utility even if people do not have magic free will, even if their actions are fully determined. This is because their behavior will be changed by consequences.

    Just an aside. My academic background is special education. Dealing with the mentally retarded.

    Now one could argue — and it has been argued — that retarded people should not be held accountable for their actions in some circumstances.

    But this is not an acceptable position, and it is not the position of parents’ organizations or of schools or of social workers.

    A person who cannot be held accountable will not be allowed to live outside a cage. Or a jail. So by denying accountability, you get the opposite of the intended effect. Accountability is what makes political liberty possible.

  2. petrushka: A quibble:
    The most important potential aspect of free will is not choosing, but creating.

    Agree, but why are you changing the point? Creating and choosing are not the same. We are debating choosing.

    petrushka

    When you argue for a magic man behind the mask, a non-material chooser, you are abandoning the opportunity to understand how creativity works. It could be very interesting, and it’s a shame that ID proponents voluntarily choose not to think.

    Why are you saying this? First of all, I’m not an ID proponent. Second, I’m not abandoning the attempt to understand how creativity works, and I do not know how a philosophical system can prevent the search for knowledge.

  3. Robin:
    One way to understand the severe reaction some folks – particularly anti-science, conservatively religious types – have to compatibilism is that such folks are stuck with a perspective that materialism means that humans are computers. As such, our every action and thought would be the product of a specific program. Even accepting that the program can be modified on a daily/hourly/minute-by-minute basis based upon experience, they still cannot conceive of programming any more advanced than what runs on your phone or laptop. Seriously – their entire concept of materialism boils down to Human OS 2.0 and in such a scenario, there can be no such thing as freely choosing mustard or catsup that isn’t predetermined within the program.

    In many ways, it’s the Weasel latching/”the goals are smuggled in” ignorance all over again.

    Usually, when religious people say that materialism cannot account for what are supposed to be brain properties, the ones who come up with computer and program analogies are the materialists, insisting that self-learning programs can do this and that and that AI is almost there.
    Can you offer a better analogy for how a brain does what it does?

  4. petrushka: Accountability has social utility even if people do not have magic free will, even if their actions are fully determined. This is because their behavior will be changed by consequences.

    You can say that because you live in a society where the majority thinks that their free will is real and their actions are not fully determined.
    I would like to see you, not me, living in a society where the majority thinks that their actions are fully determined and therefore that they are not accountable for them; then you can tell me what living there is like.

    petrushka:
    Accountability is what makes political liberty possible.

    True accountability or pretended accountability?

  5. Agree, but why are you changing the point? Creating and choosing are not the same. We are debating choosing.

    I’m not changing the point. We are discussing free will. Binary choices are trivial. Any hand calculator can make binary choices. The really interesting capacity of living things is creating new things. Inventing stuff that did not exist before. Unanticipated stuff.

  6. You can say that because you live in a society where the majority thinks that their free will is real and their actions are not fully determined.

    No, I say that because accountability has social utility even if the agents being held accountable are fully determined robots.

    Accountability has utility with any agents that can learn.

  7. petrushka: No, I say that because accountability has social utility even if the agents being held accountable are fully determined robots.

    Accountability has utility with any agents that can learn.

    True accountability or pretended accountability?

  8. petrushka: Accountability is what makes political liberty possible.

    Is political liberty real, or is it another illusion?

  9. Blas:
    I think probably you do not understand my point. Choosing means that you have at least two possible options and you select one or the other.
    Lizzie said that if it were possible to repeat the same conditions under which she made each of her choices, she would always do the same thing she already did.
    Then there are no options there: given the conditions, Lizzie would always do the same thing, so the other option is not possible, it is not an option, and the process is not choosing.
    When Lizzie says she is choosing between chocolate or vanilla ice cream, she really is computing all the data she has, and her brain makes her ask for chocolate.

    Another favourite theme of mine. Today is rich in distractions!

    I think you are confusing two senses of “possible”. Any sentence with the word “possible” implies a set of conditions. For example, if I say it is possible for me to walk 20 miles in a day, that is shorthand for “it is possible given my body”. It may well be impossible given constraints in my diary or my lack of desire to do so, but that is not typically of concern in discussing this kind of thing. On the other hand, if I say it is possible for me to go to a meeting in London tomorrow, that is typically shorthand for “it is possible to fit it in with my other commitments”. It may well be impossible given my pathological hatred of meetings.

    So the question arises: when you say you have the option to choose vanilla or chocolate ice-cream, under what conditions are the alternatives possible? Usually we are talking about lack of constraints. They are both available on the ice cream counter, you have enough money for both, neither of them makes you ill etc. It may also be impossible to choose chocolate given your passion for vanilla and dislike of chocolate (we do sometimes say things like “I couldn’t possibly choose the chocolate, the vanilla looks so nice”) but this is not typically the kind of possibility we are referring to when we say we have options.

    So Lizzie can quite reasonably say that under a very extensive set of conditions (where conditions include her mental state, memories, motivations etc as well as lack of constraints) it would not be possible for her to choose anything else, but it would still be possible under the more relaxed type of conditions we usually mean when we talk about having options.

  10. Accountability means consequences for actions. Accountability happens even in the natural world. There are consequences for putting your hand in a fire. There are different consequences for eating ripe fruit. Consequences are what teaches and shapes what we do.

    There’s no pretend.

  11. I understand and appreciate the point Blas is trying to make. He wants the answer to the big question. The question similar to Einstein’s question about whether god had choices in the making of the universe. Einstein’s question is the anthropic question, the fine tuning question.

    Blas wants to know whether there is some agency within us that is not bound by prior cause. It’s the billiard ball determinism question.

    The “scientific” answer is that billiard ball determinism does not apply to the world that we can observe. There may be a god’s eye view in which everything, past present and future, can be known, but that view is not accessible to us.

    I find it interesting that in the traditional theological view — that god is omniscient — past, present and future all exist concurrently, and causation has no meaning.

    But he seems to think if free will isn’t “real” that accountability makes no sense, and this is simply wrong. We hold people accountable because accountability is a useful social construct. If people did not learn and change as a result of consequences, there could be no society at all.

  12. petrushka:
    Accountability means consequences for actions. Accountability happens even in the natural world.

    Accountability means acknowledgment and assumption of responsibility for actions, and it happens only in the human world, because a human is, or imagines he is, free to choose to do or not to do something.

    petrushka:
    There are consequences for putting your hand in a fire. There are different consequences for eating ripe fruit. Consequences are what teaches and shapes what we do.

    There’s no pretend.

    Yes, there are consequences for putting your hand in a fire, but you are accountable only if you put your hand in the fire because you wanted to, because you freely chose to put your hand in the fire. If you have no choice other than to put your hand in a fire, you are not accountable. You can pretend to be, but really you are not.

  13. petrushka,

    Absolutely – holding people to account is something people do to other people. Certain types of conditions are commonly accepted as excuses which remove or mitigate accountability (e.g. I did it in my sleep or I had a gun held to my head), others are not accepted (e.g. I really, really wanted to rob a bank), even though the second type may determine that the action will be performed (exactly what is acceptable as an excuse does seem to be one of the variable aspects of different societies).

  14. Accountability does not require the consent of the agent being held accountable, nor does it require intelligence in the usual sense.

    Accountability is simply the application of consequences.

    We are discussing two different things. You are discussing morality and I am discussing free will in the broadest sense.

  15. petrushka:

    “Blas wan’t to know whether there is some agency within us that is no bound by prior cause. It’s the billiard ball determinism question.”

    Yes. Really my point is that there are three options when we choose. 1) Our choice is determined by the initial conditions, and then it is determined and is not a real choice. 2) We choose randomly, and that isn’t a choice either. Or 3) we have some agency distinct from the physical universe that allows us to choose independently of the initial conditions. I do not see other options.

    petrushka:

    “The “scientific” answer is that billiard ball determinism does not apply to the world that we can observe. There may be a god’s eye view in which everything, past present and future, can be known, but that view is not accessible to us.”

    Yes.

    petrushka:

    “But he seems to think if free will isn’t “real” that accountability makes no sense, and this is simply wrong. We hold people accountable because accountability is a useful social construct. If people did not learn and change as a result of consequences, there could be no society at all.”

    I answered this in another comment.

  16. Totally agree. Learning – e.g., the systematic co-mingling of perception, comparative division, memory, and feedback – is not the same as data input in a computer. IDists, it seems, do not understand this distinction.

  17. Robin:
    Totally agree. Learning – e.g., the systematic co-mingling of perception, comparative division, memory, and feedback – is not the same as data input in a computer. IDists, it seems, do not understand this distinction.

    And can a physical system systematically co-mingle perception, comparative division, memory, and feedback?
    Can you give an example?

  18. Usually, when religious people say that materialism cannot account for what are supposed to be brain properties, the ones who come up with computer and program analogies are the materialists, insisting that self-learning programs can do this and that and that AI is almost there.
    Can you offer a better analogy for how a brain does what it does?

    I’m not aware of that many materialists (actually…any) who insist that the brain is like a computer. The statement that some self-learning programs are beginning to mimic human-type learning is an assessment of a process, not of the physical nature of brain or machine hardware.

    By contrast, the ID need to have brains be like computer programs (e.g., “deterministic programs”) is palpable. It’s also inaccurate. Computers are logic gates; brains are stimuli assessors. The big difference is that in brains, each and every stimulus assessment itself creates a stimulus to assess. In fact, most stimuli are ignored by the brain. It takes many years of development and honing to establish controls on what to filter and what to process. Computers, by contrast, are designed to process a specific way right out of the box and do not change over their lifecycle. Programs, by and large, do not affect the hardware to create new processing paths or develop different memory connections. And while sophisticated programs can change, the underlying hardware does not particularly (though you may lose some pathing or memory cells over time). Brains, by contrast, don’t run programs; their very nature is the processing of stimuli, and they assess the stimuli by creating models of associations. The more accurate the model, the longer the brain can last within its associated body.

  19. Accountability is a form of feedback. It is indifferent to the internal state of the agent being held accountable.

    Evolution implements an accountability system. There are consequences for changes in the genome. Populations learn. Even plants. Even bacteria.

    This kind of system can account for invention and creativity, which are far more complex and interesting than mere decision making.

  20. Big Blue exhibits learning, albeit a very rudimentary kind compared to human brains.

    Personally, my favorite example is predator offspring learning to hunt. Nearly all predators on this planet have to learn successful strategies for hunting. Such learning requires the perception of the variables that make some prey “easier” or “more difficult” targets, comparative division of “prey” vs “not prey” and division between “likely success” and “long shot”, memory of previous successes and failures and the comparative division between those situations, and a feedback loop during the chase and completion to hone the process. Hawks in particular have an amazing learning curve for hunting. So do big cats.

  21. Simple example. You go to a grocery store. Decision making implies choosing between packages. Creativity implies asking what can I make. This is where WJM’s version of free will becomes interesting. There are people who do not want to spend the effort creating. Mere choosing is easier.

    But much of our lives is spent winging it rather than clicking choice buttons. A system of thought that does not address the process of invention is pretty lame.

  22. This has been an interesting discussion. Perhaps a lot of the controversy here centers around what is meant by “computer”. As Lizzie allowed in the OP, if by “computer” we mean something like the laptop on your desk, then our brains are quite different in their architecture.

    But I would say I’m a materialist who understands the human body to be a computer. I say “body” rather than “brain”, because the brain isn’t a functioning computing system without being integrated into a body. Just like a CPU or some RAM chips are important subsystems of my MacBook, but are not “computers” in and of themselves, the brain needs the input and outputs of the integrated body to compute.

    The distinction you describe between computers (“logic gates”) and brains (“stimuli assessors”) is a distinction without a difference in this context, as I see it. The way the human brain/body works can be virtualized such that a robot-like computer would have a very similar runtime architecture to a human, and would be much less like the MacBooks and ThinkPads we are familiar with as “computers” in the colloquial sense.

    If we understand “computer” as a system that has deterministic executable operations (“code”) operating against data, both (fixed) internal data and inbound new stimuli from outside the system, producing algorithmic responses and output, then I am a computer in as full a sense of that term as my MacBook.

    If you are stuck on the hardware distinction, don’t be. The “hardware” can be virtualized in software, so that synaptic re-mapping and other forms of adaptive configurations happen in software, but have the same logical effect as the synaptic changes in humans.
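
    To make that concrete, here is a minimal sketch (Python, with hypothetical names; a crude Hebbian-style rule standing in for real synaptic dynamics) of what “synaptic re-mapping in software” could look like – the “wiring” is just data, so it can be changed on every cycle:

```python
import random

class Synapse:
    """A software 'synapse': just a weight that can be re-mapped at runtime."""
    def __init__(self, weight=None):
        self.weight = weight if weight is not None else random.uniform(-1.0, 1.0)

def hebbian_update(synapse, pre_activity, post_activity, rate=0.01):
    """Toy Hebbian-style plasticity: units that fire together strengthen their link.
    The adaptive 'configuration' lives entirely in software state."""
    synapse.weight += rate * pre_activity * post_activity

# Toy usage: repeated co-activation re-maps the connection strength.
s = Synapse(0.0)
for _ in range(100):
    hebbian_update(s, pre_activity=1.0, post_activity=0.8)
print(round(s.weight, 3))  # the weight has drifted upward: plasticity as data, not hardware
```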

    Anyway, I’m one materialist quite willing to go beyond “human brains are LIKE computers”, to “humans ARE computers”. I’m not aware of any objection that is defensible in terms of computing concepts (algorithms operating on data). I’m actually more interested in fellow materialists’ ideas of how this claim breaks down than in contending with ID/theistic superstitions about consciousness and freewill as magical dynamics of some supernatural domain.

  23. I’d say it (temporarily) breaks down because we haven’t a clue how to construct a brain-like computer. Certainly nothing that has the architectural economy of a brain.

    I think it’s a problem of structure. We know a lot about chemistry, but we do not know how to make a simple chemical replicator from first principles. And we do not know how to make a simple brain from first principles. In both cases we have examples to copy, but we do not know how to build from the ground up.

    Just my opinion, but I think the only way to build such structures from the ground up is through evolution.

  24. eigenstate,

    I’m in agreement with eigenstate I think. But I’d like to know why, at a material level, it is that a scene I/Computer consider to be “beautiful” may be considered “not-beautiful” by You/Computer.

    Would cognitive scientists with their wondrous scanners detect differences in brain activity between two (or any other number of) people looking at the same scene if they experienced different aesthetic responses?

    If so, what might have conditioned these responses? Is there a near-infinite regress of ever more distal causes at some point in which the dualist/supernaturalist/whatever can point and shout “There! That’s where an immaterial something-or-other is operating! There’s a soul in there”?

  25. I disagree Eigenstate. I understand the point you’re trying to make, but the disagreement I have lies in the fact that AI research shows that simple decision gating of even the most advanced computer systems does not allow for the dynamicism, even in mistake making, found in brains. In many ways, the mistakes that human brains make/develop give some real insights into the differences between computers and brains and why the human brain (in particular) is so dynamic in its learning.

    For instance, humans and some other animals have the ability to apply the understanding gained in one area to a completely separate area that they have never been exposed to. Computer structures do not allow for this, and one hypothesis is that this type of learning is actually the product of a neural pathway reuse error (or set of errors) that computers can’t mimic. Again, one of the things that separates brains and computers is that brains actually process the stimuli formed from processing. This in turn is processed and so on (though some is filtered out; situations vary). As such, the very act of processing a given stimulus can create a gestalt of several stimuli processings. Computers just don’t have that capability by their very architecture. They would be horrible program running tools if they did.

    This isn’t to say that at some point computer architecture design won’t get there, but what we have now isn’t it. Part of the problem is that we currently don’t have a way to create a system that creates its own neural pathways and that can dynamically allow those pathways to adapt to different processing needs/conditions/stimuli. And even that is just the tip of the iceberg (so to speak).

  26. petrushka,
    I don’t see anything to disagree with in what you wrote. But I wasn’t thinking that “build it from first principles” was the test of what is or is not a computing system. Maybe I’m missing an aspect of the question at hand, here?

    Here’s a test I think gets at the heart of this: your mission is to build a “virtual brain” — or, per my objection to brain-as-computer, a “virtual human”. If you have exquisite manufacturing technologies and unlimited funds, where do you run into architectural/structural/conceptual problems in your project?

    The challenge of building such a massively parallel synaptic mesh to approximate human wiring is daunting, for example. But it’s a scaling, and ultimately, a cost issue, rather than a conceptual barrier; we have virtual architectures that mirror human neuronal configurations and dynamics. They are just slow and expensive to deploy and manage by the billions on traditional computing hardware platforms.

    I can’t come up with, and haven’t seen, an architectural or structural barrier. The objections I’ve seen, when I investigate, are scaling issues, or limitations of manufacturing technologies, etc.

    Your point on evolution is well taken. Even in a “silicon format”, I think the only plausible strategy in front of us is neural networks that evolve. And by that I don’t mean just adapt in a local, learning sense, but neuronal layouts that evolve over iterations of the platform (i.e. reproductions), and which converge on optimal initial wiring configs.

  27. Robin,

    I disagree Eigenstate. I understand the point you’re trying to make, but the disagreement I have lies in the fact that AI research shows that simple decision gating of even the most advanced computer systems does not allow for the dynamicism, even in mistake making, found in brains. In many ways, the mistakes that human brains make/develop give some real insights into the differences between computers and brains and why the human brain (in particular) is so dynamic in its learning.

    I may be just unfamiliar with the knowledge/observations you are referring to. Not demanding a cite, but rather interested if you have an example handy which you think is representative of this problem?

    I will stipulate up front that no computer system built by humans anywhere has the kind of parallelism and dynamicism that would/will be required for even a very crude “virtualized human”. But as I said to Petrushka, this is a limitation of scale and manufacturing complexity, as far as I can see. If I had a cluster of CPUs and the appropriate RAM and disk and input/output sensors such that I could approximate the “human architecture”, I can’t come up with any obstacle to that being just as dynamic as a human, in principle.

    Can you?

    Dynamicism, as you use it here, is a function of massive amounts of parallel hardware and a software mesh that integrates that with real-time input and output functions that we can as of yet only dream of (see IBM’s “Blue Brain” project for an exciting, if humble start down that path…).

    For instance, humans and some other animals have the ability to apply the understanding gained in one area to a completely separate area that they have never been exposed to. Computer structures do not allow for this, and one hypothesis is that this type of learning is actually the product of a neural pathway reuse error (or set of errors) that computers can’t mimic. Again, one of the things that separates brains and computers is that brains actually process the stimuli formed from processing. This in turn is processed and so on (though some is filtered out; situations vary). As such, the very act of processing a given stimulus can create a gestalt of several stimuli processings. Computers just don’t have that capability by their very architecture. They would be horrible program running tools if they did.

    I think you are applying top-down “classical software” modes here, which I agree are problematic for this kind of task. But there’s no architectural barrier to implementing the same topology and dynamics that the human brain+body does, so far as I can see. For example, neural pathway reuse error is a “feature” we can build into the system (and features like this have been implemented). If it furthers our design goals, then, as programmers often quip, “It’s a feature, not a bug”.

    Similarly, we can introduce stochastic inputs and randomizing factors that degrade the “determinism” of the outputs, and produce variation and exploration effects that work like variation does in biological evolution.
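
    As a toy illustration of that (a sketch only; the names and the mustard/catsup example are hypothetical, not drawn from any particular system): an otherwise deterministic chooser whose outputs are “degraded” by injected noise:

```python
import random

def pick_action(scores, noise=0.3):
    """Deterministic argmax choice, perturbed by a stochastic input.
    With noise=0 the same scores always yield the same pick; with noise > 0
    the chooser occasionally explores lower-scoring options."""
    perturbed = {action: s + random.gauss(0.0, noise) for action, s in scores.items()}
    return max(perturbed, key=perturbed.get)

scores = {"mustard": 0.52, "catsup": 0.50}  # nearly tied preferences
picks = [pick_action(scores) for _ in range(1000)]
print(picks.count("mustard"), picks.count("catsup"))  # both get chosen: variation, not strict repetition
```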

    This isn’t to say that at some point computer architecture design won’t get there, but what we have now isn’t it. Part of the problem is that we currently don’t have a way to create a system that creates its own neural pathways and that can dynamically allow those pathways to adapt to different processing needs/conditions/stimuli. And even that is just the tip of the iceberg (so to speak).

    I don’t dispute any of that. But my focus is on a clear understanding of the nature and characteristics of humans-as-computers. If we are understanding computers as “systems that compute”, algorithms operating on data in a given hardware environment, then I remain unable to see any basis for disagreeing with “humans are computing systems”. They are natural, evolved, organic computing systems, yes, and thus much different from a Dell desktop, but computing systems in the fullest sense all the same.

    That we can’t manufacture facsimiles from carbon or silicon at this point, or even in the foreseeable future, does not prevent us from understanding humans as they are as very sophisticated organic computing systems. Or so I say. 😉

  28. “Accountability is simply the application of consequences.”

    As usual, a discussion with a materialist ends in a change of the meaning of words.

  29. Blas:
    “Accountability is simply the application of consequences.”
    As usual a discussion with a materialist ends in a change of the meaning of words.

    Well yes, I tend to gravitate toward operational definitions. It’s necessary if you intend to test a proposition. How else do you break out of solipsism?

    You can feel free. You can feel really, really free. You can feel genuinely free, not pretending.

    But if you want to generate a proposition in such a way that it can be independently evaluated, it needs to be operational.

  30. You didn’t ask for it, but here’s an example of what I’m talking about:

    http://www.gamasutra.com/view/feature/3947/intelligent_mistakes_how_to_.php?print=1

    Not, per se, the topic of the article, but rather the very common faulty thinking that goes into anthropomorphizing computer systems in general. People tend to think that computers “cheat” in many games (I yell at Civ IV all the time for its damn cheating!), but computers can’t cheat, at least not in the sense that the computer is breaking a rule or making up its own numbers so that it can hit, defend, or avoid better than it should be allowed. That we assign such characteristics to computer systems or that we assign anthropomorphic qualities to animals or plants or even cars is an example of the type of mistakes that our human brains make that give real insight into the differences between brains and computer systems. Even simple things like playing the game Telephone or playing Eye Witness and counting the digression and degradation of memory vs what the mind makes up is a good illustration. Computers can’t make things up. They don’t conceptualize experience through gestalt because their architecture and programming is specifically designed for the opposite – literal reflection and decision making based on complete data.

    Think about it for a moment: programming is antithetical to nuance or abstract conceptual error making. And I would argue that it’s virtually impossible to have accurate abstract conceptual capability without the ability to be abstractly wrong.

  31. I don’t dispute any of that. But my focus is on a clear understanding of the nature and characteristics of humans-as-computers. If we are understanding computers as “systems that compute”, algorithms operating on data in a given hardware environment, then I remain unable to see any basis for disagreeing with “humans are computing systems”. They are natural, evolved, organic computing systems, yes, and thus much different from a Dell desktop, but computing systems in the fullest sense all the same.

    That we can’t manufacture facsimiles from carbon or silicon at this point, or even in the foreseeable future, does not prevent us from understanding humans as they are as very sophisticated organic computing systems. Or so I say. 😉

    Yes. Again, I get this point you’re making. Perhaps I shouldn’t have said I disagree so much as I think the analogy goes only so far. But yes, in principle, and in general, I can get on board with the idea that humans are computer systems. It’s just that we are nothing like any computer system we humans currently are capable of making.

  32. GAs can be wrong. At the moment they are too simple to “think,” but they are starting down that path. Neural networks are possibly a step further down the road.

    Lots of computer systems learn and have the ability to be wrong.

  33. Robin:
    You didn’t ask for it, but here’s an example of what I’m talking about:

    http://www.gamasutra.com/view/feature/3947/intelligent_mistakes_how_to_.php?print=1

    Interesting page, thanks!

    That we assign such characteristics to computer systems or that we assign anthropomorphic qualities to animals or plants or even cars is an example of the type of mistakes that our human brains make that give real insight into the differences between brains and computer systems. Even simple things like playing the game Telephone or playing Eye Witness and counting the digression and degradation of memory vs what the mind makes up is a good illustration.

    I understand your point, and agree with you that the differences are stark; the “human computer” is a (delicate) balance between stability and plasticity. Too “stable”, and the system cannot adapt sufficiently. Too “plastic” and the system cannot retain and capitalize on the adaptations it makes.

    Man-made computing systems are obsessed with stability. In fact, excepting the parts of computing that specifically design in plasticity and self-changing adaptation as a means to achieve the benefits we see in biology, “plasticity” in any form is a runtime exception, and very often a fatal one. That’s not right or wrong, that’s just a reflection of the goals we’ve traditionally deployed for our computing resources — the spreadsheet will calculate the input variables in these cells and produce the result in these cells, the same way, every single time. Adaptation is failure in the vast majority of man-made computing projects, because the targets are rigid, and pre-specified.

    That’s all well and good, but the same computing resources that obsess over stability can also be deployed in an “organic” fashion, by which I mean utilizing neural networks that incorporate plasticity and dynamic rewiring and re-connections as an architecture that also allows for (sometimes drastic) self-reconfiguration for adaptation and learning, and also as the basis for stochastic variation, harnessing random inputs to search the nearby phase space of outputs and consequences.

    Computers can’t make things up. They don’t conceptualize experience through gestalt because their architecture and programming is specifically designed for the opposite – literal reflection and decision making based on complete data.

    A spreadsheet program, or a traditional OS, would not, for sure, I agree. Software systems based on neural networks with neuroplasticity and back-propagation, though, are not only capable of “making things up” when they also harness stochastic inputs, they can’t do anything else (which, as above, can be a problem, too). If you dispute this, I think a small examination of an example can make this clear. It’s at points disturbing and humbling when you work with these systems, because it’s hard not to occasionally feel that there is a “ghost in the machine”, in what is unquestionably a machine. It’s the same superstition, though, I suggest, as thinking that the human ‘self’ is a kind of homunculus looking out the window of our eyes. Sophisticated and self-referential feedback loops produce the basis for abstraction, reflection, “fuzzy thinking” and non-predictability that can provide a strong sense of “intelligence” in some intuitive-but-superstitious/supernatural sense.

    Think about it for a moment: programming is antithetical to nuance or abstract conceptual error making. And I would argue that it’s virtually impossible to have accurate abstract conceptual capability without the ability to be abstractly wrong.

    I’m not sure what you mean by “abstractly wrong”, but dissonance, randomized inputs/outputs, fuzzy logic, Bayesian clustering and many other forms of “nuance” are all practical expressions of computing systems, just not the “stability-maxed” systems that we typically favor for predictable tools in our laptops and phones, etc. Errors are fiendishly hard to avoid in development, and they’re not so difficult to put in. 😉

    The trick is modeling out “error” generation and modifications in ways that make those variances useful. In even basic neural network apps now, “errors” are the engine of progress, the means of variation that makes the machine learning and adaptation work. If you start with some discrete initialization state, you “introduce errors”, or perturbations in the processing paths, and see what you get for results. In several of the apps I’ve worked on, the “error” introduction occurs in a very tight, low level loop. This is how the system explores new pathways and connections that may produce better fitness scores.

    These aren’t really “errors” in the normative sense, right? These are variations that explore nearby parts of a performance landscape.
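
    A minimal sketch of that kind of tight perturb-and-test loop (the one-dimensional fitness function is just a stand-in for a performance landscape, not anything from the systems I mentioned):

```python
import random

def fitness(x):
    """Stand-in performance landscape with a single peak at x = 2.0."""
    return -(x - 2.0) ** 2

x = 0.0  # some discrete initialization state
for _ in range(10000):
    candidate = x + random.gauss(0.0, 0.1)  # "introduce an error": a small perturbation
    if fitness(candidate) >= fitness(x):    # keep variations that land on higher ground
        x = candidate
print(round(x, 3))  # ends up near 2.0: the "errors" did the exploring
```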


  34. Do you mean that GAs can come up with a poor solution or that they can be wrong wrong? I’m not aware of any situations where GAs are used to find specific answers and they get the wrong answers, but I’m certainly no expert.

    Be that as it may, I’m not really referring to being wrong so much as having architecture that is faulty as a feature (to use Eigenstate’s reference). One of the products of human gestalt concept generation is that it has a tendency to be inaccurate. This is both a problem and a feature – it does mean that sometimes our memories are incorrect, but it also allows for unique conceptual frameworks.

    Logical fallacies are a good example of what I’m talking about. Everyone, at one time or another, engages in some form of logical fallacy. It’s so easy for us to cherry pick even when we don’t mean to, or engage in the Fallacy of the General Rule even when we really know the statistics if we just thought about it for a moment. I think having competing brain processes contributes to this faultiness, yet that faultiness allows us to occasionally “think outside the box” (or engage in cliches! :)). Computers, as far as I know, just can’t do those sorts of things because it’s not a useful design approach given the limited computer capability at this point.

    Eigenstate may well be right though that this is merely a limit of resources and not a limit of theoretical possibility.

  35. OK, so much for that controversy! I agree with you, fully, that modern man-made computing systems are drastically different in their hardware and runtime architectures from human-computers. I also agree that we are not currently capable of building or producing man-made machines, with or without “organically-informed” hardware and runtime architectures, that can attempt to even crudely approximate human cognition. I’ll go a third step and say that such capabilities are sadly a lot further out in the future than we once thought, as the complexity and scale of human-computing becomes more discernible to us.

  36. Robin:
    Do you mean that GAs can come up with a poor solution or that they can be wrong wrong? I’m not aware of any situations where GAs are used to find specific answers and they get the wrong answers, but I’m certainly no expert.

    Evolutionary algorithms are enterprises in generating 99.99% wrong answers, most of them laughably wrong, as the ‘exhaust’ one must generate through enormous numbers of generations and variations that (hopefully) converge on a good answer.

    Just to cite a personal example, I’ve been working in my free time for years now (!) on a software program that uses EA to play “Go” — see here if you are unaware of the world’s greatest game. Writing programs that can even be moderately competitive with a strong intermediate go player (3kyu-1dan range) has been elusive for decades, and is only now becoming available. Anyway, in my deployment of EA, I’ve at times been quite excited by “moments of brilliance” and what seems almost creepy in its “thinkingness”, but inevitably, the learnings that it has acquired are not sustainable or adaptable enough; it plays brilliantly for several moves, and then makes a completely stupid, game-losing move.

    I won’t digress into that more, here, but the high-level goal is well defined per the rules of the game. My program, like many others that have deployed EA toward this end (and which are, no doubt, much more sophisticated implementations than my own), is regularly “wrong”, and badly wrong.

    This is really easy to understand, though, when you think about how machine (and I say, also, human) learning works: “wrong” decisions and “failures” are just as crucial as “right” decisions and “successes” in deriving knowledge. The program needs to experience both, and incorporate both in order to converge toward better fitness. If my program doesn’t know how to make errors and identify them as such, it can’t improve.
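
    A stripped-down illustration of that generate-evaluate-select dynamic (a toy bit-string objective standing in for anything as rich as Go; the names are hypothetical):

```python
import random

def fitness(genome):
    """Toy objective: how close the bit-string is to all ones (a stand-in for 'winning')."""
    return sum(genome)

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability -- the source of (mostly wrong) variation."""
    return [1 - bit if random.random() < rate else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for generation in range(200):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:10]  # most candidates each generation are discarded as 'wrong'
    population = [mutate(random.choice(parents)) for _ in range(50)]
print(fitness(max(population, key=fitness)))  # (hopefully) converges toward 20
```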

    Be that as it may, I’m not really referring to being wrong so much as having architecture that is faulty as a feature (to use Eigenstate’s reference). One of the products of human gestalt concept generation is that it has a tendency to be inaccurate. This is both a problem and a feature – it does mean that sometimes our memories are incorrect, but it also allows for unique conceptual frameworks.

    Right. One must be very careful about how one defines “error” or “failure”, here. Putting noise into a system — demonstrably “error” in a first order sense — is the catalyst for variation and adaptation and thus progress. Completely stable systems are idempotent. Completely chaotic systems are just thrash. The “magic” happens in mixing in some stochastic dynamics with feedback loops. Being “inaccurate” or “wrong” in this way is how computing systems get beyond stasis.

    Logical fallacies are a good example of what I’m talking about. Everyone, at one time or another, engages in some form of logical fallacy. It’s so easy for us to cherry pick even when we don’t mean to, or engage in the Fallacy of the General Rule even when we really know the statistics if we just thought about it for a moment. I think having competing brain processes contributes to this faultiness, yet that faultiness allows us to occasionally “think outside the box” (or engage in cliches! :)). Computers, as far as I know, just can’t do those sorts of things because it’s not a useful design approach given the limited computer capability at this point.

    I get you. I’ll just close this part by saying that such n-way models for competing goals (like humans have, always the competition between pride, vanity, empathy, altruism, laziness, love, etc.) are hard to find in modern computing — they are more problems than solutions, given our goals. But that does not foreclose on the prospects for these models. Given what you’ve said above, I think we are agreed on this.

    Eigenstate may well be right though that this is merely a limit of resources and not a limit of theoretical possibility.

    I’m quite interested in this topic, and have been for a long time, and have spent several man-years now working on EA-powered and learning-related software systems. I ask a lot, but do not get much by way of “barriers in principle” to this question. The closest I get are naked intuitions (“It just can’t be”), and more sophisticated expressions of that same intuition in the form of Searle-esque “Chinese Room” arguments and appeals to qualia, etc.

    There may be a theoretical limit, here. But I’m not aware of one.

  37. I’m not sure Go has a computable answer, so a Go program just has to be better than its competitor. Much of life is like that.

  38. Having browsed the last 4 or so posts to only the most superficial degree I still would like to offer a definition of free will I can’t recall anyone having proffered before.

    Free will is what happens when memory interacts with environment. It is a process not an object.

    Is there any utility in this definition?

  39. It is pretty close to the mark as a condition for free will to exist. The nervous systems of sentient organisms require external stimuli in order to develop proper responses to their environments.

    Animals deprived of stimulation do not develop normally. Animals placed in stimulating environments from an early age develop more complex brains and are able to adapt more easily as adults. The same is true for humans.

    Note, for example, that people who live only inside their own heads and do not interact with the natural world tend to be extremely rigid in their “thinking.” After a while they can no longer even will themselves to learn.

    The same patterns can be seen in closed societies that enforce adherence to dictated beliefs and behavior. People in such circumstances develop rigid patterns of thinking and behavior, and develop a paranoid perspective on people and societies outside their own.

    The danger of a prior commitment to a rigid sectarian dogma, out of fear of what will happen in some “afterlife,” lies at the core of ID/creationism’s stubborn resistance to external evidence. The ID/creationist movement now pretty much wallows in pseudo-philosophy and deductive reasoning from preconceived premises in order to reach all its conclusions.

    The model of investigation and testing provided by science is a model that best suits intellectual development. Ideologies based on some sort of sectarian, socio/political dogma are the antithesis of learning by interacting with the environment.

  40. Aardvark:
    Having browsed the last 4 or so posts to only the most superficial degree I still would like to offer a definition of free will I can’t recall anyone having proffered before.

    Free will is what happens when memory interacts with environment. It is a process not an object.

    Is there any utility in this definition?

    You are spot on to stress that it is a process not an object. It is only in philosophical discussions that it gets reified. In every day conversation we talk about whether specific acts were done out of free will as opposed to being coerced or acting unconsciously or involuntarily. We don’t concern ourselves with whether we have something called “free will” or not.

    Memory can interact with the environment to produce actions which are subconscious or involuntary so I am not so sure about that part of the definition.

  41. It looks like we have some agreement that humans are biological computing machines, just beyond our current capacity to build or even meaningfully model.

    Liz holds that “our decision-making is chaotic and non-linear, and dependent on complex feedback loops”. I would assume that others here (if not all, some) hold this position – that humans (and particularly their “minds” and decision-making processes) are chaotic, non-linear physical computations that cannot be formally predicted but are, nonetheless, determined by the comprehensive state that precedes what comes next (if an identical run-up to a decision, then the same decision, every time).

    Earlier, Liz said that I had not yet convinced her otherwise.

    But, what does “convince” mean, under the materialist paradigm as described above? If ideas, beliefs, convictions, thoughts, etc. are arrived at through a non-linear, chaotic, organic/physical process, then it certainly can’t be said that it is only a step by step, linear deductive, inductive or abductive logical analysis which results in a conviction. It cannot necessarily be a linear, step-by-step analysis of the evidence – that would necessarily be predictable.

    As they have already admitted, such a process towards a state of mental conviction is a “chaotic” system, and thus unpredictable by the kinds of predictive, linear methods that logic has to offer. So, how does one convince a chaotic system of its error, if it is computing an error (and we know human computers generate errors all the time – very convincing, compelling errors people will fight and die for)?

    How can I convince Liz if convincing Liz is not a matter of a linear argument concerning evidence and/or logical argument? Since her system is unpredictable and chaotic, who knows what added ingredient will have the desired effect of changing her conviction in this matter? Perhaps a combination of music and a certain meal will have a butterfly effect and cascade down into Liz becoming an Evangelical?

    Or, perhaps an event like the death of a child, as with Darwin, can alter brain chemistry to the point of accepting as true something one wouldn’t even countenance before? Or a chain of events that seems impossible could affect the computation in such a way as to make one believe in a God of some sort?

    Is there any relief to be found from the chaotic, unpredictable nature of these beliefs? No. Such relief would require a map of predictability, a system of some sort inside chaotic patterning and arranging, or a locus of will that is not itself the product of such a system and that can be used to arbitrate “incorrect” conclusions from correct ones.

    Unfortunately, in the materialist paradigm, the arbiter itself is the product of the same system that produces the error, and can itself be arranged by the system to not see the error as error, but as the correct result, and the only thing that can change the arbiter module is something that we cannot even predict will have that effect in the first place – such as a sound argument or good evidence to the contrary. It may be a bit of pizza or a butterfly flapping its wings in Brazil.

    IOW, under this materialist paradigm, it is perfectly reasonable to say that a butterfly flapping its wings in Brazil (or a random secretion somewhere in the body) culminated in the final ingredient necessary for Liz – or anyone else here – to compute that their materialism is a sound metaphysic and that there is no God, their system producing the sensation that it is sound reasoning and a logical conclusion or position to hold.

    Also, the arguments they present are the result of a chaotic, unpredictable system that provides both the words said and the sensation that they represent correct values and meaningful arguments. The system could as easily produce, as I have said before, a person that barks like a dog and at the same time believes that it has said something very wise and profound. There is no arbiter but the system itself, which is known to produce deep and sustaining errors of thought, complex delusions and convictions that are unsupportable and unreasonable.

    So, how to convince Liz, when there is no way to predict what sort of combinations of stimuli or ingested material would produce such a conviction? In the materialist world, all we are are monkeys flinging feces at the wall, and whatever happens to “stick” triggers a change in conviction.

  42. What would it mean, William, to convince someone who chooses what to believe? Why should anyone care what you believe?

    Rational discussion does not depend on the internal states of those doing the discussing. It stands on its own.

    Put another way, the object of internet discussions is to make the best possible case for a position, not to convince anyone.

    It would be mildly insane to expect to convince anyone.

  43. petrushka:

    Rational discussion does not depend on the internal states of those doing the discussing. It stands on its own.

    If, as you say, “rational discussion does not depend on the internal states of those doing the discussing”, but rather “stands on its own”, then under materialism, where is it standing, if not in the internal states of those doing the discussing? How is it being interpreted and arbitrated, and by what – if not the internal states of those engaging in it?

  44. It’s a formalism, like a mathematical argument. Not quite as rigorous, but still adhering to rules of logic. The discussion stands whether the disputants are dead or alive, machines or people.

  45. William J. Murray: It looks like we have some agreement that humans are biological computing machines, just beyond our current capacity to build or even meaningfully model.

    No, there is no such agreement.

    There’s perhaps a consensus, but no agreement.

    We are not computers.

    Liz holds that “our decision-making is chaotic and non-linear, and dependent on complex feedback loops”. I would assume that others here (if not all, some) hold this position – that humans (and particularly their “minds” and decision-making processes) are chaotic, non-linear physical computations that cannot be formally predicted but are, nonetheless, determined by the comprehensive state that precedes what comes next (if an identical run-up to a decision, then the same decision, every time).

    I don’t agree with that. But let me unpack it.

    Liz holds that “our decision-making is chaotic and non-linear, and dependent on complex feedback loops”.
    The physical processes that enable decision making are chaotic and depend on complex feedback loops. But it does not follow that the decision making itself is chaotic. A lot of our decision making is systematic.

    I left out the “non-linear” because I don’t really know what that is supposed to mean as applied to decision making.

    … and particularly, their “minds” …

    I would leave minds out of the discussion. As I see it, “mind” is a metaphor. It is hard to see how terms such as “chaotic” apply to a metaphor. I seem to recall that Dennett talks of the mind as a Joycean Computer. Here the “Joycean” is an allusion to the “stream of consciousness” in the writings of James Joyce. That seems reasonably apt for a description of a metaphor. So I’ll allow that we might be metaphoric computers, but not physical computers.

    … chaotic, non-linear physical computations that cannot be formally predicted …

    Definitely not computations. I’ll agree that there’s a lot that cannot be “formally predicted”.

    … but are, nonetheless, determined by the comprehensive state that precedes what comes next …

    This whole idea of a sequence of states is nonsense. Yes, I do understand that philosophers have been talking that way since forever. But it is nonsense nonetheless.

  46. petrushka:
    It’s a formalism, like a mathematical argument. Not quite as rigorous, but still adhering to rules of logic. The discussion stands whether the disputants are dead or alive, machines or people.

    What is your conviction about what formalisms are, and what logic is, and how they are used, and “where they stand” generated by, if not your internal states? Does logic exist independent of organic or inorganic computational systems? Does the concept of a formalism have self-existence? Are “you” something other than the chaotic, unpredictable system described by the other materialists in this thread?

    Under materialism, everything you think and believe, Petrushka – every concept, induction and deduction – is produced by what has been described as an unpredictable and chaotic system that we already know is prone to generate deep, systemic and unrecognizable errors (unrecognizable by the system itself). Even the act of “recognizing” one’s error could itself be an error!

    Do you disagree that the human computational system is unpredictable? That it is chaotic? Do you disagree that it is possible that butterfly effects such as I have described are possible?

    Or do you think that somehow logic and science are immune to the influence of chaotic influences and unpredictable conclusions?

  47. Words like random, stochastic, chaotic and the like have good formal definitions and usages. They are not equivalent to “anything goes”, or “I can believe stuff because it makes me happy”.

    I do not wish to make a straw man of your position, and I would appreciate it if you would not make a straw man of ours.

    Living things differ from billiard balls because they learn. Biological evolution is a learning process. The genomes of populations change over time due to feedback.

    The behavior of neurons changes over time due to feedback.

    This seems to be a difficult concept for some people because it appears to reverse the flow of causation. Things change as a result of consequences rather than antecedents. It’s a property of life. Maybe it’s an operational definition of life.

  48. I don’t know if RDFish is also looking at this thread – but if you are looking I want to congratulate you on your remarkable clarity, patience and logic. I am really enjoying your dialogue with Gpuccio.
