Do Atheists Exist?

This post is to move a discussion from Sandbox(4) at Entropy’s request.

Over on the Sandbox(4) thread, fifthmonarchyman made two statements that I disagree with:

“I’ve argued repeatedly that humans are hardwired to believe in God.”

“Everyone knows that God exists….”

As my handle indicates, I prefer to lurk.  The novelty of being told that I don’t exist overcame my good sense, so I joined the conversation.

For the record, I am what is called a weak atheist or negative atheist.  The Wikipedia page describes my position reasonably well:

Negative atheism, also called weak atheism and soft atheism, is any type of atheism where a person does not believe in the existence of any deities but does not explicitly assert that there are none. Positive atheism, also called strong atheism and hard atheism, is the form of atheism that additionally asserts that no deities exist.

I do exist, so fifthmonarchyman’s claims are disproved.  For some reason he doesn’t agree, hence this thread.

Added In Edit by Alan Fox 16.48 CET 11th January, 2018

This thread is designated as an extension of Noyau. This means only basic rules apply. The “good faith” rule and the “accusations of dishonesty” rule do not apply in this thread.


  1. fifthmonarchyman: Interesting. This thesis seems to trade on a reductionist “understanding” of understand.

    I see it as based on Hamming’s own experience.

    I would argue that there are lots of things that aren’t programmable, including human behavior and evolution.

    And we don’t fully understand those things.

  2. Entropy:

    Plantinga’s argument becomes absurd the moment he’s talking about truth as if it’s some kind of property, rather than the label that it is.

    You don’t think there can be true beliefs? False beliefs? Veridical perceptions? Non-veridical perceptions? Accurate representations? Inaccurate representations?

  3. Neil Rickert:

    I would argue that there are lots of things that aren’t programmable, including human behavior and evolution.

    And we don’t fully understand those things.

    Then what’s the point? Sure, if you can program it you have a pretty good understanding of it, but who would doubt that?

    The fact that we don’t fully understand evolution and human behavior means that we can’t program these, but it certainly doesn’t mean that we don’t understand them reasonably well.

    Glen Davidson

  4. keiths,

    I don’t know what “semantic content that isn’t propositional or language-based” would look like — unless you’re assuming that mental representations will always count as semantic contents?

    There’s a worry I have here, and I want to put it carefully because this isn’t my considered view (yet), but it’s a concern that I have. And it has to do with this question: “what do we mean by calling something a representation?” This is the main issue I’m working on right now in my research, and there is a worry about diminishing returns as hairs are increasingly split. But there is an important issue here.

    Think about tree rings. There’s a causal relationship between seasons and tree growth, so that the tree rings are a reliable indicator of past seasons. Do the tree rings “represent” the seasons? In one sense, sure. But they function as representations of the season to us — they don’t function as representations to the tree. In the absence of representation-interpreting creatures such as ourselves, the relationship between tree rings and seasons is just one of causally grounded covariance between seasons and rings. But it’s not a representation. (Or is it?)

    The situation seems to be different when I ask a question, or give an order, or make a claim: my sentences have a sense and a reference. And while it’s crucial to not let intentionality become something mysterious, weird, and more-than-natural (as philosophers from Plato through Descartes and Kant to Frege have tended to do), it’s also crucial to understand that the place of intentionality in the natural world is something that needs to be explained very carefully and precisely.

    Here’s the question, then: can we explain content in terms of covariance?

    The idea of naturalizing intentionality seems to rely on the idea that we can explain content in terms of covariance. That’s roughly the project of people like Fred Dretske and Ruth Millikan. But it’s not clear if those projects are ultimately successful. The dilemma that they face is that these projects tend to either project too much content into covariance, so that we never get from covariance to content — or they just start off with covariance and never arrive at genuine semantic content. (Millikan has the first problem, Dretske has the second.)

    Is the cat’s “representation” of the mouse just a covariance — as the mouse moves about in space and time, the underlying neurophysiological states in the cat’s brain reliably covary with the mouse’s location? So that the relation between the mouse and the cat is just a much more complicated version of the relation between the season and the tree rings?

    If that were right, then the cat doesn’t represent the mouse any more than the rings represent the seasons — that is, we can interpret these patterns as representations, but the cat doesn’t represent the mouse to herself.

    I guess I’m still trying to figure out these closely related issues: what makes something a representation? Are all representations semantic content? What is semantic content, anyway? What’s the difference between concepts and non-conceptual representations (if there are any)? Can one have beliefs and desires without concepts? Can one have beliefs and desires without semantic content?

  5. Kantian Naturalist: what makes something a representation?

    Language? The right hemisphere, though it is the major processor of new information and provider of new ideas, has to communicate via the left hemisphere and Broca’s area. We think about things and names for things in separate areas of the brain.

  6. Alan Fox: Language? The right hemisphere, though it is the major processor of new information and provider of new ideas, has to communicate via the left hemisphere and Broca’s area. We think about things and names for things in separate areas of the brain.

    That would be the case for linguistic representations, but keiths has suggested that non-linguistic animals have mental representations. I’m fine with that suggestion, but I want to really nail down what it means!

  7. keiths:
    You don’t think there can be true beliefs? False beliefs? Veridical perceptions? Non-veridical perceptions? Accurate representations? Inaccurate representations?

    “Beliefs” can be true or false depending on whether “belief” is thought of as a “syntactic proposition.” Perceptions are neither; it’s our interpretations (propositions) about those perceptions that can be true/false.

    Accuracy/inaccuracy is not the same as “true/false,” and here you’re hitting the nail on the head, as you should. Here’s where Plantinga’s bullshit starts to look like the sophistry that it is. If we talk in those terms, we could say that the accuracy of our interpretations changes with experience, and that our cognitive faculties are more about accuracy than about true/false dichotomies.

    For example, take the instinct to get away from pain. Pain is immediately unpleasant (this is a tautology, right?), so there’s no need for much cognition, except to link a painful experience to the item that produced the pain. Say, a red, cylindrical, smallish thing. Next time you see one of those, you’ll avoid it. Is your belief false? Depends on how we word it. If the proposition was “every cylindrical red smallish thingie will cause me pain,” then this might be “false.” But if we said “cylindrical red smallish thingies might cause me pain,” then it’s true. Not all of them, but I know that at least one of those thingies did.

    Suppose we observe that some animals eat a red cylindrical thing and don’t get pain. We go and check the cylindrical thing, and lo and behold, there’s a detail we didn’t take into consideration. The one that caused us pain was hairy; the one that seems to be edible doesn’t have those hairs. OK, might be good to try. You call Jack and tell him to taste the apparently edible one, and he enjoys it! You now have a more refined belief that expands your menu over the menu of beings who cannot make this further distinction.

    Do you start to understand how this true/false bullshit allows for semantic play more than for understanding how far we can trust our cognitive faculties? How, by putting together more rational scenarios, starting with the pain thing, we can start to understand how and why and when we can trust our cognitive faculties? The need for better representations? The need for more accurate data? How and why evolutionary processes could lead to better cognitive faculties, understood as those that are able to include more details, etc?

    Plantinga sets you up to miss the problems with his thinking by misusing words like “truth,” getting you to fill in the gaps, thus accepting the dichotomy as the central issue in an all-or-nothing fashion. That’s leaving aside the fact that beliefs are not genetically inherited, and thus not subject to natural selection.

    Plantinga’s bullshit is absurd from many angles, not just one. It self-implodes from many angles, not just one. I’m really never in the mood to translate nonsense into somewhat more sensical stuff to help a sophist out. If the guy makes basic epistemological mistakes, then I point them out. I don’t go ahead and buy into his bullshit.

    Plantinga’s crap should not be taken seriously. It contains basic mistakes in both philosophy and biology. That should be more of a cause of concern than of respect.

  8. KN,

    …keiths has suggested that non-linguistic animals have mental representations. I’m fine with that suggestion, but I want to really nail down what it means!

    Here’s an example: My dear departed cat Frida was blind for the last year of her life. She nevertheless managed to get around the house quite well. She had a mental map of the house in her head, and she knew where her food, water, litterbox, and bed were. She knew where my bed was and where the computer was (where she could often find me if she wanted some affection).

    That mental map was clearly a representation of the layout of the house, and it was an accurate one. It’s not hard to see how it would be advantageous in nature for an animal to be able to form an accurate mental map of its territory, and how it would be maladaptive to form inaccurate maps.

  9. Alan,

    The split brain is not unique to humans. It’s ubiquitous in vertebrates.

    That’s irrelevant to the point, which is that representations need not be linguistic.

  10. keiths:
    KN,

    Here’s an example: My dear departed cat Frida was blind for the last year of her life. She nevertheless managed to get around the house quite well. She had a mental map of the house in her head, and she knew where her food, water, litterbox, and bed were. She knew where my bed was and where the computer was (where she could often find me if she wanted some affection).

    That mental map was clearly a representation of the layout of the house, and it was an accurate one. It’s not hard to see how it would be advantageous in nature for an animal to be able to form an accurate mental map of its territory, and how it would be maladaptive to form inaccurate maps.

    I’m sorry to hear that about your cat. My cats are getting along in years and it won’t be long before they start getting sick.

    Everything you said about Frida’s mental maps seems basically correct to me. I might quibble over “accurate”, but as long as there’s a caveat “accurate enough for her to satisfy her needs and achieve her goals,” we’re in agreement.

    And we certainly agree that having map-like representations just good enough to allow the animal to satisfy its needs and achieve its goals will tend to be adaptive, just because map-like representations which aren’t good enough will be selected against.

    Still, it’s going to be a further question whether having a good-enough mental map of an environment is the same thing as having true beliefs about that environment, right? That is the issue I’m trying to raise.

  11. Kantian Naturalist: Still, it’s going to be a further question whether having a good-enough mental map of an environment is the same thing as having true beliefs about that environment, right? That’s the issue!

    It needn’t be an issue of “truth,” but could be a kind of “right” or “wrong” (but clearly non-verbal, which is possible even with ourselves).

    Non-human animals indeed appear to distinguish “right” from “wrong” in a non-propositional manner, even doing so through “counting.” For some, get beyond a “count” of around three (I’m not sure of the exact number) and they can no longer determine whether or not an entity remains in a room, since four went in and three came out (whatever the exact count). But even not very bright animals will recognize that it’s “right” that one human or one mouse still is (at least could be) in a certain enclosure when two of either species went in and only one came out.

    I really don’t get why propositional knowledge is considered so much more important than what it seems to more or less crudely echo (never mind that the crude echoes can result in elegant logical developments). The cat or whatever knows it’s “right” that a mouse is (or at least could be) out of sight behind the door, without needing to consult any sort of propositional logic.

    Glen Davidson

  12. Entropy,

    Plantinga sets you up to miss the problems with his thinking by misusing words like “truth,” getting you to fill-in-the-gaps, thus accepting the dichotomy as the central issue in an all-or-nothing fashion.

    Plantinga’s argument doesn’t rest on a narrow application of “true” and “false”, as I’ve already pointed out. Plantinga writes:

    We think our faculties much better adapted to reach the truth in some areas than others; we are good at elementary arithmetic and logic, and the perception of middle-sized objects under ordinary conditions.

    Perceptions aren’t true or false in the narrow sense, but they are certainly true or false in the sense of being more or less veridical. That’s what Plantinga is getting at: whether our cognitive and perceptual faculties are reliable in the sense of generating true beliefs, veridical perceptions, and accurate representations.

    Don’t get hung up on the terminology. It’s the ideas that matter, and a fair reading of Plantinga reveals those ideas. He isn’t hiding them or trying to play word games.

  13. Entropy,

    I’m really never in the mood to translate nonsense into somewhat more sensical stuff to help a sophist out. If the guy makes basic epistemological mistakes, then I point them out. I don’t go ahead and buy into his bullshit.

    Plantinga’s crap should not be taken seriously.

    You’re trying very hard to misunderstand Plantinga’s argument and then reject it on the basis of that misunderstanding. Don’t do that! It’s exactly what our opponents do with evolutionary theory.

    The right approach is to take the argument seriously and address it at its strongest — even making it stronger than Plantinga did, if we can — and then refute it.

  14. keiths,

    Did you read what I wrote? Did you understand it? Because you just added pain to injury. You showed Plantinga further misusing the terms, and then you translated that into a less nonsensical thing. Why do you want me to do that? I simply won’t. If you’re comfortable cleaning up the sophistry, then have a go at it.

    You still didn’t check what I wrote, and I think it’s important in showing that the reliability of our cognitive faculties is not about true/false dichotomies, and that by forgetting that dichotomy, we come to understand what we should expect from evolved cognitive faculties better than if we allow faulty philosophy and biology to guide our quest.

  15. Entropy,

    A refutation is much stronger than a dismissal. You are trying very hard to dismiss Plantinga’s argument without bothering to understand what he is actually saying.

    That doesn’t help your cause, it hurts it.

    Again:

    Don’t get hung up on the terminology. It’s the ideas that matter, and a fair reading of Plantinga reveals those ideas. He isn’t hiding them or trying to play word games.

  16. keiths:
    You’re trying very hard to misunderstand Plantinga’s argument and then reject it on the basis of that misunderstanding. Don’t do that! It’s exactly what our opponents do with evolutionary theory.

    Au contraire. I’m trying very hard to use my bullshit detector early on, rather than allowing that ass-hole to take control over the quality of my thinking.

    keiths:
    The right approach is to take the argument seriously and address it at its strongest — even making it stronger than Plantinga did, if we can — and then refute it.

    The right approach is to take it only as seriously as it deserves. I’m not in the business of helping sophists make their case. If they want me to take them seriously, they have to be serious themselves. If they give me sophistry, I point it out. That you might prefer to accept their loaded premises and translate their sophistry into something coherent is your choice. I’d rather pass. To each their own.

  17. Glen,

    I really don’t get why propositional knowledge is considered so much more important than what it seems to more or less crudely echo (never mind that the crude echoes can result in elegant logical developments). The cat or whatever knows it’s “right” that a mouse is (or at least could be) out of sight behind the door, without needing to consult any sort of propositional logic.

    Exactly. And if the cat could speak, it might say “There’s a mouse behind the door!” But that wouldn’t be because the belief was linguistic. It would be because the non-linguistic belief had been translated into a proposition, which was then stated outright.

    Propositions (and language) are not the issue here.

  18. keiths: A refutation is much stronger than a dismissal. You are trying very hard to dismiss Plantinga’s argument without bothering to understand what he is actually saying.

    I refuted. I showed how, by not being hung up on improper philosophy, we can better understand what to expect from evolved cognitive faculties than by cartooning, and misusing, both philosophy and biology.

    Didn’t you read what I wrote?

  19. Entropy,

    The right approach is to take it only as seriously as it deserves.

    You’re pretending that it doesn’t deserve to be taken seriously. Plantinga’s argument is clearly wrong, but it deserves to be addressed on its merits.

    Plenty of IDers and creationists argue that evolutionary theory doesn’t deserve to be taken seriously, and will dismiss it on that basis. Don’t be like them!

    Far better to refute than dismiss. And the refutation should be of the strongest possible argument — stronger than Plantinga’s version, if we can improve on his.

  20. keiths,

    What our opponents do with evolutionary biology is to misrepresent it. I waited until Plantinga was done to see if he would correct his, by then apparent, mistakes, but he never did. He relied on those very mistakes to make his case. Therefore I was right to think of those as foundational philosophical and biological mistakes. If the mistakes were not foundational, and part and parcel of the whole construct, then I’d have no problem with the initial misuse. But the problems carry on as a wave of bullshit that starts moving along at that very point.

  21. keiths: Far better to refute than dismiss. And the refutation should be of the strongest possible argument — stronger than Plantinga’s version, if we can improve on his.

    I refuted. Again, did you read what I wrote? In the process I showed that the bullshit doesn’t deserve much respect. What’s so wrong with that?

  22. KN,

    I’m sorry to hear that about your cat.

    Thank you. She was almost 23 when she died, and I know how lucky we both were that she had such a long and happy life, but, boy, do I miss her!

    My cats are getting along in years and it won’t be long before they start getting sick.

    Here’s hoping they stay healthy for a long time.

    Everything you said about Frida’s mental maps seems basically correct to me. I might quibble over “accurate”, but as long as there’s a caveat “accurate enough for her to satisfy her needs and achieve her goals,” we’re in agreement.

    I think of “accurate” in this case as meaning “correctly representing the spatial relationships of the salient features of her environment.”

    And we certainly agree that having map-like representations just good enough to allow the animal to satisfy its needs and achieve its goals will tend to be adaptive, just because map-like representations which aren’t good enough will be selected against.

    Yes.

    Still, it’s going to be a further question whether having a good-enough mental map of an environment is the same thing as having true beliefs about that environment, right? That is the issue I’m trying to raise.

    I would say yes. If the cat believes the mouse is behind the door, and the mouse is in fact behind the door, then the cat’s belief is true.

  23. Entropy,

    I refuted. Again, did you read what I wrote?

    Why do you keep asking that? I’m reading all of your comments.

    And no, you didn’t refute Plantinga’s argument. You dismissed it based on a mischaracterization:

    Plantinga’s argument becomes absurd the moment he’s talking about truth as if it’s some kind of property, rather than the label that it is.

  24. Neil Rickert: And we don’t fully understand those things.

    Is that because “full understanding” is impossible when it comes to non-programmable things in your opinion?

    I would say that if you know that a particular behavior is uniquely the product of the mind of “Mr Jones” then you understand it fully or as fully as you can ever hope or want to.

    Just because “Mr Jones” is an irreducibly complex entity does not mean he is an enigma. He might be very predictable just not programmable.

    I think that to say you don’t fully understand something just because you can’t break it down completely and reconstruct it exactly is to misunderstand what “understand” means.

    For me, “understanding” is more about knowing what a particular entity will do and why they will do it than about knowing precisely how to duplicate its behavior myself.

    Peace

    P.S. like I said this is interesting stuff thanks for the conversation

  25. GlenDavidson: The fact that we don’t fully understand evolution and human behavior means that we can’t program these, but it certainly doesn’t mean that we don’t understand them reasonably well.

    Interesting,

    Do you think that given enough information you could program evolution or human behavior?

    Or do you think that “programing” these things will be forever beyond our grasp?

    peace

  26. fifthmonarchyman: Is that because “full understanding” is impossible when it comes to non-programmable things in your opinion?

    Yes, pretty much.

    I would say that if you know that a particular behavior is uniquely the product of the mind of “Mr Jones” then you understand it fully or as fully as you can ever hope or want to.

    “Understand as well as you can ever hope to” is not the same as “fully understand.”

    Take the question “Why is there something rather than nothing?” I understand that as well as I can hope to. Yet I do not understand it at all.

  27. fifth:

    Or do you think that “programing” these things will be forever beyond our grasp?

    OMagain:

    Why are there quotes around “programming”?

    I’d like to think it’s because fifth recognized his spelling error, but that’s overly optimistic.

  28. fifthmonarchyman: Do you think that given enough information you could program evolution or human behavior?

    Some aspects of human behavior can already be modelled sufficiently well to guide the design of structures and systems.
    http://onlinelibrary.wiley.com/doi/10.1002/(SICI)1099-1018(199911/12)23:6%3C349::AID-FAM710%3E3.0.CO;2-3/full#references

    Humans are surprisingly unsophisticated in some respects. Never read Foundation? Ever seen a close up magic show? Ever talked to WJM about alien abductions and faith healers curing cancer?

  29. fifthmonarchyman: Do you think that given enough information you could program evolution or human behavior?

    The problem with programming evolution is that it is very dependent on the environment. So you would really have to program the environment. And the environment is too complex to program, in part because the program itself would be but a part of the environment.

    For programming human behavior, you could never be “given enough information”. That’s because the human cognitive system is a creator of information, and what it is going to create would need to be part of that given information.

  30. OMagain: Why are there quotes around “programming”?

    Because in the case of evolution and human behavior we would probably call what we were doing “comprehensive simulation” rather than “programing”.

    Either way we would start with the same input, feed it into an algorithm, and end up with the same output.

    peace

  31. Neil Rickert,

    We are on the same page.

    Do you agree that humans seem to be hardwired with a propensity to infer that a mind is behind phenomena, like human behavior and evolution, that are predictable but not programmable?

    peace

  32. OMagain: Some aspects of human behavior can already be modelled sufficiently well to guide the design of structures and systems.

    Modelled is not remotely the same thing as programed.

    To say something can be modelled is just to say that it’s somewhat predictable. Of course human behavior is somewhat predictable; if it weren’t, personal relationships with other humans would be impossible.

    peace

  33. fifthmonarchyman: Because in the case of evolution and human behavior we would probably call what we were doing “comprehensive simulation” rather than “programing”.

    What’s that? Simulation at the atomic level? Why stop there?

  34. fifthmonarchyman: modelled is not remotely the same thing as programed.

    To say something can be modelled is just to say that it’s somewhat predictable. Of course human behavior is somewhat predictable; if it weren’t, personal relationships with other humans would be impossible.

    Humans can be programmed or reprogrammed, again, surprisingly simply.

  35. OMagain: What’s that? Simulation at the atomic level? Why stop there?

    I don’t think that simulation at the atomic level would be comprehensive enough to equal programing.

    You’d need simulation at the mental level and I don’t think that is possible.

    peace

  36. fifthmonarchyman: Do you agree that humans seem to be hardwired with a propensity to infer that a mind is behind phenomena, like human behavior and evolution, that are predictable but not programmable?

    But by “programmable” you mean “modeled to a level of detail that FMM personally finds satisfactory, but which he is unable to explain at that same level of detail to anyone else”. Currently, anyway.

    When “programmable” is defined in an idiosyncratic way, how can communication be possible?

  37. OMagain: Humans can be programmed or reprogrammed, again, surprisingly simply.

    I don’t think that is what Neil and I are describing when we say “programing”

    peace

  38. fifthmonarchyman: You’d need simulation at the mental level and I don’t think that is possible.

    Interesting how a few steps down we find your definitions preclude any answer other than the one you started out wanting.

    fifthmonarchyman: Either way we would start with the same input, feed it into an algorithm, and end up with the same output.

    Ever seen an advert?

  39. fifthmonarchyman: I don’t think that is what Neil and I are describing when we say “programing”

    No, nobody is talking about what you are talking about when you use “programming”, no.

    OMagain: But by “programmable” you mean “modeled to a level of detail that FMM personally finds satisfactory, but which he is unable to explain at that same level of detail to anyone else”.

    Not at all. I simply mean simulated to such an extent that the simulation is indistinguishable from the original.

    peace

  41. fifthmonarchyman: You’d need simulation at the mental level and I don’t think that is possible.

    Why not? What needs to be added to the simulation at the atomic level to add that something extra? Is that something only god can do? If I could make an exact copy of myself down to the smallest level of detail and then run it in a physical modelling program, do we each share a soul or do we get our own? Or is the copy inferior in some way, despite the fact that, from its perspective, it’s functioning in precisely the same way?

    If I built a computer and copied you and ran that collection of atoms inside the computer, would that collection of atoms know that it was not real, because simulation at the mental level is impossible? Would it believe you if you tried explaining that to it?

    If someone made that atomic level software simulation squeal on a message board, would that someone be going to “hell”?

    If so, I’ll see you in hell keiths.

  42. keiths:
    Why do you keep asking that? I’m reading all of your comments.

    And no, you didn’t refute Plantinga’s argument. You dismissed it based on a mischaracterization:

    Then you didn’t read it. Thanks for the semi-conversation.

  43. fifthmonarchyman: Not at all. I simply mean simulated to such an extent that the simulation is indistinguishable from the original.

    It’s conceivable that someday we can model a collection of atoms that represents a human mind, and by that point the input fidelity will be realistic. You are fooled, completely. What magic steps in and stops that simulation from working? What stops those simulated atoms from being self-aware? What’s the logical barrier? If you stepped into its software world, how could you determine, between the two of you, which is the simulation and which is the “real” version of you?

    You know they are already scanning mouse brains neuron by neuron, right?

    https://www.wired.com/2016/03/took-neuroscientists-ten-years-map-tiny-slice-brain/

    And guess what, in software we get the behaviours we get in wetware…

    So I’m not saying it’s going to happen anytime soon or even at all, but in principle it’s just a matter of scale if we are talking about the atomic level.

    https://www.technologyreview.com/s/518446/first-atomic-level-simulation-of-a-whole-battery/

  44. keiths:
    KN,

    Thank you. She was almost 23 when she died, and I know how lucky we both were that she had such a long and happy life, but, boy, do I miss her!

    Awww!

    Here’s hoping they stay healthy for a long time.

    Thank you!

    I think of “accurate” in this case as meaning “correctly representing the spatial relationships of the salient features of her environment.”

    That much I completely agree with, though I’d want to insert a caveat or modifier along the lines of “as adequately as necessary for the animal to achieve its goal and satisfy its needs.” But with that caveat, there is the very interesting little wrinkle: cognitive evolution will not endow an animal’s mind with greater representational adequacy than is necessary for achieving the goals specific to animals of that kind. It just has to be good enough, and no more than good enough.

    If the cat believes the mouse is behind the door, and the mouse is in fact behind the door, then the cat’s belief is true.

    That does make sense, of course. I’m not saying you’re mistaken. The itch that I can’t stop scratching here is this: philosophers, scourges of common sense that they are, are prone to say that beliefs are propositional attitudes. But if an animal’s mental representations are not propositional (and we seem to agree that they are not), then how can they be beliefs?

    If animals have beliefs and their minds lack propositional structure, then we need a broader conception of belief beyond the propositional attitude definition long favored by philosophers. I have nothing against that; I’m simply not sure what that would be.
