Cartesian skepticism has been a hot topic lately at TSZ. I’ve been defending a version of it that I’ve summarized as follows:
Any knowledge claim based on the veridicality of our senses is illegitimate, because we can’t know that our senses are veridical.
This means that even things that seem obvious — that there is a computer monitor in front of me as I write this, for instance — aren’t certain. Besides not being certain, we can’t even claim to know them, and that remains true even when we use a standard of knowledge that allows for some uncertainty. (There’s more — a lot more — on this in earlier threads.)
In explaining to Kantian Naturalist why I am a Cartesian skeptic, I introduced the analogy of the Sentinel Islander:
KN,
Here’s an analogy that shows how serious the problem of circularity is for your position.
Suppose that a few decades from now you possess a really high-fi pair of virtual reality goggles, plus some sensitive motion sensors. You kidnap a North Sentinel Islander who knows nothing about virtual reality or computers, and you tell him that the goggles and sensors are magical devices that can grant him access to an actual land, LaLa Land, which is far away.
The islander learns to navigate LaLa Land successfully, even carrying out tasks within it. If you ask him questions about LaLa Land, he answers them “correctly”. He even claims to know things about LaLa Land, which he takes to be real. We know better, because we understand that the goggles do not deliver veridical sensory information. They are fostering an illusion. LaLa Land doesn’t exist in the real world.
The islander could argue, KN-style:
1. I assume that the goggles deliver veridical information about LaLa Land.
2. On the basis of that assumption, I am able to navigate LaLa Land successfully and satisfy my goals.
3. Therefore, the goggles deliver veridical information about LaLa Land.
Is he right? Obviously not. We can see that he is being fooled, and we can diagnose the problem with his argument: it’s blatantly circular.
How is your argument any better than his?
If we assume that our perceptions are veridical, we are making the same mistake the Sentinel Islander makes when he assumes that the VR goggles deliver veridical information about LaLa Land.
On the other hand, if we don’t know that our perceptions are veridical, and we can’t even judge the likelihood that they are veridical, then we are in no position to claim knowledge — justified true belief — regarding the external world.
keiths:
KN:
They aren’t justified, because they rest on the unjustified assumption that the VR headset is delivering veridical information about a real place.
keiths:
KN;
Cartesian skepticism wasn’t my “desired, predetermined conclusion.” I was quite happy believing that my perceptions were generally accurate, and I would have happily continued believing it if the argument against it weren’t so compelling.
I changed my mind on the basis of reasoning and evidence. That’s how it should be.
You are changing your reasoning when it leads you to an undesired conclusion. That’s how it shouldn’t be.
Your beliefs are not justified, because they rest on the unjustified assumption that there exists a VR headset that is delivering non-veridical information about a place that is not real.
petrushka,
I don’t see how. Something physical has to change state in order for simulation to occur.
KN,
I’m still interested in your responses to the questions I raised in this comment:
KN,
In my view, the truth of a claim is independent of whether a particular “epistemic position” exists or is occupied (unless, of course, the claim is about epistemic positions and their occupancy).
That’s what I was getting at in this exchange:
KN:
keiths:
Would you agree that the islander’s knowledge claims are false even if no one is aware of that?
Suppose you are in an analogous position. A race of aliens envatted you some time ago, but they’ve since gone extinct. At this point no one, including you, is aware of the envatting.
I would say that it is true that you are envatted. Would you disagree?
If the house is indeed for sale, then since Almuerzo believes it, and you say he is justified in doing so, and since on your view knowledge is JTB, you ought to say that Almuerzo knows it. But you don’t.
His belief is justified* but not justified. He knows* that the house is for sale, but he doesn’t know it.
I omit those details for obvious reasons: 1) they don’t need to be repeated every single time we talk about this stuff, because the implicit asterisks are always there when claims are being made about the external world; and 2) they aren’t relevant to the point I’m making here, which is that contra KN, justification depends on more than doing one’s best, as demonstrated by the Almuerzo/Borodin scenario.
keiths,
This justification* biz is new. At least to me. Don’t you want to say that Almuerzo has any justification [without the asterisk] at all for believing the house is for sale? Has he ONLY justification* and no evidence whatever? Why is that?
walto,
The asterisk means the same thing here as when it is appended to “know”. It signifies a dependence on the assumption that our perceptions are generally veridical.
If perceptions aren’t generally veridical, then Almuerzo doesn’t actually know that the “For Sale” sign was there this morning — or the house itself, for that matter.
Be careful with the phrase “any justification.” When we’re talking about knowledge — justified true belief — we mean sufficient justification. Almuerzo doesn’t have sufficient justification for claiming to know (without the asterisk) that the house is for sale, because he doesn’t know that his perceptions are veridical.
keiths,
You talk about perception being ‘generally veridical’ and have mentioned things like demons, dreaming and BIVs. But let’s suppose (for the sake of argument only) that perception IS generally veridical. What about pranksters? Dretske talks about mules cleverly disguised as zebras, and there are many famous examples of papier-mâché barn facades. Won’t they defeat claims to knowledge even given general veridicality? And thus defeat claims to knowledge* (or justification*)? When is justification ‘sufficient’ on your view?
walto,
They defeat certainty but not knowledge*.
More later — I have a meeting to attend.
That’s not an assumption, it’s a fact. And if our perceptions are generally veridical then it is still true even if you don’t know it.
ETA: mea culpa if I left out a critical asterisk where one was needed or failed to leave one out where I should have done so.
walto,
They defeat certainty, but not knowledge*.
The crucial difference is that if you know that perception is generally veridical, you can build up a model of the external world, and that model can be used to estimate the likelihood that you are being fooled in a given instance.
If you don’t know that perception is veridical, then all bets are off. You know nothing about the external world and are therefore unable even to estimate the likelihood that you are being fooled.
For example, I’ve seen people arguing against brain-in-vat scenarios on the basis of technological infeasibility. They estimate the computational demands that would be placed on the vat and argue that no conceivable technology could meet those demands.
The problem with those arguments is that they inadvertently assume the veridicality of perception. To know whether a technology is feasible, you need to know certain things about the physics by which the technology operates. We know a lot about physics, but it’s the physics of our (potentially virtual) world, not necessarily the physics of the real world.
Limitations imposed by the virtual world’s physics may not apply to the real world’s physics, so technologies that are infeasible in the virtual world may be quite realizable in the real world.
To evaluate feasibility based on the physics of the (potentially virtual) world is to inadvertently assume the veridicality of perception.
Fixed that for ya!
It’s the physics* of the only world* that matters*.
keiths,
What is the likelihood that the last time you thought you saw a zebra it was a cleverly disguised mule?
And if “we need only to know* certain things about the physics by which the technology operates” (and we operate) in our world, why isn’t that just knowledge [no asterisk]? It’s what everybody means by the term, after all. It seems to me that, in spite of your denials, what you are calling “know” is certain knowledge and what you are calling “know*” is fallible knowledge. You cannot actually calculate a single “likelihood”–you just think you must have them if you rule out heavyweight (i.e., philosophically SKEPTICAL) defeaters. But you really have no basis for the belief that your non-philosophical defeaters leave you with “likelihoods.” It may well be that every “zebra” you’ve ever seen has been a cleverly disguised mule and every red silo you’ve ever seen has been a fake. You actually have no idea at all.
A real skeptic is willing to take the position that they don’t know anything. You try to pussy out by saying you know* things. But precisely the same arguments you think are successful against knowledge are also successful against knowledge*. And, obviously, if you really think you know* things in spite of not having the slightest idea of “likelihoods” that you’re correct, then there’s no reason that you can’t know things.
Either fish or cut bait.
HAH! I know* you think you got me there walto, but you have simply forgotten to take into account knowledge**.
walto:
Very low (assuming the veridicality of my perceptions, as stipulated).
You’re getting things jumbled up here. The technological feasibility of a Cartesian scenario depends on the physics of the world in which it is implemented, not of the world it implements. Make sure you understand this — it’s important.
No. For the nth time, I think that knowledge is possible, but not absolute certainty; and I think that knowledge* is not knowledge, fallible or otherwise, because it is not justified. Knowledge* depends on an unjustified assumption — that our perceptions are veridical — and therefore is not itself justified.
As I keep explaining to you, likelihood estimates need not be numerical.
Not so. I’m able to rule them out — assuming the general veridicality of my perceptions, as you stipulated — in the same way I’m able to rule out other conspiracy theories.
I’m concerned with whether my position is correct, not with whether some angry old insurance regulator thinks I’m a “pussy” rather than a “real skeptic”.
Obviously not. To evaluate knowledge* claims, you take the veridicality of perception as a given. Having done so, the chief argument against knowledge — that perception might not be veridical — is closed off and unavailable.
In the case of knowledge*, the likelihoods become conditional: “the likelihood of X given that my perceptions are generally veridical.”
Sure there is — the fact that I don’t know that my perceptions are veridical.
Kierla,
That you make some arguments for perceptual errors ‘chief’ and others not ‘chief’ is not relevant to whether you or anyone knows things. Similarly, your assertion that all the zebras you’ve ever seen weren’t really cleverly disguised mules, because you apparently have this feeling that you can rule out the possibility (along with all other ‘conspiracy theories’), isn’t worth much, because you can’t tell us what makes something a conspiracy rather than a skeptical concern. You concede your talk of likelihoods is just gas, because you have no way of calculating a single one of them, so you don’t really know* whether any are greater or less than .5. No one ever was so firm both that one needs likelihoods and that one can’t estimate them and still managed to ignore the implications of that.
You simply have two sorts of defeaters–those you think are cool and so you are a pussy about, and those you think are uncool–so you are a big tough anti-conspiracy guy about. This dichotomy results in you believing that you don’t know–but you do know* your own name.
You have no actual likelihoods for either group–just blind fear for the one group and blind faith for the other. You can’t define ‘generally veridical’ except to say things like, ‘You know, where I’m likely correct’; or relevant defeaters except to refer to them as ‘the ones nobody can know to be false.’
It’s a huge batch of hand-waving because it’s a silly, ad hoc position. Even we old insurance regulators can recognize* huge piles of bullshit when we see them. Imagine if you had to deal with an actual smart person! (Hint–I’d just go home instead if I were you. Safer.)
And what are the odds of that!
And no one thinks knowledge or knowledge* is knowledge if it’s not justified. You’re the odd one here with your belief that one can have unjustified knowledge*.
If you in fact think that knowledge* is not knowledge, fallible or otherwise, because it is not justified, why not just call it opinion [rather than knowledge*], like the rest of us?
Mung,
Too scary. S/he is ready to give up the comforts of knowledge home, but not without preserving his/her safe knowledge* harbor.
It’s like a dieter giving up frosting but, you know, keeping the cake.
So if knowledge* is unjustified true belief, what is unjustified false belief?
Is keiths claiming that the difference between knowledge* [unjustified true belief] and unjustified false belief is truth? I find that rather hilarious, given his recent comments to fifth in the truth, reason, logic thread.
walto,
You’re getting things jumbled up again. Here’s what I wrote:
walto:
I didn’t make that assertion.
Jesus, walto. The idea that there is a vast, coordinated scheme to present disguised mules as zebras is a conspiracy theory, and it can be dismantled the same way that other conspiracy theories can be dismantled. Do I really need to spell it out for you? If I ask you whether you’ve seen zebras, will you seriously answer “I don’t know, because they might all have been disguised mules”?
Your stalled car is straddling the railroad tracks, and a high-speed train is bearing down on you. If you cannot calculate the numerical likelihood of death, are you helpless to act? Or would you do what a rational person would do, which is to get out of the car and run?
You have a mental block about numerical probabilities, walto.
We can estimate them. What you’re failing to grasp is that the estimates need not be numerical.
walto,
You’re overlooking the crucial difference between Cartesian scenarios — such as the brain-in-vat scenario — and run-of-the-mill conspiracy theories, like the idea that there is a vast, coordinated scheme to present disguised mules as zebras.
If you assume the general veridicality of perception, as stipulated, then you can reject the zebra conspiracy theory based on information you’ve gathered via perception. The conspiracists don’t control all of your sensory information; just the parts they can influence via their mule disguises. You can leverage the information that they don’t control to determine that the zebras aren’t real, by administering DNA tests, for example.
Compare that to a brain-in-vat scenario in which all of your sensory information is under the control of the vat designers. If they want you to see a zebra in front of you, they send the appropriate visual information into your brain. What can you possibly do to determine whether the zebra is real? A DNA test certainly won’t help you, because the vat designers will arrange for the result to come back as positive for zebra. No matter what you do, they can arrange to maintain the illusion.
The difference between a true Cartesian scenario and your zebra conspiracy is night and day.
But if you ask anyone whether they’ve seen cows, will they seriously answer “I don’t know, because I might be dreaming or a brain in a vat”? Please. This response just highlights how self-contradictory and ridiculous your position is. You want your cake but are afraid to eat it.
Exactly. I believe in the reality of cars rushing at me. You don’t.
Haha. If I assume the general veridicality of perception I can also reject all of the silly theories you’re so worried about, Jeremy.
keiths:
walto:
That you can’t see the difference between the two questions is a big problem.
Read this comment again.
walto,
You and KN are in the same boat.
To assume the general veridicality of our perceptions is to make the same mistake as the Sentinel Islander, when he assumes that the VR headset is delivering veridical information about a distant place.
It’s a mistake for the islander, and it leads him to make bogus knowledge claims about LaLa Land. I’m still waiting for an explanation, from you or KN, of why it isn’t a mistake for us to make the analogous assumption about the veridicality of our perceptions.
keiths:
walto:
Good grief. If you think a Cartesian skeptic wouldn’t get out of the way of a speeding train, then you don’t understand Cartesian skepticism at all. You’re making the same mistake as KN: thinking that Cartesian skepticism asserts the non-veridicality of perception. It doesn’t. It simply asserts that we cannot know that our perceptions are veridical.
Also, what happened to the numerical calculations that you considered so essential a short while ago? Not so essential when a train is bearing down on you, are they?
keiths,
I see you’ve understood nothing. Go back and reread Elon Musk. You’re the one wedded to “likelihoods” — not me. I’ve said from the start that it’s an absurd quest. You apparently agree–when you’re afraid.
When the car is stalled on the railroad tracks, you get out and run, walto. You’ve assessed the likelihood of harm, should you choose to remain in the car, and found it too high for your liking.
You estimate likelihoods all the time, just like the rest of us.
And just like the rest of us, you don’t limit yourself to numerical estimates.
It must be a metaphorical boat then, given that we can’t actually know there are any boats in the real world.
But we can sure as hell act as if we know they are!
God, you are confused and self-contradictory. Of course I estimate likelihoods all the time. I simply don’t need to do so to know things. Neither do you. You’re terrified of the bus for good reason. You simply forget this when you engage in what I guess you take to be ‘doing philosophy.’ Then you think you need them and don’t have them when looking at a moving bus. It’s a simple contradiction.
walto,
You’re repeating KN’s earlier mistake again. He seems to have gotten past it; why can’t you?
The Cartesian skeptic does not assert that our perceptions are non-veridical and that the train is unreal — only that we can’t know that they’re veridical and that the train is real.
Please reread that until it sinks in.
Furthermore, even if I did assert that the train was unreal — and again, I don’t — it does not follow that I should be unafraid and remain inside the car.
Again, please think about that until it sinks in.
You’ll never defeat Cartesian skepticism if you have no idea what it is.
walto,
Sure we do, and I’ve demonstrated this again and again via a dialogue:
You can’t defeat a doubter, period.
Yes, that’s still utterly confused, as I’ve pointed out each of the last fifteen or so times you’ve posted it.
Rather than continue this pointless exercise, let me suggest that you read a couple of good things on this issue. First, from Robert Nozick’s book Philosophical Explanations, I think the section on skepticism (pp. 168–248) is very good (although Kripke has pounded it). And/or, if you are committed to the closure of knowledge under known entailment (I recently finished an article according to which that principle can’t be true, BWTHDIK?), then I recommend Keith DeRose’s 1995 paper, “Solving the Skeptical Problem,” which is also good.
walto,
Your literature bluff won’t cut it. What you need is a counterargument.
The Xavier/Yolanda dialogue is nonsensical because the knowledge claim clashes with the likelihood assessment:
It remains nonsensical if you change the last line to this:
But if you change the last line to this, it makes sense…
…because then the knowledge claim agrees with the likelihood assessment.
If you, Nozick, or DeRose have a counterargument, let’s hear it.
walto, keiths does not know that your literature bluff “won’t cut it,” nor does he know that you need a counterargument.
I’ve responded to that tripe several times already. Interestingly, it’s not even consistent with several of Larchmont’s recent posts regarding likelihoods.
Larchie is very confused and, apparently, absolutely determined to stay that way.
Welp, I tried.
walto,
I’m looking for a counterargument, not a “response”.
It’s pretty clear that you won’t be able to come up with one on your own, but perhaps you could comb Nozick or DeRose for one.
keiths thinks that because KN has ceased to respond to his nonsense that KN agrees with him. How he knows this remains a mystery. Apparently it has something to do with likelihoods that can be non-numeric.
No one but you can possibly know what you are looking for and no one but you can possibly know whether you think you have received a response.
Given that you don’t know that you are not a brain in a vat, and given that you do not know that you are not a Sentinel Islander being deceived by a VR headset, why should any rational person take anything you say seriously?
No reason?