One big problem, as I mentioned here, and elsewhere, with ID as a hypothesis is that it is predicated on the idea that mind is “immaterial” (or at least “non-materialistic”) yet can have an effect on matter. That’s the basis of Beauregard and O’Leary’s book “The Spiritual Brain”, as well as of a number of theories of consciousness and/or free will. And, if true, it makes some kind of sense of ID – if by “intelligence” we mean a “mind” (as opposed to, say, an algorithm – and we have many algorithms that can produce output from input far beyond anything human beings can manage unaided, and that can in some sense be said to be “intelligent”), we are also implicitly talking about something that intends an outcome. Which is why I’ve always thought that ID would make more sense if the I stood for “Intentional” rather than “Intelligent”, but for some reason Dembski thinks that “intention”, together with ethics, aesthetics and the identity of the designer, “are not questions of science”.
I would argue that intention is most definitely a “question of science”, but that’s not my primary point here.
What I’d like to do instead is to unpack the hypothesis (and it’s a perfectly legitimate hypothesis) that there is something that we term “mind”, and which is “immaterial” in the sense that it has no mass, and does not exert a detectable force, but which nonetheless exerts an influence on events.
Beauregard and O’Leary cite Henry Stapp, and say:
According to the model created by H. Stapp and J.M. Schwartz, which is based on the Von Neumann interpretation of quantum physics, conscious effort causes a pattern of neural activity that becomes a template for action. But the process is not mechanical or material. There are no little cogs and wheels in our brains. There is a series of possibilities; a decision causes a quantum collapse, in which one of them becomes a reality. The cause is the mental focus, in the same way that the cause of the quantum Zeno effect is the physicist’s continued observation. It is a cause, but not a mechanical or material one. One truly profound change that quantum physics has made is to verify the existence of nonmechanical causes. One of these is the activity of the human mind, which, as we will see, is not identical to the functions of the brain.
Well there is certainly some important unpacking to do here before we go any further. Beauregard and O’Leary appear to be saying that quantum effects are neither “mechanical [n]or material”. OK. In that case, I do not know of a single “materialist”! Nobody I know would claim that quantum effects do not exist. In which case, none of us are “materialists” and Beauregard and O’Leary have a straw man. I would also buy the idea that the brain itself is non-deterministic in a quantum sense – that what we do is not merely the direct result of matter put into motion at the beginning of existence, but also fundamentally uncertain.
So I think that Beauregard and O’Leary have drawn their desired line in a very odd place. The difference between the people they dismiss as “materialists” and themselves is not that we “materialists” don’t think that quantum effects exist or are perfectly real. It’s between people who don’t think that these quantum effects have anything to do with intentional behaviour, and people who think that it’s where the leeway for “free” intentional behaviour resides. They go on to say (h/t to William for doing the typing):
In the interpretation of quantum physics created by physicist John Von Neumann (1903-1957), a particle only probably exists in one position or another; these probable positions are said to be “superposed” on each other. Measurement causes a “quantum collapse”, meaning that the experimenter has chosen a position for the particle, thus ruling out the other positions. The Stapp and Schwartz model posits that this is analogous to the way in which attending to (measuring) a thought holds it in place, collapsing the probabilities on one position. This targeted attention strategy, which is used to treat obsessive-compulsive disorders, provides a model for how free will might work in a quantum system. The model assumes the existence of a mind that chooses the subject of attention, just as the quantum collapse assumes the existence of an experimenter who chooses the point of measurement.
Firstly, I find the idea that, because doing something intentionally (focusing attention, for instance) has neural correlates, intention – and thus mind – must have physical effects, extraordinarily naive. (And their claim that it was not until the nineties that neuroscientists considered that thought could affect brain structure is even odder, given that Hebb, their own countryman, who died in 1985, is regarded as the father of neuropsychology, is most famous for “Hebb’s rule” – that “what fires together wires together” – and that “Hebbian learning” is fundamental to the notion of neural plasticity.) But more to the point, is there any basis for concluding that something that we call an immaterial, non-mechanical but somehow quantum-real mind can “hold” brain patterns “in place” and thus affect the motor output, i.e. the act that implements the final decision?
One source cited is a paper by Schwartz, Stapp and Beauregard, which goes into some detail. There is an interesting critique by Danko Georgiev of the Stapp model here, and a reply by Stapp here (link is to a Word document with tracked changes still turned on!). So I’d be interested to know what the physicists here make of the physics.
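For non-physicists (myself included), the bare bones of the terminology, as I understand it – and this is just the standard textbook picture, not anything specific to the Stapp model – run roughly as follows:

```latex
% A two-state system in superposition: the state is a weighted sum of the two
% possibilities, and a measurement yields "up" with probability |alpha|^2 and
% "down" with probability |beta|^2, after which the state just *is* the
% measured one (the "collapse").
\[
  |\psi\rangle \;=\; \alpha\,|{\uparrow}\rangle \;+\; \beta\,|{\downarrow}\rangle,
  \qquad |\alpha|^2 + |\beta|^2 = 1 .
\]
% Quantum Zeno effect: if the state would otherwise rotate away at rate omega,
% then n equally spaced measurements over a time T leave it in place with probability
\[
  P_{\mathrm{survive}}(n) \;=\; \Bigl[\cos^2\!\Bigl(\frac{\omega T}{2n}\Bigr)\Bigr]^{n}
  \;\longrightarrow\; 1 \quad (n \to \infty),
\]
% which is the sense in which "continued observation" holds a state in place.
```

Whether any of that machinery licenses talk of an immaterial mind doing the measuring is, of course, exactly what is at issue.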
But my problem with the argument is more fundamental, and relates to the concept of intention itself. I’m going to define “intention” in the plain-English sense of meaning “a goal that a person has in mind, and acts to try to bring about”. And I will use “quantum mind” to denote the putative non-material, non-mechanical but capable-of-inducing-effects mind apparently postulated by Beauregard and colleagues.
If a person has such a mind, then her intention, according to my definition, resides within it. Which is fine. And her capacity to act to bring about the intended goal has something to do with the muscles she possesses, and the relationship between her mind and those muscles, which presumably goes via the brain. And let’s suppose that this quantum mind brings about changes in brain state that can “hold in place” a particular neural pattern of firing, possibly until it reaches execution threshold, and outflow to the muscles begins.
This is actually quite a good model of decision-making, and something that my own research deals with specifically – how do we inhibit a response to a stimulus that requires one until we are sure that our response is going to be the appropriate one?
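To make that a bit more concrete, here is a deliberately simple sketch of the “hold the pattern until execution threshold” idea – a toy noisy accumulator, not the Stapp-Schwartz model and not any published model of mine; all the numbers are arbitrary:

```python
import random

def accumulate_to_threshold(drift=0.1, noise=0.3, threshold=5.0, max_steps=1000):
    """Toy evidence accumulator: evidence for the response builds up noisily,
    and the response is only released once the running total crosses the
    execution threshold; until then it is effectively 'held in place'."""
    evidence = 0.0
    for step in range(1, max_steps + 1):
        evidence += drift + random.gauss(0.0, noise)
        evidence = max(evidence, 0.0)        # no response before threshold
        if evidence >= threshold:
            return step                      # simulated response time, in steps
    return None                              # response withheld altogether

# Simulated response times for 1000 trials:
times = [t for t in (accumulate_to_threshold() for _ in range(1000)) if t is not None]
print(f"mean RT: {sum(times) / len(times):.1f} steps over {len(times)} responded trials")
```

The point of the toy is only that “holding until threshold” is a perfectly ordinary computational idea; nothing in it requires (or rules out) a quantum mind doing the holding.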
The problem it seems to me is when we try to address the question: how is that goal selected? For example, in many circumstances, the proximal goal (find a pencil) subserves a more distal goal (write down your phone number) which in turn serves an even more distal goal (so that I can call you back when I’ve found the answer to your question) and so on (so that I can help you solve your problem; so that I can feel good about myself; so that I can check “problem solved” on my worksheet; so that you feel good about yourself; so that your children will be able to get home from school; etc). And all these goals require information. Depending on the information, the goals may be different, and in the light of new information, goals may change. In other words, to form an intention, the quantum mind needs a goal, and to form a goal, it needs information.
Where does it get that information? One possibility is the sensory system. In fact it’s hard to know where the information can come from otherwise. In order to solve your problem I have to know what it is, and in order to prioritise my goals I have to know more about your problem. That means I have to listen to what you are saying, and my brain has to react to the vibrations that arrive at my eardrum.
And that information has to get to the quantum mind. What the quantum mind decides must therefore be, in part, an output from the input of my body and brain.
So my very simple question to Beauregard, Stapp, Schwartz, O’Leary et al, is: in what sense is your postulated quantum mind anything more than part of the process by which as a person (an organism) I respond to incoming information with goal-appropriate actions? If the quantum mind is adding something extra to the process, on what basis is it doing so? If on the basis of incoming information, why is it not a result of that input? If on the basis of no information, in what sense are the decisions it makes anything more than a coin toss?
And, to IDers generally: if a divine mind can alter the configuration of a DNA molecule by means of somehow selecting from quantum probabilities those most likely to bring about some goal formed on the basis of information to which we are not privy, how could we tell that the resulting DNA molecule is the result of anything other than probabilities that are perfectly calculable using quantum physics? And if those molecules violate those probabilities – DNA molecules suddenly start to form themselves consistently into configurations highly improbable under the laws of quantum mechanics – on what basis would we invoke quantum mechanics, or even a quantum mind, to “explain” it?
I don’t think you can use “quantum” as an alibi for “anything improbable that we can’t explain”. If Divine intention is smuggled in under the guise of quantum indeterminacy, then how could we detect it? And if your inference is that Cambrian animals must have been intended because they are otherwise unlikely, how do you explain that in terms of quantum mechanics? And if quantum mechanics won’t do the job, we are back to square one:
How does mind move matter?
We make models of the data that arrives via our sensory inputs. So no, my view does not set me apart from Alan.
I do it all the time, Mung. It’s my job.
You have to understand scientific reasoning to understand a scientific argument. Specifically, you have to understand the limitations intrinsic to any scientific claim.
This is simply untrue, William. Psi research is certainly not the “only area where blind, double blind and triple-blind protocols are stringently employed” and, as I pointed out, in one of the “triple-blind” examples of psi research conducted by the researchers you mentioned, there was a gaping hole in the blinding.
Blinding procedures are absolutely standard in many fields of science, including my own, and double-blinding is the industry standard for all drug trials. However, blinding is not straightforward – there are many ways in which blinding can fail (as in the psi example above). Blinding is often difficult to achieve, and often effects from blinded studies can nonetheless be attributed to leakage in the blinding.
Your assertion that psi research is the “only” domain of science where blinding is stringent is a terrible and unjustified slur on many domains of science where it is de rigueur, and where it is readily accepted that nonetheless leakage in the blinding may produce false positives.
As someone who claims to know little about science you really need to consider that the gaps in your knowledge are serious impediments to your ability to fairly critique science.
Firstly, it is highly predictive. People who lack a good “theory of mind” find it more difficult to anticipate what other people will do, and to read their intentions.
Secondly, it promotes self-efficacy: having a model of yourself as an intentional agent is also known as having a strongly “internal” locus of control. Lack of such a locus of control is associated with depression, and even psychosis. Successful therapy for mental disorders often involves helping people develop a model of themselves as agents of their own destiny.
There is a large literature on both of these, so I won’t cite. You can google.
Miller does present a quantum model of free will.
As I argued in the OP, the more fundamental problem, as I see it, with any quantum model of free will (or any libertarian model) isn’t solving the problem of how the mind moves matter (it may do, it may not) but how such a mind could be said to be “free”. If it is informed, its output is a function of its input and it is not “free”. If it is not informed, it is not “willed”.
The only coherent model of free will that I can see is one in which an agent (say a human being) considers many options in the light of information, weighs up their likely consequences, and compares them against goals that are also formed on the basis of information.
That model is perfectly amenable to normal neuroscience. If options are restricted, the agent is reduced in freedom. If options are reduced to one, the agent is not free. If the agent has no information as to which of several options is likely to further her goals, then her choice is not willed – it’s essentially a throw of the die.
We have free will, then, when we have options as to how to act, and information as to what any given action will lead to.
A mind that is independent of that information cannot therefore be said to have free will.
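To put the point in almost cartoonish terms, here is a toy sketch (entirely made up, purely to illustrate the logic): an agent scores each available option by how well, given what she knows, its predicted outcome serves her goal. Strip out the information and every option scores the same, and the “choice” reduces to a coin toss.

```python
import random

def choose(options, predict_outcome, goal_value):
    """Pick the option whose predicted outcome best serves the goal;
    ties (i.e. no discriminating information) are broken at random."""
    scores = {opt: goal_value(predict_outcome(opt)) for opt in options}
    best = max(scores.values())
    return random.choice([opt for opt, s in scores.items() if s == best])

# Informed agent: predictions differ, so the goal determines the choice.
print(choose(["stay", "go"],
             predict_outcome=lambda o: "catch train" if o == "go" else "miss train",
             goal_value=lambda outcome: 1 if outcome == "catch train" else 0))

# Uninformed agent: every prediction is the same, so the 'choice' is a coin toss.
print(choose(["stay", "go"],
             predict_outcome=lambda o: "no idea",
             goal_value=lambda outcome: 0))
```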
I very much doubt this. I guess I’m a skeptic of the “theory” theory.
I’m slightly skeptical myself. I think that the concept of “mental time travel” is a better one. And I don’t think it’s categorical (that’s my main skepticism about the “theory theory”). I think some people find it easier to “see from another viewpoint” than others, and how easy it is on any one occasion will depend on various factors and context. But it does require at least the implicit recognition that other points of view – and thus other minds – exist.
I agree with that.
Exactly. Another way of putting (what I take to be) the same (or similar) point is that what we really want to understand is agency, what it is that we can be held responsible or accountable for doing. And here reason-giving plays a fundamental role: an action is justified, or not, in light of the adequacy of the reasons given in support of that action.
So, here’s the hard question: are reasons causes? If so, what kind of cause are they?
It might be tempting to reason as follows:
(1) all natural phenomena have mechanistic explanations;
(2) rational justifications belong to a different order than mechanistic explanations;
(3) so rational justifications aren’t part of the natural order at all.
And then, if reasons are extruded from nature, then it will be tempting to look for them elsewhere — such as, in a realm of “immaterial being” (the Platonic or Neo-platonic option).
But one philosophical move that we should not make is to identify the rational with the quantum, for the simple reason that quantum phenomena are (likely to be) either fully deterministic (under Bohmian mechanics or the Many-Worlds interpretation of QM) or non-deterministic in the wrong way — because uncaused, genuinely random events cannot be appealed to as the grounds for deliberation, choice, etc., and so cannot shed any light on agency and responsibility.
Better (and more succinctly put): the libertarian conception of free will is motivated by the thought that we need to avoid determinism in order to account for responsibility, but it swings so far in the other direction that it is not constrained by anything at all — not even by reasons — and so gives us a picture of freedom as mere frictionless spinning in the void. And that’s a serious philosophical problem with the libertarian conception of freedom that doesn’t go away just by packaging it in quantum woo.
It matters HOW things are determined. There’s a difference between a system that is steered by feedback and one “pushed” in the manner of billiard balls.
I don’t suppose there’s any deep metaphysical difference in the level of causation. It’s still physics and chemistry, but there are observable differences in the behavior of passive and dynamic systems.
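A deliberately crude illustration of the difference (toy code, nothing more): a “billiard-ball” system just carries out whatever push it was given, while a feedback-steered system keeps comparing where it is to where it is “trying” to be and corrects the difference. Both are plain physics; they just behave observably differently.

```python
def ballistic(position, velocity, steps):
    """Passive system: the initial push fixes the entire trajectory."""
    return [position + velocity * t for t in range(1, steps + 1)]

def steered(position, target, gain, steps):
    """Dynamic system: each step is corrected by the error between the
    current state and the target (a bare-bones proportional controller)."""
    path = []
    for _ in range(steps):
        position += gain * (target - position)
        path.append(round(position, 2))
    return path

print(ballistic(0.0, 1.0, 5))      # [1.0, 2.0, 3.0, 4.0, 5.0] - goes wherever it was pushed
print(steered(0.0, 10.0, 0.5, 5))  # [5.0, 7.5, 8.75, 9.38, 9.69] - homes in on the target
```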
But being disembodied doesn’t solve any conceptual problems associated with free will. One still has to ask why a “free” agent “wants” to do something or “prefers” one thing over another.
What is a motive if it is not a hidden cause?
Liz said:
Experimenter Effects – Could Experimenter Effects Occur in the Physical and Biological Sciences? by Rupert Sheldrake – The Skeptical Inquirer, May/June 1998.
Excerpts:
This is not enough – maybe you are implicitly meaning it – but to have free will you need options and to foresee consequences, and your decision shouldn’t be determined. The possible consequences shouldn’t determine your option; you always have to be “free” to go for one or another.
There might be a “deep metaphysical difference in the level of causation” — depending on what one means by “deep” and “metaphysical”!
In support of Petrushka’s distinction, I bring to your attention “Bio-agency and the problem of action” (2009), where Skewes and Hooker distinguish between linear and non-linear causation. Linear causation is what they call, rather nicely, “the causal thread model”:
by contrast, living things, as agents, don’t fit the causal-thread picture:
The orthodox defender of Scholastic realism would insist that “self-organised process architectures and globally organised feedback loops” are not genuine “final causes”, but just “efficient causes” in fancy drag. But, speaking strictly for myself here, I don’t see why the distinction between linear causation (“the causal-thread model”) and non-linear causation (globally-organized feedback loops) can’t just be the distinction between the kinds of causation between simple (e.g. non-living) and complex (e.g. living) systems.
Nice try, but I don’t criticize the science. I criticize the logic of the argument, generally by agreeing arguendo to whatever scientific claim is being used. Until you provide an example with a link to it, all you are doing here is blowing smoke in an attempt to undermine any argument I might make that simply involves a scientific claim or evidence.
I don’t really see why ‘billiard-ball determinism’ would in any case be a problem for responsibility. If what you do is bound to happen, so is being punished for it. Or not being. We can still act as if free will is true. Which does not make it so; the question is essentially unanswerable, since we cannot revert to the exact conditions and see how things unfolded. I think that QM suggests that, if we could do that, things would not repeat exactly, but that’s not because we have Quantum Free Will. But it would not be a major problem if they did follow exactly the same tracks, any more than knowing that the book or film one is engaged by is already finished. Just don’t give me any spoilers.
The question of socially applied consequences for behavior is rather simple: the behavior of things having brains is affected by consequences.
I expect Rupert Sheldrake was just having fun when he wrote that.
Imagine a particle accelerator, where we fire electrons at atoms.
So what should we do to make that double-blind? Do we have to make sure that the atoms do not know whether they are being hit by real electrons or placebo electrons?
Instrumentation eliminates a lot of human bias.
Whatever bias remains is presumably scrubbed by peer review, replication and so forth.
Parapsychology has positive results in inverse proportion to the care taken in experimental design and implementation.
Well, you have a point, William, because I was recalling some posts of yours at UD, where you tend to be more outspoken in your dismissal of scientific arguments than you are here, and right now I’m sufficiently pissed off with UD that I don’t feel like searching there.
However, what I will say is that one thing widely misunderstood at UD, and you may or may not be one of the misunderstanders, is that scientific reasoning is probabilistic. In other words we do not come to true/false conclusions, we come to provisional probabilistic conclusions. And the probabilities that we come up with are not the probability that a hypothesis is correct overall, but the probability that we would observe what we do observe if the null hypothesis is correct, or, alternatively, the relative probabilities of two hypotheses, given the data.
And a lot of misunderstanding of scientific claims at UD is based on a misunderstanding of the difference between a probabilistic claim and a categorical true/false claim. To take a famous example:
P1 If a person is a Martian, he is not a member of Congress (True)
P2 This person is a member of Congress (True)
C Therefore he is not a Martian. (True)
P1 and P2 are true and the logic is valid, so the argument is sound and the conclusion true.
Now take:
P1 If a person is American, he is not a member of Congress. (False)
P2 This person is a member of Congress (True)
C Therefore he is not an American. (False)
Now P1 is false, and although the reasoning is the same (and still valid), the conclusion is false – not because of faulty reasoning, but because of a false premise.
In other words, if we stick with T/F logic, correcting the premise will correct the conclusion.
However, if we correct the premise by making it probabilistic:
P1 If a person is American, he is probably not a member of Congress. (True)
P2 This person is a member of Congress (True)
C Therefore he is probably not an American. (True)
We now have a true P1, P2 is still true, and the same apparent form of logic. But now our conclusion is false. In other words, what worked for us in binary logic doesn’t work in probabilistic logic.
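To put rough numbers on it (illustrative figures only: about 535 members of Congress, every one of them an American, out of roughly 330 million Americans), P1 is true in its probabilistic form, and yet the “conclusion” is almost exactly backwards, because the conditional does not simply invert:

```python
americans = 330_000_000        # rough population figure, for illustration only
members_of_congress = 535      # every one of them an American citizen

# P1 (probabilistic, true): a randomly chosen American is almost certainly
# not a member of Congress.
print(members_of_congress / americans)             # ~0.0000016

# The tempting 'conclusion' (false): a member of Congress is probably not
# American. The conditional the other way round is 1, not ~0.
print(members_of_congress / members_of_congress)   # 1.0
```

That asymmetry between P(B given A) and P(A given B) is exactly what probabilistic (e.g. Bayesian) reasoning is built to handle, and what binary logic quietly ignores.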
And unless you understand the probabilistic logic (whether of null hypothesis testing or Bayesian inference) of scientific reasoning then you are liable to make mistakes with your critiques of that logic.
And I don’t think you understand the probabilistic logic of scientific reasoning. To be fair, I don’t think you claim to. But I don’t think you understand what it prevents you from understanding. It’s not difficult though. You don’t need a science PhD or even an undergrad to get it (although a few with both don’t).
OK, but then what do you mean by “free”? If your choice is indeterminate, how will you know you’ve made the right one?
I think that the right choice is the one that allows you to reach your intent.
William, that article itself is an example of bias. Clearly a parapsychology study HAS to be blinded or it would be utterly worthless.
The same is true of Phase III, and normally Phase II, drug trials. It’s true of most intervention studies. It’s also true in my experience of studies in which you have two groups and any kind of judgement has to be made. For example, when examining fMRI scans or EEG samples for movement artefacts, it is important to anonymise the filenames and scramble the file order before you start so that you are not influenced by the group to which the file belongs.
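For what it’s worth, the anonymising-and-scrambling step is trivial to script. A minimal sketch (the file extension, folder names and naming scheme here are invented, not our actual lab pipeline):

```python
import csv
import random
import shutil
from pathlib import Path

def make_blinded_copies(source_dir, blinded_dir, key_file):
    """Copy data files under uninformative names, in scrambled order,
    writing the blinded-name -> original-name key to a file that the
    rater does not see until all ratings are complete."""
    files = sorted(Path(source_dir).glob("*.dat"))   # invented extension
    random.shuffle(files)                            # scramble presentation order
    Path(blinded_dir).mkdir(parents=True, exist_ok=True)
    with open(key_file, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["blinded_name", "original_name"])
        for i, src in enumerate(files, start=1):
            blinded_name = f"recording_{i:03d}.dat"
            shutil.copy(src, Path(blinded_dir) / blinded_name)
            writer.writerow([blinded_name, src.name])

# make_blinded_copies("raw_recordings", "blinded_recordings", "unblinding_key.csv")
```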
In other fields, other methods are used to minimise bias – for example random sampling procedures, blind raters, inter-rater reliability checks, etc.
So the proportion of studies in each field that use blinding is a terrible measure of how prevalent the appropriate use is. It is simply a function of the proportion of studies for which blinding is the appropriate methodology.
For many studies in cognitive neuroscience, there is no need for blinding procedures because the studies are within-subject measures, and the randomisation is done by the randomisation protocol in the computer. But you design your protocol as carefully as possible to eliminate bias.
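And the “randomisation protocol in the computer” is nothing exotic either – for a within-subject design it can be as simple as shuffling each participant’s trial list from a per-participant seed (a toy sketch, not any particular stimulus-presentation package):

```python
import random

def trial_order(conditions, repeats_per_condition, participant_seed):
    """Randomised trial list for one participant: each condition appears
    the same number of times, in an order reproducible from the seed."""
    trials = [c for c in conditions for _ in range(repeats_per_condition)]
    random.Random(participant_seed).shuffle(trials)
    return trials

print(trial_order(["congruent", "incongruent"], repeats_per_condition=3, participant_seed=101))
```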
William seems to have found someone who argues his case for him; yet he hasn’t actually checked whether blind tests are in fact part of routine science, even in physics, chemistry, engineering, environmental safety labs, medical labs, and many other testing and evaluation situations where people have to rely on good data to make critical decisions.
Control groups are used routinely in biology. Even high school biology lab projects involve learning how to use control groups and why. Clinical trials in medicine involve double blind testing routinely.
Chemistry courses dealing with qualitative and quantitative analysis teach the use of blind methods in analyzing samples.
Forensic chemistry and physics use blind techniques to “calibrate” laboratories to be sure that their techniques are reliable. Routine checks of labs using samples that are known only to those doing the tests of the labs are done during the normal routines of these laboratories in order to be sure they are detecting what is there and not detecting what is not there.
The experiments on the search for the Higgs boson done at the Large Hadron Collider at CERN were done on two different detectors, ATLAS and CMS, and the experimental results coming from each experimental group were kept in strict secrecy from each other for well over a year in order to prevent bias and mutual influence.
Calibration procedures in most physics and engineering laboratories are very often done using blind methods to be sure that they can be carried out reliably; and the training procedures for the technicians using such equipment also check that the protocols are carried out “by the book” so that every technician plus equipment is calibrated and standardized.
Even admission procedures in competitive educational programs are done blindly to be sure there is no gender, ethnic, or socio/political bias.
The military routinely trains its personnel with blind procedures in training exercises designed to make sure all personnel go through proper procedures and know their equipment thoroughly.
Anyone who has worked in any kind of laboratory, whether it is an academic lab, a commercial lab, a government lab, or a hospital lab knows that there are regular drills for training. All laboratories worth anything will be on a check list for blind testing at unannounced intervals.
The design of new equipment, such as the detectors at CERN, has all been through blind testing to see if researchers and analytical techniques actually detect what they purport to detect and don’t give false positives.
Exactly. So how does this Free Will Unit know which action is the most likely to reach its intent?
Oh, I see. You’re walking your earlier claim back. Now your point is that blind methodology is not appropriate for a lot of scientific research, apparently failing to remember that this is the whole original point I made – that it’s because of the materialist ideology that blind methodologies are not considered appropriate for so much of the physical sciences.
I agree with Mike and Lizzie that WJM is not aware of how much of science is conducted and when blinding is, and is not, appropriate. For example our lab routinely gets samples which are blinded to us for toxicity testing. On a daily basis we utilize extensive controls when we are testing for effluent toxicity using bioassay methodologies but we don’t blind those experiments because it is unnecessary to do so and adds nothing to the reliability of the data collected.
Last week I had some technicians aging some scale samples we collected during a fisheries study. I blinded those samples for size, weight, date of collection, and species so that the technicians would not be biased by the fish size or weight when making age determinations. At the same time they were working on that data collection I was conducting a series of experiments to determine blood-oxygen equilibria properties of whole blood collected from a species of native fish. There is no way to blind such an experiment nor is it necessary to do so when collecting this type of data.
Perhaps WJM will take this to heart and recognize that his self-professed ignorance of science has once again led him astray in his thinking on how experiments should be designed.
I’d like William to enlighten us as to how many of those blind parapsychology experiments detected psychic phenomena.
Once again William is right! Rah Rah William!
Please. “materialist ideology”.
Simply show a better way that works (i.e. is more productive) and converts will fall at your feet.
Even if you could provide an example, dismissals are not arguments.
Yeah, but that’s not me nor has it anything to do with my arguments. All you’re doing here is trying to divert attention from being called out on misrepresenting me and the arguments I engage in.
OMagain,
You know, what is the lesson to be learnt here precisely William? Going to write a letter to Nature? And say what?
It’s a war, remember? Apparently not!
So your feelings are hurt if you perceive that the correct amount of attention is not being paid to you “calling” someone out?
For shame!
Everybody, William has some important calling out to do! Please, concentrate on that for now…
BK,
You might find and read the Sheldrake article I excerpted from. He points out that there is quite a bit more blind methodology employed in corporate research and in certain other situations, but nothing that approaches that of psi research.
One of the reasons clinical drug studies use so much blind protocol is because of the placebo effect, which was found to be actually much higher when both the doctor and the patient believed the sugar pill to be the actual drug. The point being that when there is a perceived significant potential for bias affecting the outcome of research in terms of known physical connections and psychologically corrupted protocols, blind protocols are used. However, when there are no such anticipated problems, blind protocols are not used – because the materialist experimenters do not believe that they can affect the outcome of the experiment immaterially in a non-local, mind-moving-matter way.
Such information is easily obtained via a few minutes on google or bing.
Then you are aware that there is no evidence whatever for psychic phenomena.
Nothing. Nada.
Not one demonstration of any psychic ability in the history of the world that is independently reproducible.
You realize, of course, that any number of TV producers would step all over themselves to broadcast an authentic psychic.
Sorry, but your example is exactly what I’m talking about. You think there is no need to use blind protocols in certain scientific activities because you don’t believe that your mindset, expectations, or views will affect the raw data. It may not be possible to run blind protocols on a lot of scientific research, but it would be foolish to claim that there is not a lot of research that could be subject to blind protocols but is not simply because of the a priori view that mind cannot directly affect that which is being researched.
Can you support this assertion? I am aware of no such thing.
Feel free to offer up a counterexample.
ETA:
I’m waiting patiently for a non-bullshit demonstration of psychic abilities.
Duke University invested 20 years or more on research and got nada. Zilch.
Randi still has his million.
No, I’m not walking my claim back. Your claim was:
I replied:
Nothing that you have posted contradicts my response. Psi research is NOT the “only area where blind, double blind and triple-blind protocols are stringently employed” – as I said, such protocols are stringently employed in many other fields including my own. But clearly you only need “triple blind” protocols where you have three layers of humans in your protocol (in the case of the psi research: sitter, proxy sitter and medium); if you have two (patient; experimenter) then you need two, and if you have one (a single rater rating non-human data), then one. Even then, you often use two or more raters, blind to each other’s ratings, and then compare ratings.
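And “compare ratings” is itself a quantified step: for two blind raters making categorical judgements, agreement beyond chance is usually summarised with something like Cohen’s kappa. A generic sketch (the ratings below are invented, not from any study):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters' categorical ratings of the same items."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    expected = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)

# Two raters independently classifying the same 8 EEG segments:
rater1 = ["artefact", "clean", "clean", "artefact", "clean", "clean", "artefact", "clean"]
rater2 = ["artefact", "clean", "artefact", "artefact", "clean", "clean", "clean", "clean"]
print(cohens_kappa(rater1, rater2))   # ~0.47: moderate agreement beyond chance
```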
This is simply SOP in a vast number of experimental protocols. But not all, clearly, because not all experimental protocols require such blinding. In contrast, a non-blinded psi study simply couldn’t be published.
So attempting to compare the stringency of different fields by comparing the number of studies in which blind ratings were employed as a proportion of the total number of studies is ludicrous.
And your claim remains, as it was when you made it, false.
Much more pertinent is the quality of the blinding. In the study I looked at with the mediums, there was a glaring hole in the protocol.
Since Murray has established Skeptical Inquirer as an authority on such research, he might open any issue and see what they have to say about the results of such research.
They are, after all, affiliated with Randi and his offer of one million dollars for any independently verified instance of psychic ability.
No; you are simply attempting to piss people off and draw all attention to your narcissistic self.
You really don’t understand how science is done; and you don’t want to understand. You apparently believe that your self-enforced, arrogant ignorance absolves you of any responsibility.
You seem to think that all you have to do is analyze other people’s logic and “argue” by selectively quote mining “authorities.”
Unfortunately you cannot do that while remaining totally out of touch with reality.
You really don’t have a clue.
Hmmmm…
Nobody thinks this, William. As I said, your allegation is a slur on scientists that is without foundation.
Blinding protocols are SOP in a great many domains of science, and are taught, together with all the difficulties of achieving good blinding (something those medium experimenters don’t seem to be aware of). I teach it myself. Indeed I’m preparing some slides on that very topic right now.
And rater blinding is just the tip of the iceberg. Vast amounts of effort in science go into designing protocols that will minimise experimenter bias, and even then, scientists are taught to be wary (again something that has escaped many psi researchers) of data that entitle you to reject the null, but don’t necessarily support your conclusion. For instance, I’m quite sure that in the mediumship study the null was properly rejected. The probability of observing that result under the null that the sitters would be unable to distinguish between a reading intended for them and a reading not intended for them is very low. However, that does not necessarily support the hypothesis of the study. The issue is: how did the students know which reading was intended for them? One possible answer is that they picked the one that seemed suited to a parent, if the discarnate was a parent, and a peer, if the discarnate was a peer. And that the mediums were able to guess, at well beyond chance, from the name given to them, whether the discarnate was more likely to be a parent or a peer.
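To be clear about what “rejecting the null” amounts to here, it is usually nothing more exotic than a binomial calculation. With invented numbers (say 30 sitters each choosing between two readings, 23 choosing the “right” one), the probability under chance is:

```python
from math import comb

def tail_probability(successes, trials, chance=0.5):
    """P(at least `successes` out of `trials`) under the chance rate."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(successes, trials + 1))

# Invented figures, purely to show the shape of the calculation:
print(tail_probability(23, 30))   # ~0.0026 - low enough to reject the null,
                                  # but silent on *why* the sitters succeeded.
```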
A researcher in “materialist” science would report this as a caveat, and would probably not have let such a confound enter the study in the first place. Shouting “triple blind!” isn’t any kind of guarantee that you have done a sound experiment. And claiming that any scientist here doesn’t “believe that your mindset, expectations, or views will affect the raw data” is idiotic. Of course we do.
That’s why we do double blind studies.
You’ve provided no resources, no quotes and no links to support the claim that there is zero evidence in support of the existence of psi phenomena. I’ll take this to mean that you cannot back up your bald assertion about psi research.
Hmmmm is right. In his case, it works either way. 🙂
Liz, I know we aren’t allowed to imply dishonesty in other TSZ posters, but dishonesty is the norm in parapsychology. Always has been. The only certifiably honest people in the business are stage performers who call their work illusions.
Murray’s Skeptical Inquirer does monthly exposés on fraud and incompetence in psychic research.
It’s also well known (and has been experimentally verified) that practicing scientists are less capable of detecting cheating than are stage magicians.
Cheating is a real problem when you are dealing with a phenomenon that has a long history of being the product of cheats. A guy in my home town bilked some major corporations out of millions of dollars for a fraudulent bit of internet hardware. Double and triple blind experiments are of no use if the people doing the experiment are dishonest.
ETA: Cheating is always a possibility, and I’m sure Murray will be happy to recite cases of cheating in science.
Liz,
What I said about employing stringent blind protocols was in regard to the Sheldrake article – the only field of study where published work, interviews with researchers and examination of academic curricula revealed stringent adherence to blind protocols (and, in some cases, even knowledge of how to use them) was psi research.
I agree that stringent blind protocols are used in drug testing, but I stand by my point that even those blind protocols that are used are fabricated from the materialist perspective – they are certainly not engineered with the idea that experimenter expectations and mindsets can actually affect the physical nature of that which is being examined.
I am quite sure he won’t do the same for ID/creationism.
Because cheaters and frauds exist in a field, doesn’t mean the field itself is fraudulent. It just means you’re trying to poison the well and use sweeping, negative generalizations to divert from the fact that you cannot support your assertion.
Sheldrake, I have to say, designs some of the worst, most transparent protocols I have ever seen. To claim that researchers outside psi research don’t use a methodology that “approaches that of psi research” is simply bullshit frankly. You could drive a coach and horses through Sheldrake’s telepathy experiments.
Where do you read this stuff, William? It is so wrong it’s almost comical.
First of all: how would you “double blind” an experiment to prevent an experimenter affecting, say, the height of mercury in a tube? How would you single blind it?
Do you have any idea what you are talking about?
Now, what you might do is discover that there were discrepancies between the readings of different experimenters. This happens all the time, not because the experimenters themselves inadvertently affect the height of the mercury, but because they may have biases like tending to round up, or tending to round down, or picking a different part of the meniscus. In such circumstances, multiple raters are used, blind to the readings of the other raters. If the ratings don’t show a symmetrical distribution about the mean, then you suspect bias on the part of one of the raters, and discard those ratings.
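Concretely, the check can be as mundane as looking at each rater’s signed deviation from the item-wise consensus – something like this sketch (the readings are invented):

```python
from statistics import mean

def rater_bias(readings_by_rater):
    """For each rater, report the mean signed deviation from the item-wise
    consensus; a rater who consistently rounds up (or down) shows a
    deviation well away from zero."""
    raters = list(readings_by_rater)
    n_items = len(readings_by_rater[raters[0]])
    consensus = [mean(readings_by_rater[r][i] for r in raters) for i in range(n_items)]
    return {r: mean(readings_by_rater[r][i] - consensus[i] for i in range(n_items))
            for r in raters}

# Three blind raters reading the same five mercury columns (mm, invented):
print(rater_bias({
    "A": [760.2, 761.0, 759.8, 760.5, 760.1],
    "B": [760.3, 761.1, 759.9, 760.6, 760.2],
    "C": [761.0, 761.8, 760.7, 761.3, 760.9],   # consistently reads high
}))
```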
But we do these things. We know that confirmation bias, for example, is a very real effect, whether or not that bias actually physically affects the observed, or whether it merely affects the observation. That’s why we take such care to minimise it. By blinding, repeated measurements, statistical controls and all kinds of other things.
We are also aware that the observer, in many cases, does physically affect the observed, whether you are measuring a proton, or a child interacting with her mother. And again, we take account of this, and do our best to control for it, and report our results with appropriate caveats in case we have not succeeded.
Your bias against science is showing, William.