
Alternative Therapies 5(3), 88-91, May 1999
by Rupert Sheldrake
Introduction
In everyday life, as in scientific research, "our beliefs, desires and expectations can influence, often subconsciously, how we observe and interpret things", as a recent article in the Skeptical Inquirer expressed it.(note 1) In experimental psychology and clinical research, these principles are widely recognized, which is why experiments in these subjects are often carried out under blind or double-blind conditions. There is overwhelming experimental evidence that experimenters' attitudes and expectations can indeed influence the outcome of experiments.(note 2)
In single-blind experiments, an investigator does not know which samples or treatments are which. But when human subjects are involved, as in medicine and experimental psychology, double-blind procedures can be used to guard against the expectancy of both subjects and investigators. In a double-blind clinical trial, for example, some patients are given tablets of a drug and others are given similar-looking placebo tablets, pharmacologically inert. Neither researchers nor patients know who gets what.
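By way of illustration, the following sketch (in Python, with hypothetical patient identifiers) shows one simple way such an allocation can be organized: patients are assigned at random to coded arms, and only a sealed key, held apart from both patients and investigators, links the codes to drug or placebo.

```python
import random

def allocate_double_blind(patient_ids, seed=None):
    """Assign each patient at random to one of two coded arms ('A' or 'B').
    The assignments can circulate freely; the key linking codes to
    treatments is returned separately and kept sealed until analysis.
    (Simplified: a real trial would normally use block randomization.)"""
    rng = random.Random(seed)
    key = {"A": "drug", "B": "placebo"}
    assignments = {pid: rng.choice(sorted(key)) for pid in patient_ids}
    return assignments, key

# Hypothetical patient identifiers:
assignments, sealed_key = allocate_double_blind(["P001", "P002", "P003", "P004"], seed=1)
print(assignments)  # e.g. {'P001': 'A', 'P002': 'B', ...}
```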
In such experiments, the largest placebo effects usually occur in trials in which both patients and physicians believe a powerful new treatment is being tested.(note 3) The inert tablets tend to work like the treatment being studied, and can even induce its characteristic side-effects.(note 4) Likewise, experimenter expectancy effects are well known in experimental psychology, and also show up in experiments on animal behavior.(note 2)
In a fascinating historical account, Kaptchuk (note 5) has shown that blind assessment first began in the late 18th century "as a tool for fraud detection mounted by elite mainstream scientists and physicians to challenge the suspected delusions or charlatanism of unconventional medicine." Some of the first experiments were carried out to evaluate mesmerism, and were literally conducted with blindfolds. They took place in France at the house of Benjamin Franklin, the American minister plenipotentiary, who was head of a commission of inquiry appointed by King Louis XVI.
Blind assessment had been adopted by homeopaths by the mid-nineteenth century, and by the end of that century it had been taken up by psychologists and psychical researchers. But it was not until the 1930s that the potential of blind techniques combined with no-treatment control groups in clinical trials was widely recognized by mainstream medical researchers, and only after World War II did blind assessment in randomized controlled trials become a standard and normative technique.
In medicine and psychology, blind experimentation began as a deterrent against the unconventional, but its general importance for orthodox research has since been recognized and internalized. Although researchers in unconventional medicine and their skeptical critics have been aware of the possible effects of expectation and belief for over two hundred years, and conventional medical researchers and psychologists for decades, how widely has this awareness spread throughout the scientific community? What about the beliefs and expectations of experimenters in other branches of science? No one seems to know how important they might be. There seems to be a tacit assumption that scientists in orthodox fields of inquiry are immune from the general principle that "beliefs, desires and expectations can influence, often subconsciously, how we observe and interpret things".
In this article I attempt, by means of two surveys, to quantify the attention or inattention to possible experimenter effects in different fields of science. The first survey was directed toward published reports in the scientific literature, to see how frequently blind procedures were used in different branches of science. In the second survey, individuals in 11 British university science departments were asked whether blind methodologies were practiced or taught. The results reveal that blind methodologies are rarely if ever practiced or taught in physics, chemistry, or much of biology. I conclude this article by proposing a simple experimental procedure for assessing the importance of experimenter expectancy effects in areas where their potential influence has so far been neglected.
Survey Methods
Literature survey
A survey of the scientific literature was conducted between October 1996 and April 1998. Leading journals were selected in different fields of experimental science, and the most recent issues available in libraries were examined. The contents pages were photocopied and used to record the category of each paper listed on them. The papers were then examined in detail, with particular attention to the Methods sections, and classified into one of the following categories:
- Not applicable: papers that did not involve experimental investigations, for example theoretical or review articles.
- Blind or double-blind methodologies used.
- Blind or double-blind methodologies not used.
On the basis of this information, the total number of experimental papers surveyed in each journal and the number involving blind techniques were listed, as shown in Table 1.
The literature survey was carried out by Dr Amanda Jacks and me.
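For anyone wishing to repeat or extend this survey, the tallying itself is straightforward; the sketch below shows one way it could be done in Python. The journal names and categories in it are placeholders, not the survey data.

```python
from collections import Counter

def tally(records):
    """Count experimental papers and blind papers per journal.
    Each record is a (journal, category) pair, where category is
    'not applicable', 'blind', or 'not blind'."""
    experimental = Counter()
    blind = Counter()
    for journal, category in records:
        if category == "not applicable":
            continue  # theoretical and review articles are excluded
        experimental[journal] += 1
        if category == "blind":
            blind[journal] += 1
    return {j: (experimental[j], blind[j]) for j in experimental}

# Placeholder records, not the actual survey data:
records = [
    ("Journal X", "not applicable"),
    ("Journal X", "blind"),
    ("Journal X", "not blind"),
    ("Journal Y", "not blind"),
]
print(tally(records))  # {'Journal X': (2, 1), 'Journal Y': (1, 0)}
```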
Table 1
Numbers of papers reviewed and the number involving blind or double-blind methodologies in a range of scientific journals.*
Journal | Volumes (and Parts) | Number of Papers | Blind Methods

Physical Sciences
Journal of the American Chemical Society | 118 (39-41) | 86 | 0
Journal of Applied Physics | 80 (11) | 76 | 0
Journal of Physics: Condensed Matter | 8 (48-9) | 75 | 0
Totals | | 237 | 0

Biological Sciences
Biochemical Journal | 318-9 (1-3; 1) | 191 | 0
Cell | 87 (4-5) | 29 | 0
Heredity | 76 (1-5) | 58 | 0
Journal of Experimental Botany | 46-7 (295-302) | 132 | 0
Journal of Molecular Biology | 262 (2-5) | 48 | 0
Journal of Physiology | 497-8 (1; 1-2) | 145 | 4
Nature | 383-4 (6600-10) | 108 | 0
Proceedings of the National Academy of Sciences (US) | 93 (22-3) | 203 | 3
Totals | | 914 | 7 (0.8%)

Medical Sciences
American Journal of Medicine | 103-104 (5-6; 1-3) | 45 | 22
Annals of Internal Medicine | 128 (2-7) | 41 | 12
British Journal of Clinical Pharmacology | 42 (3-5) | 49 | 4
British Medical Journal | 313 (7061-6) | 53 | 2
New England Journal of Medicine | 338 (9-16) | 39 | 15
Totals | | 227 | 55 (24.2%)

Psychology and Animal Behaviour
Animal Behaviour | 52 (1-4) | 72 | 2
British Journal of Psychology | 87 (1-3) | 21 | 0
Journal of Experimental Psychology: General | 125 (1-3) | 23 | 2
Human Perception and Performance | 22 (5-6) | 27 | 3
Totals | | 143 | 7 (4.9%)

Parapsychology
Journal of the Society for Psychical Research (1993-6) | 59-61 (830-45) | 14 | 11
Journal of Parapsychology (1994-6) | 58 (3) - 60 (2) | 13 | 12
Totals | | 27 | 23 (85.2%)
*Only papers reporting experimental results were included in this survey; theoretical papers and review articles were excluded. All publications appeared from 1996 through 1998 unless otherwise indicated.
Survey of university science departments
A survey of science departments at 11 British universities was carried out by telephone by my research assistant, Jane Turney, an experienced interviewer. She spoke either to professors in these departments or to other members of the academic teaching staff. She introduced herself, explained that she was carrying out a survey on the use of blind techniques in the "hard" sciences, and asked two questions:
1. Do you ever use blind experimental methodologies in your department?
2. Are students taught about blind methodologies and experimenter effects in general?
The results of this survey were tabulated and are shown in Table 2.
Table 2
A survey of science departments.*

Department | Number Surveyed | Blind Methods Used | Blind Methods Taught

Physical Sciences
Inorganic Chemistry | 7 | 0 | 0
Organic Chemistry | 7 | 0 | 0
Physics | 9 | 1 | 1

Biological Sciences
Biochemistry | 10 | 1 | 2
Molecular Biology | 6 | 1 | 0
Genetics | 8 | 4 | 4
Physiology | 8 | 6 | 6

*Members of the academic staff were interviewed by telephone and asked the following questions: (1) Do you ever use blind experimental methodologies in your department? (2) Are students taught about blind methodologies and experimenter effects in general? The survey was carried out between December 1996 and February 1997 at the following British universities: Bristol, Cambridge, Edinburgh, Exeter, Imperial College (London), Manchester, Newcastle, Oxford, Reading, Sheffield, and University College (London).
Results and Discussion
The widespread neglect of possible experimenter effects
The use of blind procedures in different branches of science gives a measure of the importance researchers in that field attach to experimenter effects. In Table 1, I summarize the results of a survey of papers published recently in a range of scientific journals. In the physical sciences, no blind experiments were found among the 237 papers reviewed. In the biological sciences, there were 7 blind experiments out of 914 (0.8%); in psychology and animal behavior, 7 out of 143 (4.9%); and in the medical sciences, 55 out of 227 (24.2%). By far the highest proportion, 23 out of 27 papers (85.2%), was in parapsychology.
Of the 55 reports in the medical journals that involved blind methods, only 25 (11.0% of the total number of papers) were double-blind trials; the other 30 employed single-blind methods, with one or more of the investigators carrying out blind evaluations or analyses. The majority of the papers did not involve blind methods at all.
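These percentages follow directly from the totals in Table 1; the short Python script below simply reproduces the arithmetic.

```python
# Totals taken from Table 1: (number of papers, papers using blind methods)
totals = {
    "physical sciences": (237, 0),
    "biological sciences": (914, 7),
    "psychology and animal behaviour": (143, 7),
    "medical sciences": (227, 55),
    "parapsychology": (27, 23),
}

for field, (papers, blind) in totals.items():
    print(f"{field}: {blind}/{papers} = {100 * blind / papers:.1f}%")

# Of the 227 medical papers, 25 reported double-blind trials:
print(f"double-blind share of medical papers: {100 * 25 / 227:.1f}%")  # 11.0%
```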
The survey of science departments at 11 British universities confirmed the findings of the literature survey: blind procedures are rare in most branches of the physical and biological sciences. They were neither used nor taught in 22 out of 23 physics and chemistry departments, nor in 14 out of 16 biochemistry and molecular biology departments (Table 2). By contrast, blind methodologies were practiced and taught in 4 out of 8 genetics departments and in 6 out of 8 physiology departments. Even in most of these departments they are used occasionally rather than routinely, and are mentioned only briefly in lectures.
When academic scientists were interviewed for this survey, some did not know what was meant by the phrase "blind methodology". Most were aware of blind techniques, but thought that they were necessary only in clinical research or psychology. They believed that their principal purpose was to avoid biases introduced by human subjects, rather than by experimenters. The commonest view expressed by physical and biological scientists was that blind methodologies are unnecessary outside psychology and medicine because "nature itself is blind", as one professor put it. Some admitted the theoretical possibility of bias by experimenters, but thought it of little importance in practice. And one chemist added, "Science is difficult enough as it is without making it even harder by not knowing what you are working on."
Only in exceptional cases are blind techniques used routinely. This survey revealed three examples, all of which involved industrial contracts under which the university scientists were required to analyze or evaluate coded samples without knowing their identity.
Limitations on the use of blind methodologies
In the biological and physical sciences, the near-total absence of blind techniques from published research reflects the assumption by researchers, reviewers, and journal editors in these fields that such techniques are unnecessary. They are not part of their scientific culture. There may be many situations in which blind techniques would be desirable and informative, but practically no attention has yet been paid to this possibility.
By contrast, blind methods are part of the culture of medical research, both conventional and unconventional. In this context, the publication of so many papers in mainstream medical journals that do not involve blind techniques indicates that these methods are not always appropriate or applicable. I have not carried out an analysis of the situations where blind methods are not used in mainstream medical research, because my primary focus was on the biological and physical sciences. But an analysis of this kind would probably be very illuminating.
One kind of medical research not involving blind methods is the study of case histories. In a recent paper in this journal, Lukoff and colleagues (note 6) observed that: "The highly regarded randomized controlled clinical trial, though often powerful and useful, is neither feasible nor ideal for understanding the effects of many unconventional treatment approaches". My survey suggests that blinded, randomized, controlled trials are also neither feasible nor ideal for studying many conventional treatment approaches.
A Simple Experiment to Test for Experimenter Effects
Although most "hard" scientists take it for granted that blind techniques are unnecessary in their own field of study, this assumption is so fundamental that it deserves to be tested empirically.(note 7)
In all branches of experimental science we can ask: do the expectations of researchers introduce a bias, conscious or unconscious, into the way they carry out the experimental procedures, make their observations or select data?
I propose the following procedure to test this question. Take a typical experiment that involves a test sample and a control - for example, the comparison of an inhibited enzyme with an uninhibited control in a biochemical experiment. Then carry out the experiment both under open conditions and under blind conditions, in which the samples are labeled only A and B. In student practical classes, for instance, half the class would do the experiment blind, while the other half would know which sample is which, as usual.
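To illustrate how the coding might be organized in a practical class, the sketch below (in Python, with hypothetical student names and sample labels) splits the class at random into open and blind halves and assigns the neutral labels A and B to the two preparations; the key is withheld from the blind half until the results are in.

```python
import random

def set_up_class(students, seed=None):
    """Split a class at random into 'open' and 'blind' halves, and code the
    two preparations as A and B for the blind half. Illustrative only."""
    rng = random.Random(seed)
    students = list(students)
    rng.shuffle(students)
    half = len(students) // 2
    groups = {"open": students[:half], "blind": students[half:]}
    labels = ["A", "B"]
    rng.shuffle(labels)
    key = {labels[0]: "inhibited enzyme", labels[1]: "uninhibited control"}
    return groups, key  # the key is kept by the organizer, not the blind group

# Hypothetical class list:
groups, key = set_up_class(["Ann", "Ben", "Cara", "Dev", "Eve", "Fred"], seed=2)
```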
If no significant experimenter effects are found in such tests, then for the first time there will be evidence to support the belief that blind techniques are unnecessary. On the other hand, significant differences between the results under open and blind conditions would reveal the existence of experimenter effects. Further research would then be needed to find out whether the experimenters' expectations were influencing experimental systems themselves, or merely the way that the data were recorded or selected.
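How such a comparison might be analyzed is, of course, open to choice; as one possibility, the sketch below applies Welch's t-test to the open and blind results, using made-up values and assuming the SciPy library is available.

```python
from scipy import stats  # assumes SciPy is installed

# Hypothetical percentage-inhibition values reported by the two halves of
# the class; a real test would use the values actually recorded.
open_results = [62.1, 64.3, 59.8, 63.5, 61.0, 65.2]
blind_results = [57.9, 60.4, 58.8, 59.1, 61.3, 58.2]

# Welch's t-test: do the open and blind conditions give different means?
t_stat, p_value = stats.ttest_ind(open_results, blind_results, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A significant difference would point to an experimenter effect; further
# work would be needed to tell whether it arises in the experimental system
# itself or in the way the data are recorded or selected.
```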
The more independent investigations, the better. It cannot be healthy for the supposed objectivity of regular science to rest on untested assumptions.(note 8) This is an inquiry in which the critical skills of skeptics could play a major role. The use of blind methodologies, pioneered by skeptics in the field of unconventional medicine, has now been internalized within medicine and psychology, resulting in improved rigor and a more sophisticated awareness of the effects of experimenter bias. The so-called hard sciences have largely escaped skeptical inquiry, but there seems no good reason why they should continue to be granted this immunity.(note 9)
Perhaps it will turn out, after all, that "hard" scientists need not bother with blind techniques. They may indeed be exceptions to the principle that "our beliefs, desires and expectations can influence, often subconsciously, how we observe and interpret things." On the other hand, they may be like everybody else, including researchers in psychology and medicine.
Acknowledgments
I am grateful to Dr Amanda Jacks for her help with the literature review and to Jane Turney for carrying out the university survey. This work was supported by the Institute of Noetic Sciences, Sausalito, CA and the Lifebridge Foundation of New York.
References
1. Mussachia M. Objectivity and repeatability in science. Skeptical Inquirer 1995; 19(6): 33-35, 56.
2. Rosenthal R. Experimenter Effects in Behavioral Research. New York: John Wiley; 1976.
3. Roberts AH, Kewman DG, Mercier L, Hovell H. The power of nonspecific effects in healing: implications for psychosocial and biological treatments. Clinical Psychology Review 1993; 13: 375.
4. White L, Tursky B, Schwartz G, eds. Placebo: Theory, Research and Mechanisms. New York: Guilford Press; 1985.
5. Kaptchuk TJ. Intentional ignorance: a history of blind assessment in medicine. Bulletin of the History of Medicine 1998; in press.
6. Lukoff D, Edwards D, Miller M. The case study as a scientific method for researching alternative therapies. Alternative Therapies in Health and Medicine 1998; 4: 44-52.
7. Sheldrake R. Seven Experiments that Could Change the World. London: Fourth Estate; 1994.
8. Sheldrake R. Experimenter effects in scientific research: how widely are they neglected? Journal of Scientific Exploration 1998; 12(1): 73-78.
9. Sheldrake R. Could experimenter effects occur in the physical and biological sciences? Skeptical Inquirer 1998; 22(3): 57-58.