Journal of Scientific Exploration 12, 73-78, 1998
by Rupert Sheldrake

In experimental psychology and clinical research, there is overwhelming evidence that experimenters' attitudes can influence the outcome of experiments (Rosenthal, 1976). The results tend to be biased in the direction of the experimenters' expectations. In order to guard against these subtle and pervasive effects, experiments can be conducted under single-blind or double-blind conditions. In single-blind experiments, the investigator does not know which samples or treatments are which. But when human subjects are involved, as in medicine and experimental psychology, double-blind procedures are necessary to guard against the expectancy of both subjects and investigators. In a double-blind clinical trial, for example, some patients are given tablets of a drug and others are given similar-looking but pharmacologically inert placebo tablets. Neither researchers nor patients know who gets what.

In such experiments, the largest placebo effects usually occur in trials in which both patients and physicians believe a powerful new treatment is being used (Roberts et al., 1993). The inert tablets tend to work like the treatment being tested, and can even induce its characteristic side-effects (White et al., 1985). Likewise, experimenter expectancy effects are well known in experimental psychology, and also show up in experiments on animal behaviour (Rosenthal, 1976).

How widespread are experimenter expectancy effects in other branches of science? No one seems to know, and there is often a tacit assumption that they are negligible.

I have attempted to quantify the attention or inattention to possible experimenter effects in different fields of science by means of two surveys. The first was of the proportion of published experiments in which blind procedures were used. In the second, university scientists were asked whether blind methodologies were practised or taught in their departments.

The results reveal that blind methodologies are rarely, if ever, practised or taught in physics, chemistry and much of biology. I conclude by proposing a simple experimental procedure for assessing the importance of experimenter expectancy effects in areas where their possible influence is neglected.

Methods

Survey of scientific literature

A survey of scientific literature was conducted between October 1996 and February 1997. Leading journals were selected in different fields of experimental science, and the most recent numbers available in libraries were examined. The Contents pages were photocopied, and were used for recording the category of each paper listed on them. The papers were then examined in detail, with particular attention to the Methods section, and classified into one of the following categories:

  1. Not applicable: papers that did not involve experimental investigations, for example theoretical or review articles.
  2. Blind or double-blind methodologies used.
  3. Blind or double-blind methodologies not used.

On the basis of this information, the total number of experimental papers surveyed in each journal and the number involving blind techniques were listed, as shown in Table 1.

This literature survey was carried out by myself and by Dr Amanda Jacks.

Table 1
Numbers of papers reviewed and the number involving blind or double-blind methodologies in a range of scientific journals.*

Journal | Volumes (and Parts) | Number of Papers | Blind Methods

Physical Sciences
Journal of the American Chemical Society | 118 (39-41) | 86 | 0
Journal of Applied Physics | 80 (11) | 76 | 0
Journal of Physics: Condensed Matter | 8 (48-9) | 75 | 0
Totals | | 237 | 0

Biological Sciences
Biochemical Journal | 318-9 (1-3;1) | 191 | 0
Cell | 87 (4-5) | 29 | 0
Heredity | 76 (1-5) | 58 | 0
Journal of Experimental Botany | 46-7 (295-302) | 132 | 0
Journal of Molecular Biology | 262 (2-5) | 48 | 0
Journal of Physiology | 497-8 (1;1-2) | 145 | 4
Nature | 383-4 (6600-10) | 108 | 0
Proceedings of the National Academy of Sciences (US) | 93 (22-3) | 203 | 3
Totals | | 914 | 7 (0.8%)

Medical Sciences
British Journal of Clinical Pharmacology | 42 (3-5) | 49 | 4
British Medical Journal | 313 (7061-6) | 53 | 2
Totals | | 102 | 6 (5.9%)

Psychology and Animal Behaviour
Animal Behaviour | 52 (1-4) | 72 | 2
British Journal of Psychology | 87 (1-3) | 21 | 0
Journal of Experimental Psychology: General | 125 (1-3) | 23 | 2
Human Perception and Performance | 22 (5-6) | 27 | 3
Totals | | 143 | 7 (4.9%)

Parapsychology
Journal of the Society for Psychical Research (1993-6) | 59-61 (830-45) | 14 | 11
Journal of Parapsychology (1994-6) | 58 (3) - 60 (2) | 13 | 12
Totals | | 27 | 23 (85.2%)

*Only papers reporting experimental results were included in this survey; theoretical papers and review articles were excluded. All publications appeared in 1996-7 unless otherwise indicated.

Survey of university science departments

A survey of science departments at 11 British universities was carried out by telephone by my research assistant, Jane Turney, an experienced interviewer. She spoke either to professors in these departments or to other members of the academic teaching staff. She first introduced herself, explained that she was carrying out a survey on the use of blind techniques in the hard sciences, and asked two questions:

  1. Do you ever use blind experimental methodologies in your department?
  2. Are students taught about blind methodologies and experimenter effects in general?

With the consent of the interviewees, their replies were tape-recorded and later transcribed.

The results of this survey were tabulated and are shown in Table 2.

Table 2
A Survey of Science Departments*

Members of the academic staff were interviewed by telephone and asked the following questions:

  1. Do you ever use blind experimental methodologies in your department?
  2. Are students taught about blind methodologies and experimenter effects in general?
Department | Number Surveyed | Blind Methods Used | Blind Methods Taught

Physical Sciences
Inorganic Chemistry | 7 | 0 | 0
Organic Chemistry | 7 | 0 | 0
Physics | 9 | 1 | 1

Biological Sciences
Biochemistry | 10 | 1 | 2
Molecular Biology | 6 | 1 | 0
Genetics | 8 | 4 | 4
Physiology | 8 | 6 | 6

*Results of a survey of science departments carried out between December 1996 and February 1997 at the following British universities: Bristol, Cambridge, Edinburgh, Exeter, Imperial College (London), Manchester, Newcastle, Oxford, Reading, Sheffield, University College (London).

Results and Discussion

The widespread neglect of possible experimenter effects

The use of blind procedures in different branches of science gives a measure of the importance researchers in that field attach to experimenter effects. In Table 1, I summarize the results of a survey of papers published recently in a range of scientific journals. In the physical sciences, no blind experiments were found among the 237 papers reviewed. In the biological sciences, there were 7 blind experiments out of 914 (0.8%); in the medical sciences, 6 out of 102 (5.9%); and in psychology and animal behaviour 7 out of 143 (4.9%). By far the highest proportion (85.2%) was in parapsychology.

A survey of science departments at 11 British universities confirmed that blind procedures are rare in most branches of the physical and biological sciences. They are neither used nor taught in 22 of the 23 physics and chemistry departments surveyed, nor in 14 of the 16 biochemistry and molecular biology departments (Table 2). By contrast, blind methodologies are practised and taught in 4 out of 8 genetics departments, and in 6 out of 8 physiology departments. In most of these departments they are used occasionally rather than routinely, and are mentioned only briefly in lectures.

When academic scientists were interviewed for this survey, some did not know what was meant by the phrase "blind methodology". Most were aware of blind techniques, but thought that they were necessary only in clinical research or psychology. They believed that their principal purpose was to avoid biases introduced by human subjects, rather than by experimenters. The commonest view expressed by physical and biological scientists was that blind methodologies are unnecessary outside psychology and medicine because "nature itself is blind", as one professor put it. Some admitted the theoretical possibility of bias by experimenters, but thought it of little importance in practice. And one chemist added, "Science is difficult enough as it is without making it even harder by not knowing what you are working on."

Only in exceptional cases are blind techniques used routinely. This survey revealed three examples. All three involved industrial contracts, according to which the university scientists were required to analyze or evaluate coded samples without knowing their identity.

A simple experiment to test for experimenter effects

Although most "hard" scientists take it for granted that blind techniques are unnecessary in their own field of study, this assumption is so fundamental that it deserves to be tested empirically (Sheldrake, 1994). In all branches of experimental science we can ask: do the expectations of researchers introduce a bias, conscious or unconscious, into the way they carry out the experimental procedures, make their observations or select data?

I propose the following procedure. Take a typical experiment that involves a test sample and a control, for example the comparison of an inhibited enzyme with an uninhibited control in a biochemical experiment. Then carry out the experiment both under open conditions, and also under blind conditions, in which the samples are labelled A and B. In student practical classes, for instance, half the class would do the experiment blind. The other half would know which sample is which, as usual.
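As a concrete illustration of how the coding might be handled, the labelling of the two samples could be done by a third party, or by a short script that records the key and reveals it only after all the measurements have been made. The sketch below is purely illustrative; the file name, function names and two-sample layout are my own assumptions, not part of the procedure proposed above.

```python
import json
import random

def blind_code_samples(sample_ids, key_file="blind_key.json", seed=None):
    """Randomly assign the labels 'A' and 'B' to two samples (e.g. an
    inhibited enzyme preparation and its uninhibited control) and store
    the key in a file that is opened only after data collection."""
    rng = random.Random(seed)
    labels = ["A", "B"]
    rng.shuffle(labels)
    key = dict(zip(labels, sample_ids))   # e.g. {"A": "control", "B": "inhibited"}
    with open(key_file, "w") as f:
        json.dump(key, f)
    return labels                          # the experimenter sees only 'A' and 'B'

def reveal_key(key_file="blind_key.json"):
    """Open the stored key once all measurements have been recorded."""
    with open(key_file) as f:
        return json.load(f)

if __name__ == "__main__":
    # Half of a practical class would receive tubes coded in this way;
    # the other half would work under open conditions as usual.
    blind_code_samples(["inhibited", "control"], seed=1)
    print(reveal_key())
```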

If in such tests there are no significant experimenter effects, then for the first time there will be evidence to support the belief that blind techniques are unnecessary. On the other hand, significant differences between the results under open and blind conditions would reveal the existence of experimenter effects. Further research would then be needed to find out whether the experimenters' expectations were influencing experimental systems themselves, or merely the way that the data were recorded or selected.
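To give a sense of how such a comparison might be analysed, suppose each student (or each run) yields a single number, the measured difference between the test sample and the control, and that these numbers are grouped by whether the work was done open or blind. A two-sample t-test is one simple way to ask whether the two conditions differ; the paper does not prescribe a particular test, and the numbers below are invented purely for illustration.

```python
from scipy import stats

# Hypothetical replicate results: each value is the difference measured
# between the test sample and its control in one run of the experiment.
open_condition  = [12.1, 11.8, 12.6, 13.0, 12.4]   # experimenters knew which sample was which
blind_condition = [10.9, 11.2, 10.7, 11.5, 11.0]   # samples coded only as A and B

# Does the measured effect depend on whether the work was done open or blind?
t_stat, p_value = stats.ttest_ind(open_condition, blind_condition)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A significant difference would point to an experimenter expectancy effect;
# its absence would support the usual assumption that blind techniques are
# unnecessary in this kind of experiment.
```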

Acknowledgements

I am grateful to Dr Amanda Jacks for her help with the literature review and to Jane Turney for carrying out the university survey. This work was supported by the Institute of Noetic Sciences, Sausalito, CA and the Lifebridge Foundation of New York.

References

Roberts, A.H., Kewman, D.G., Mercier, L. & Hovell, H. (1993) The power of nonspecific effects in healing: implications for psychosocial and biological treatments. Clinical Psychology Review 13, 375.

Rosenthal, R. (1976) Experimenter Effects in Behavioral Research. New York: John Wiley.

Sheldrake, R. (1994) Seven Experiments that Could Change the World, Chapter 7. London: Fourth Estate.

White, L., Tursky, B. & Schwartz, G. (eds) (1985) Placebo: Theory, Research and Mechanisms. New York: Guilford Press.