This blog has been redesigned! You might have noticed the new tagline.
In 1979, Charles Lord, Lee Ross, and Mark Lepper divided 48 undergraduates into two groups based on their beliefs about the death penalty: did it work as a deterrent?
The students sat down with a researcher (blinded to their initial beliefs about the death penalty) and were asked to select an index card containing information about a research study that investigated whether or not capital punishment results in an overall decrease in violent crime.
It’s kind of a sport to spot the trick researchers have played on their unwitting sophomore psychology majors, and here it is: each round of index cards contained ten identical cards, so the student had no real choice. They were drawing either from an identical hand of ten cards summarizing a study that supported capital punishment as a deterrent (pro-deterrence) or from an identical hand summarizing a study that did not (anti-deterrence). That is, within each group, some of the students began by seeing a study that purported to agree with them, and some saw a study that disagreed. [These methods get hairy; there’s a chart below.]
The student read the index card and was then given more information:
The descriptions gave details of the researchers’ procedure, reiterated the results, mentioned several prominent criticisms of the study “in the literature,” listed the authors’ rebuttals of some of the criticisms and depicted the data in table form and graphically.
Given all this information, the researchers asked the students to analyze the research: how methodologically sound was the study? How convincing did they find it? For each question, participants answered on a scale from -8 (completely unsound methods, or completely unconvincing) to +8 (completely sound methods, or completely convincing).
Then the whole procedure was repeated: the student drew an index card summarizing a study with the opposite result, received more information, and analyzed the research.
See the whole process below:
Did they converge on the truth? Did they find the same kinds of methodological holes in each study?
No. The result was a single main effect of initial belief on assessment of the research, at p < .001.
That is, participants pointed out methodological holes and found the evidence unconvincing only when the study disagreed with them. The research has since been replicated. And replicated. And replicated.
As the participants' comments make clear, the same study can elicit entirely opposite evaluations from people who hold different initial beliefs about a complex social issue.
This blog aims to do better. Sometimes it succeeds.