Lord, Ross & Lepper?

This blog has been redesigned! You might have noticed the new tagline.

In 1979, Charles Lord, Lee Ross, and Mark Lepper sorted 48 undergraduates into two groups based on their beliefs about the death penalty: did it work as a deterrent?

The students sat down with a researcher (blinded to their initial beliefs about the death penalty) and were asked to select an index card containing information about a research study that investigated whether capital punishment results in an overall decrease in violent crime.

It’s kind of a sport to find the trick researchers have played on their unwitting sophomore psychology majors, and here it is: each round of index cards contained ten identical cards, so the student had no real choice. They were drawing either from an identical hand of ten cards summarizing a study that supported capital punishment as a deterrent (pro-deterrence) or from an identical hand summarizing a study that did not support the death penalty as a deterrent (anti-deterrence). That is, within each group, some students began by seeing a study that purported to agree with them, and some saw a study that disagreed. [These methods get hairy; there’s a chart below.]

The student read the index card, then was given even more information:

The descriptions gave details of the researchers’ procedure, reiterated the results, mentioned several prominent criticisms of the study “in the literature,” listed the authors’ rebuttals of some of the criticisms and depicted the data in table form and graphically.

Given all this information, the researchers asked the students to analyze the research: how methodologically sound was the study? How convincing did they find it? For each question, participants answered on a scale from -8 (completely unsound methods, or completely unconvincing) to 8 (completely sound methods, or very convincing).

Then the whole procedure was repeated: the student drew an index card summarizing a study showing the opposite result, received the fuller description, and then rated that research the same way.

See the whole process below:

Simplification of Lord, Ross, & Lepper (1979) study design.

So, some students who were Pro-Deterrence saw a study that disagreed with them, then a study that agreed, rating each. Other Pro-Deterrence students saw the opposite order: first a study agreeing with them, then one disagreeing. Anti-Deterrence students were divided the same way. And here’s the important bit: no matter the group, all students saw the same pro-deterrence study and the same anti-deterrence study. Everyone was asked to assess the methodological soundness and persuasiveness of the same two studies, one of which agreed with their beliefs and one of which did not.
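The counterbalanced design described above can be sketched in a few lines of code. This is purely illustrative; the condition names and labels are mine, not the paper's.

```python
# Illustrative sketch of the 2x2 design in Lord, Ross & Lepper (1979):
# initial belief x order of presentation. All four conditions present
# the SAME two studies; only stance and order vary.
from itertools import product

BELIEFS = ["pro-deterrence", "anti-deterrence"]   # participant's initial stance
ORDERS = [
    ("pro-study", "anti-study"),                  # agreeing study first (for pro)
    ("anti-study", "pro-study"),                  # disagreeing study first (for pro)
]

conditions = [
    {"belief": belief, "first": first, "second": second}
    for belief, (first, second) in product(BELIEFS, ORDERS)
]

for c in conditions:
    print(c)
```

Running this enumerates the four cells of the design; each participant rates both studies on soundness and convincingness (-8 to 8), so every study is judged by people it agrees with and people it contradicts.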

Did they converge on the truth? Did they find the same kinds of methodological holes in each study?

Nope.

The results?

A single, main effect of initial belief on assessment of research, at p<.001.

That is, participants found methodological holes and a lack of persuasiveness only when a study disagreed with them. The finding has since been replicated. And replicated. And replicated.

[Image: participants’ written comments on the two studies.]

As these comments make clear, the same study can elicit entirely opposite evaluations from people who hold different initial beliefs about a complex social issue.

This blog aims to do better. Sometimes it succeeds.


You can read the entire study here, and about attitude polarization generally here.