[Early morning airport thoughts, mostly organized]
There’s been a great replication debate playing out across psychology this year, heating up recently, and some particularly good things are being said. Things that make me happier and more exclamatory about this field I love.
To start, two years ago Daniel Kahneman wrote an open email to social psychologists who were doing priming research. Priming makes intuitive sense: it’s the idea that if I show you a bunch of gory and violent scenes, and then give you one of those fill-in-the-letters tasks with B_ _ _D, you’re more likely to answer with BLOOD than, say, BREAD. Of course, it fits with nearly everyone’s sense of the world that Early Stuff impacts Later Stuff. Okay, perhaps less intuitive: does handing you a hot cup of coffee (rather than an iced coffee) make you think of me more warmly? [Spoilers: the data seems fragile, but I will view you more warmly if you give me coffee.]
Priming is catchy–the stuff of easy headlines and whole swaths of self-help books. But individual priming effects don’t seem to replicate well at all, and that part does not make it into the bestsellers. Quoth Kahneman (whose book, Thinking, Fast and Slow, cites a fair bit of priming research):
For all these reasons, right or wrong, your field is now the poster child for doubts about the integrity of psychological research. Your problem is not with the few people who have actively challenged the validity of some priming results. It is with the much larger population of colleagues who in the past accepted your surprising results as facts when they were published. These people have now attached a question mark to the field, and it is your responsibility to remove it.
My reason for writing this letter is that I see a train wreck looming. I expect the first victims to be young people on the job market. Being associated with a controversial and suspicious field will put them at a severe disadvantage in the competition for positions. Because of the high visibility of the issue, you may already expect the coming crop of graduates to encounter problems. Another reason for writing is that I am old enough to remember two fields that went into a prolonged eclipse after similar outsider attacks on the replicability of findings: subliminal perception and dissonance reduction.
Hi. Yes. I would be one of those young people who would like a field I adore to avoid train wrecks.
Next up, the Many Labs Replication Project, round one, in which thirty-six psych labs collaborated to replicate thirteen findings.
Of the 13 effects under scrutiny in the latest investigation, one was only weakly supported, and two were not replicated at all. Both irreproducible effects involved social priming. In one of these, people had increased their endorsement of a current social system after being exposed to money. In the other, Americans had espoused more-conservative values after seeing a US flag.
Social psychologist Travis Carter of Colby College in Waterville, Maine, who led the original flag-priming study, says that he is disappointed but trusts Nosek’s team wholeheartedly, although he wants to review their data before commenting further. Behavioural scientist Eugene Caruso at the University of Chicago in Illinois, who led the original currency-priming study, says, “We should use this lack of replication to update our beliefs about the reliability and generalizability of this effect”, given the “vastly larger and more diverse sample” of the Many Labs project. Both researchers praised the initiative.
Curious about how Many Labs is conducting their replication? Oh, here are all their materials and datasets.
As replication efforts continued, there was some pushback. Simone Schnall, the lead author on With a Clean Conscience: Cleanliness Reduces the Severity of Moral Judgments, was not best pleased by her experience. Brent Donnellan defended the replication work. The replication researchers offered their email exchanges to shed more light.
(I want to add my sheer delight that I can download the datasets and materials used and flip through the email exchanges and read the blogs of the researchers, without perching myself on the physical doorsteps of all of the people involved. The internet is a wondrous thing.)
There has been further blogging and discussing and debating since, but I want to highlight this comment by Dave Nussbaum, part of which is excerpted below:
My guess is that the vast majority of people in social psychology (and beyond) could probably be characterized as believing the following things, to varying degrees.
1. There are shortcomings in the current research practices in our field. We are not necessarily unique in this way, but if our goal is to try to get a better understanding of what’s true, we have to improve. This is true in several areas, including, but not restricted to, reducing p-hacking and other questionable research practices, publishing null results, and increasing the amount of direct replication that we do. […]
2. There is a collective action problem whereby it’s difficult for many individuals, particularly at early stages in their careers, to unilaterally decide that they will forego methods that will improve their productivity, particularly when it can be tantalizingly easy to rationalize one’s behavior in various ways.
3. We want the “revolution” to be peaceful and fair. Our assumption is, perhaps somewhat naively, that everyone has been acting in good faith. Improving research practices should not come at any individual’s expense (although there may be inevitable collateral damage, it should be minimized whenever possible). That means that we shouldn’t single out a single person who is one of many adhering to norms that should be changed.
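Nussbaum’s first point mentions p-hacking, and it’s worth seeing just how cheaply it works. Here’s a toy simulation (mine, not drawn from any of the studies discussed) of one common flavor: peeking at the data after every batch of subjects and stopping as soon as p dips below .05. Even with no effect at all, the “peeking” strategy reports an effect far more than 5% of the time. The z-test shortcut here assumes data with a known standard deviation of 1, purely to keep the sketch dependency-free:

```python
# Toy demonstration of optional stopping ("peeking") inflating false positives.
# We simulate pure noise, test after every batch of 10 subjects, and stop
# early the moment the test looks significant.
import math
import random

random.seed(0)

def z_significant(xs):
    """Two-sided z-test against mean 0 with known sd 1: significant if |z| > 1.96."""
    n = len(xs)
    z = (sum(xs) / n) * math.sqrt(n)
    return abs(z) > 1.96

def one_experiment(peek, batches=10, batch_size=10):
    """Run one null experiment; return True if it (falsely) finds an effect."""
    xs = []
    for _ in range(batches):
        xs.extend(random.gauss(0, 1) for _ in range(batch_size))
        if peek and z_significant(xs):
            return True          # stop early and report "an effect"
    return z_significant(xs)     # honest: one test at the planned sample size

trials = 5000
peeking = sum(one_experiment(peek=True) for _ in range(trials)) / trials
honest = sum(one_experiment(peek=False) for _ in range(trials)) / trials
print(f"false positive rate with peeking:    {peeking:.3f}")
print(f"false positive rate without peeking: {honest:.3f}")
```

The honest rate hovers around the nominal .05; the peeking rate lands several times higher. Nobody in this story needed to fabricate anything; a perfectly sincere “let’s just run a few more subjects and check again” is enough, which is exactly the collective action problem in point 2.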
This, especially #2, seems true to me. I don’t think I’m wrong if I propose that psychologists (the academic kind, not the couch kind) trade on their reputation. As a young researcher, you have to build that reputation. Maybe you do a thesis project in your undergrad, get listed as an author on research from the lab you’re working in. On your way to a graduate degree, more of this. Get a position in academia? You’re working to get noticed–studies that are published and cited are the means* to that end. And somewhere along the way–I distinctly remember this happening to me, and have watched it happen with peers–you notice that perhaps you’re not so confident in your results. Or you’re not entirely sure you’d expect your findings to appear outside the laboratory. And…what then? Academia is not exactly known for its wiggle room.
And while I don’t think anyone produces deep thoughts at 3:11 am in Midway Airport, I do think this: replication projects are starting to snowball. I just created an Open Science Framework account, where I can poke around through pre-registered hypotheses and look at results because they’re out there. I have spent too much of this week reading these debates and papers and replications because a half-decent wifi connection means you can. We, in the young and squishy science that is psychology, are creating a culture of accountability and openness. We are trying, perhaps not as well as we could, but trying nonetheless, to make this a painless transition: an adventure of “look! new knowledge!” and not “you were Wrong and you should feel Bad!” Onwards!
*I worked very hard not to make a stats joke out of this.