Note: The “Classified” series spotlights some of the powerful lessons faculty are teaching in Haas classrooms.
As a young researcher, Kristin Donnelly was captivated by the work of social psychologists who published striking insights on human behavior, such as a finding that people walked more slowly after being exposed to the words gray, Florida, and bingo. It was one of many surprising studies that had crossed into mainstream pop culture—thanks to books like Malcolm Gladwell’s Blink—but there was a problem: No one could reproduce them.
“It was a sad, dark time to enter the field,” says Donnelly, who is now a Berkeley Haas PhD student in behavioral marketing. “I was pursuing similar ideas to people who had these incredible studies, but I couldn’t get any significant results. I became very disillusioned with myself as a researcher.”
Psychology has been rocked by a full-blown replication crisis over the past few years, set off in part by a 2011 paper co-written by Haas Prof. Leif Nelson. It revealed how the publish-or-perish culture—which rewards novel findings but not attempts to replicate others’ work—led researchers to exploit gray areas of data analysis to make their findings appear more significant.
Now Nelson, along with Prof. Don Moore, is working to train a new generation of up-and-comers in methodologies that many see as key to a rebirth of the field. This semester, they’re leading Donnelly and 22 other doctoral students from various branches of psychology in what may be a first for a PhD seminar: a mass replication of studies of a single psychological theory, to see how well they hold up.
“We aren’t doing this because we want to take down the literature or attack the original authors. We want to understand the truth,” says Moore, an expert on judgment and decision-making who holds the Lorraine Tyson Mitchell Chair in Leadership and Communication. “There are many forces at work in the scientific publication process that don’t necessarily ensure that what gets published is also true. And for scientists, truth is at the top of the things we ought to care about.”
Examining the psychology of scarcity
The theory they’re examining is the “psychology of scarcity,” or the idea that being poor or having fewer resources actually impairs thinking. Moore and Nelson chose it not because they suspect an inherent flaw, but because it’s relatively new (defined by a 2012 paper), high profile, and relevant to the students’ interests. Each student was randomly assigned a published study and, after reaching out to the original researchers for background details, is attempting to replicate it. The results will be combined into a group paper.
“At Berkeley, we’re at the epicenter of this new methodological and statistical scrutiny, and as a young researcher I want to do good work that will replicate,” says Stephen Baum, also a PhD student in behavioral marketing at Haas. “Most people were willing to take things at face value before 2011. Things have changed, and we all have to do better.”
Moore and Nelson are leaders in the growing open science movement, which advocates for protocols to make research more transparent. Nelson, along with Joseph Simmons and Uri Simonsohn of Wharton, coined the term “P-hacking” in 2011 to describe widespread practices that had been left to researchers’ discretion: removing data outliers, selectively reporting some results while “file drawering” others, or stopping data collection once a desired threshold was reached. These practices, they argued, made it all too tempting to manipulate data in pursuit of a P-value less than 0.05, the standard for demonstrating statistical significance and the de facto threshold for getting published. A P-value below 0.05 means that, if there were truly no effect, results at least as extreme as those observed would arise by chance less than 5% of the time.
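The damage such flexibility can do is easy to demonstrate. The short simulation below is a rough sketch, not anything from the paper or the course, and it assumes only the numpy and scipy libraries: both groups are drawn from the same distribution, so any “significant” difference is a false positive, yet a researcher who re-runs a t-test after every batch of participants and stops at the first P-value below .05 crosses that bar far more often than 5% of the time.

```python
# Minimal sketch of one P-hacking tactic: "optional stopping," i.e., testing
# repeatedly and halting data collection the moment p dips below .05.
# Both groups come from the same distribution, so there is no true effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

def study_with_peeking(start_n=20, step=10, max_n=100):
    """Collect data in batches, running a t-test after each batch."""
    a = list(rng.normal(size=start_n))
    b = list(rng.normal(size=start_n))  # identical distribution: no real effect
    while len(a) <= max_n:
        if stats.ttest_ind(a, b).pvalue < 0.05:
            return True                 # "significant" -- stop and write it up
        a.extend(rng.normal(size=step))
        b.extend(rng.normal(size=step))
    return False                        # gave up at max_n with no result

hits = sum(study_with_peeking() for _ in range(2000))
print(f"False-positive rate with peeking: {hits / 2000:.1%}")
```

Tested once at a fixed sample size, such a null study comes up “significant” only about 5% of the time; with repeated peeking, the rate climbs well above that, which is exactly the kind of hidden discretion the 2011 paper documented.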
Building confidence through pre-registration
At a recent session of their PhD seminar, Moore and Nelson led a discussion of one of the key ways to combat P-hacking: pre-registering research studies. It sounds arcane, but it’s simply the grown-up equivalent of what grade-school teachers require before a science fair project: write out a detailed plan, including the question to be answered, the hypothesis, and the study design, with the key variables to be observed.
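A bare-bones plan is enough to count. As a hypothetical illustration (the fields are paraphrased from the kinds of questions pre-registration portals ask, and the numbers are invented), a pre-registration for the scarcity study Baum replicates below might read:

```text
Hypothesis:    Recalling economic uncertainty increases self-reported pain.
Conditions:    Two, randomly assigned: recall economic uncertainty vs.
               recall economic certainty.
Key variable:  Self-reported bodily pain (hypothetical 1-10 scale).
Analysis:      Two-sample t-test comparing the condition means.
Exclusions:    Incomplete responses only, decided before seeing the data.
Sample size:   200 per condition; data collection stops only at that number.
```

The point is less the format than the timestamp: every analytic choice is on record before the data can tempt anyone.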
“How many of you are working with faculty who pre-register all their studies?” asks Nelson, a consumer psychologist in the Haas Marketing Group and the Ewald T. Grether Professor in Business Administration and Marketing. Fewer than half the students raise their hands.
Nelson and Moore estimate that only about 20% of psychology studies are now pre-registered, but they believe it will soon become a baseline requirement for getting published—as it has become in medical research. Although there’s no real enforcement body, the largest pre-registration portal, run by Brian Nosek of the Center for Open Science, creates permanent timestamps on all submissions so they can’t be changed later. Nelson co-founded his own site, AsPredicted, which now gets about 40 pre-registration submissions per day. It’s patrolled by a fraud-detecting robot named Larry that dings researchers for potential cheats like submitting multiple variations of the same study.
“Without pre-registration, statistics are usually, if not always, misleading,” Moore tells students. “They aren’t entirely worthless, but they’re worth less.”
Gold Okafor, a first-year PhD student studying social and personality psychology, says she plans to pre-register all her future studies. Though it requires a bit more work up front, it may save time in the end. “I think if you don’t use some of these methods, you could be called out and have your work questioned,” she says.
Students are also hearing guest lectures from other open science leaders, including Economics Prof. Ted Miguel and UC Davis Psychology Prof. Simine Vazire, who edits several journals. And they’re learning techniques such as P-curving, which examines the distribution of the statistically significant P-values reported across a set of studies to gauge whether the findings have real evidential value or bear the marks of data manipulation.
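The logic of a P-curve can be sketched in a few lines. The snippet below is a toy illustration only, not the published P-curve procedure (which adds formal statistical tests), and the P-values in it are invented:

```python
# Toy illustration of the idea behind P-curving: bin the statistically
# significant p-values from a literature and inspect the distribution's shape.
def p_curve_bins(p_values):
    """Count significant p-values in five .01-wide bins below .05."""
    edges = [0.01, 0.02, 0.03, 0.04, 0.05]
    counts = [0] * len(edges)
    for p in p_values:
        for i, edge in enumerate(edges):
            if p < edge:          # falls in bin i; p-values >= .05 are ignored
                counts[i] += 1
                break
    return counts

reported = [0.003, 0.009, 0.011, 0.024, 0.041, 0.044, 0.046, 0.048]  # invented
print(p_curve_bins(reported))     # -> [2, 1, 1, 0, 4]
```

A true effect piles P-values near zero, making the curve right-skewed; a hump just under .05, as in this invented set, is the shape P-curve analysis flags as consistent with P-hacking.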
The bedrock of the scientific method
Then there’s replication itself, one of the bedrocks of the scientific method and the heart of the course. The American Psychological Association now promotes systematic replications, in which multiple labs around the world re-create the same study. (PhD student Michael O’Donnell, who is assisting Nelson and Moore in teaching the course, recently led one such effort that cast doubt on a study finding that people who were asked to imagine themselves as a “professor” scored higher on a trivia quiz than those who imagined themselves as a “soccer hooligan.”)
Baum, the marketing student, will be replicating a psychology of scarcity study published in the field’s flagship journal, Psychological Science. The researchers asked people to recall a time when they felt uncertain about their economic prospects and then to write about how much pain they were experiencing in their body at that moment. Those participants reported feeling more pain than a control group prompted to recall a time when they felt certain about their economic prospects.
“If it replicates, I will be surprised, but I’ve been wrong before,” Baum says.
Whatever the results, the replications will offer new insights into the psychology of scarcity, a theory that matters in a society plagued by growing inequality, Moore says. Beyond the one theory, the fact that the course has the highest enrollment of any PhD seminar he’s ever taught gives Moore great hope for the future.
“The stakes are high,” he says. “The most courageous leaders in the open science revolution have been young people—it’s the doctoral students and junior faculty members who have led the way. The next generation will be holding themselves, and each other, to higher standards.”
Donnelly is a case in point. “This whole movement has made me a better researcher. I’ve changed what questions I ask, I changed how I ask them, and I changed how I work,” she says. “It’s a brave new world, and we may be able to lay the foundation of a new science that will build on itself.”