Feature

The war on error: A new project seeks to root out fraud in academia


Malte Elson, a psychologist at the University of Bern, is hoping to chip away at the seemingly widespread problem of incorrect or falsified research plaguing academia.

“When I build my research on top of something that’s erroneous and I don’t know about it,” he told the Chronicle of Higher Education, “that’s a cost because my research is built on false assumptions.” 

That’s why Elson is leading a new initiative called ERROR. Starting with a philanthropic grant of $285,000, the project will compensate scholars each time they discover erroneous research. Some of these findings will be minor errors in citations or glitches in software, but others may be downright fraudulent. Scholars can propose papers they want to review, but the authors have to agree — in part because they have to be willing to give the ERROR researchers access to their original data.

(Illustration by Britt Spencer for the Washington Examiner)

ERROR's reviewers are joining a growing cadre of scholars who investigate the work of their colleagues. A blog called Data Colada, run by Joe Simmons of the Wharton School of the University of Pennsylvania, Leif Nelson of the University of California, Berkeley, and Uri Simonsohn of the Esade Business School in Barcelona, has been outing fraudulent research since 2013.

How much of a problem is this? By one count, more than 5 million academic papers are published each year, with few checks on their reliability or validity. The peer reviewers who vet papers before publication are generally unpaid and rarely delve into the data on which the conclusions rest. Instead, they comment on the methodology or on whether the findings add something important to the field. Meanwhile, retractions of academic papers have climbed from 120 in 2002 to 5,400 in 2022, according to the website Retraction Watch.

Technology has made it easier to produce some of this suspect research. A recent study found that use of the word “delve,” one of the hallmarks of ChatGPT-written text, increased almost fivefold in papers in the PubMed database in 2023, the software’s first full year of public availability. On the other hand, technology has also made it easier to detect some of the most glaring problems in research, such as plagiarism. The discoveries of plagiarism in the work of former Harvard University President Claudine Gay appear to be only the beginning. One administrator at the University of Wisconsin, for instance, has published his own research multiple times in different journals under different titles, without acknowledgment.

The incentives to publish low-quality or even fraudulent research are strong. Academic employment and promotion are determined almost entirely by publication records, and academics who want continued funding for their research need to show that their findings are significant. Grantmakers, for their part, want to be able to boast about the important work they are supporting.

The question is how to incentivize people with the right knowledge to suss out more of this bad research. Academia has a reputation for being cutthroat (the fights are so bitter, the old quip goes, because the stakes are so low), so why don’t more academics pick apart one another’s work? One reason is that scholars lower on the academic ladder are reluctant to question the findings of more prominent colleagues, lest doing so damage their already slim job prospects. The incentives all run toward sloppy or dishonest research and away from exposing it.

The implications of bad research in the hard sciences are clear, particularly in fields such as medicine. But what about the social sciences? Some would ask: Who cares? Yet even here, poor research can have real consequences.

Harvard is investigating a psychologist who appears to have fabricated data in a study about why people lie (an irony, to be sure). The original study received a great deal of attention, but its conclusions now appear to have been flawed. Small social science studies with ideologically driven conclusions, many of them designed to win the attention of journalists and policymakers, often form the basis for public policy. Whether it’s bad ideas about reading instruction making their way into K-12 classrooms or so-called anti-racist child welfare agencies, such research has large implications for significant numbers of people.

It would help if universities were more realistic and skeptical in assessing the research records of faculty members. This is especially true in liberal arts disciplines, in which genuine research breakthroughs are rare. In those fields, it would be more useful if candidates were simply well versed in the established literature and able to teach it to undergraduates.

In addition, in these and other fields, academic leaders would do well to put more weight on the substance of publications than on their sheer number. It matters far more that a scholar says something important in two or three articles than that he or she says little or nothing in 20.


In the meantime, some of this debunking will have to involve human beings as well as technology. A person, for instance, will usually have to judge whether a paper’s findings actually support its conclusions. That is where a program such as ERROR comes in. It will need far more funding, of course, to make a dent in the problem. But over time, perhaps some academics will be warier of publishing bunk if they know someone might check.

Already, some of the authors approached by ERROR have announced that they will not participate, meaning the project’s reviewers will not have access to their data. But this is a problem the journals themselves could easily solve: If they will not ask peer reviewers to examine the data directly, they should at least require authors to make their data available to others as a condition of publication. Academia is sorely in need of sunlight, and perhaps the ERROR project will lift the shade a little.

Naomi Schaefer Riley is a senior fellow at the American Enterprise Institute and the Independent Women’s Forum. James Piereson is a senior fellow at the Manhattan Institute.
