High-impact journals ‘contain more errors’

Placing a paper in a high-impact journal can tilt hiring and promotion decisions, but a large-scale study has found a link between such outlets and lower-quality work.

The analysis compared journal impact factors and article-level citation counts with statistical errors in just over 50,000 behavioral and brain sciences articles, and with the findings of replication studies.

It found that articles in journals with higher impact factors tended to have lower-quality statistical evidence to support their claims and that their findings were less likely to be replicated by others.

Although crunching the data could not show the mechanism behind the effect, the authors say the analysis further undermines the use of bibliometrics as a measure of research quality.

The reputation of high-end journals is often taken as confirmation that the work they publish is not only new and important for other fields of science, but also that the statistical tests used are correct.

“Not only do you want them to be innovative, you want the quality of the evidence to be stronger,” Zachary Horne, one of the authors of the study, told Times Higher Education. “You don’t see that – you actually see the relationship very weakly in the opposite direction.”

Dr Horne, a psychology lecturer at the University of Edinburgh, said the analysis had implications for wider debates around research assessment.

“Administrators and people evaluating science might want to pay more attention to representativeness, sample size, the paper having few errors,” he said, as opposed to falling back on the shine of familiar journal titles.

Previous research has shown that citation-counting can perpetuate long-standing career inequalities because citation habits often disadvantage women and those from under-represented groups.

In their paper, published in Royal Society Open Science on August 17, Dr Horne and his co-author Michael Dougherty, a psychologist at the University of Maryland, say their findings also show that the misuse of impact factors and citation counts could ultimately promote and encourage bad science.

Although there are now many who spurn the use of impact factors for judging papers or their authors, Dr Horne said they were working within a system that reaches for bibliometrics by default.

“Folks I know who are really aware that these are not necessarily indicators of quality are more open to deviating by hiring somebody who doesn’t have papers in those venues,” he said.

Pushback against the “prestige economy” of academic journals has continued to grow in recent years. A European Union-backed agreement on research assessment bars signatories from using impact factors in personnel decisions and requires them to come up with plans for alternative approaches.

ben.upton@timeshighereducation.com