Science for Progress

because science is fundamental in the 21st century

9: The Journal Impact Factor: how (not) to evaluate researchers – with Björn Brembs

What is the Journal Impact Factor?

The Journal Impact Factor (JIF) is widely used as a tool to evaluate studies and researchers. It supposedly measures the quality of a journal by counting how many citations an average article in that journal receives. Committees making hiring and funding decisions use the JIF as a proxy for the quality of the work a researcher has published, and by extension as a proxy for the capabilities of an applicant.
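For context, the calculation is, roughly, a mean of recent citations per recent article. For a given year Y it is commonly stated as:

\[
\mathrm{JIF}_Y \;=\; \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
\]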

JIF as a measure of researcher merit

I find this practice highly questionable to begin with. First of all, the formula calculates an arithmetic mean. However, no article can receive fewer than 0 citations, while there is no upper limit to citations. Most articles, across all journals, receive very few citations, and only a few receive a lot. This means we get a skewed distribution when we plot how many papers received how many citations, and the arithmetic mean is a poor summary of a skewed distribution. Moreover, basic statistics and probability tell us that if you blindly pick one paper from a journal, you cannot predict, or even roughly estimate, its quality from the journal's average citation rate alone. It is further impossible to know the author's actual contribution to that paper. Thus, we are already stacking three statistical fallacies on top of each other when we apply the JIF to evaluate researchers.
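To illustrate the point about skewed distributions, here is a minimal sketch with made-up numbers (not real journal data), assuming a heavy-tailed, log-normal-like citation distribution. A few highly cited papers pull the mean well above what the typical article in this hypothetical journal achieves:

# Illustrative only: simulate a skewed citation distribution for a hypothetical journal.
import numpy as np

rng = np.random.default_rng(seed=1)

# 200 articles; most collect a handful of citations, a few collect very many.
citations = np.floor(rng.lognormal(mean=1.0, sigma=1.2, size=200)).astype(int)

print("mean citations   :", citations.mean())       # what a JIF-style average reports
print("median citations :", np.median(citations))   # what a typical article gets
print("articles below the mean:", (citations < citations.mean()).sum(), "of", citations.size)

In a sample like this, well over half of the articles usually fall below the journal's own average, which is why the mean says so little about any single paper.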

But this is just the beginning! Journals have no interest in the Journal Impact Factor as a tool for science evaluation; their interest is in its advertising effect. As we learn from our guest, Dr. Björn Brembs (professor of neurogenetics at the University of Regensburg), journals negotiate with the private company Clarivate Analytics (formerly Thomson Reuters), which provides the numbers. Larger publishers in particular have a lot of room to influence the numbers above and below the division line in their favor.

Reputation is not quality.

There is one thing the Journal Impact Factor can tell us: how good the reputation of a journal is among researchers. But does that really mean anything substantial? Björn Brembs reviewed a large body of studies that compared different measures of scientific rigor with the impact factor of journals. He finds that in most research fields the impact factor tells you nothing about the quality of the work. In some fields it may even be a predictor of unreliable science! This reflects the tendency of high-ranking journals to prefer novelty over quality.

How does this affect science and academia?

The JIF is omnipresent. A CV (the academic résumé) is judged not only by the names of the journals in the publication list. Another factor is the funding a researcher has been able to attract; however, funding committees may also use the JIF to evaluate whether an applicant is worthy of funding. A further point on a CV is the reputation of the advisers, who were themselves evaluated by their publications and funding. And then there is the reputation of the institutes one has worked at, which is to some degree evaluated by the publications and the funding of their principal investigators.

It is easy to see how this puts a lot of power into the hands of the editors of high-ranking journals. Björn Brembs is concerned about the probable effect this has on the quality of science overall. If the ability to woo editors and write persuasive stories leads to more success than rigorous science does, researchers will behave accordingly, and they will teach their students to put even more emphasis on their editor-persuasion skills. Of course, not all committees use the JIF to determine who gets an interview. Still, the best strategy for early-career researchers is to put all their effort into pushing their work into high-ranking journals.

What now?!

We also talk about possible solutions to the problem. In order to replace the JIF with better measures, Björn Brembs suggests building a universal open science network. Such a network would allow collecting data on the scientific rigor and skills of a researcher directly. The money for this could be raised by cancelling subscriptions to the large publishing houses.

Getting rid of the JIF and moving to open access publishing with private publishers, however, would only shift the problem. High-reputation journals would demand higher publication fees from authors than lower-ranking journals. So, instead of using the JIF, committees would judge applicants by the amount of money they were able to invest in publishing. It would also not solve the problem of researchers trying to persuade editors rather than doing rigorous research. But now scientific results of deteriorating quality would be openly accessible to a lay readership, which would be a disservice to the public. In the long run, we need to get rid of journals completely. One step in the right direction could be something like SciELO, a publicly funded infrastructure for open access publishing in Latin America.

In the meantime, and this may come as a surprise, Björn Brembs suggests that committees stop evaluating researchers altogether. There is evidence, he says, that people will always hold unjustified biases against women or against institutes of lower reputation. So he suggests that, once a shortlist of applications has been selected based on the soundness of the research proposals, funding should be distributed by lottery. The evidence tells us that, as long as we don't have a real, objective measure, random selection is our best option.

further reading

Björn Brembs’ blog
Brembs B. (2018). Prestigious science journals struggle to reach even average reliability. Frontiers in Human Neuroscience, 12:37.
SciELO

about Dennis Eckmeier

Dennis founded Science for Progress. He received a PhD in neuroscience in Germany in 2010. Until 2018 he worked as a postdoc in the USA and Portugal. In 2017 he co-organized the March for Science in Lisbon, Portugal. Dennis is currently a freelancer.