Science for Progress

because science is fundamental in the 21st century

21 Altmetrics: A Better Way to Evaluate Research(ers)? – with Steffen Lemke

Who gets positions and funding in academia should depend on the merit of the researcher, project, or institute. But how do we assess these merits fairly, meaningfully, and in a way that makes them comparable?

I talked about metrics with Steffen Lemke, a PhD student at the Leibniz Information Centre for Economics (ZBW) in Kiel, Germany. He is part of the *metrics project, which investigates new research metrics and their applicability. The project is funded by the German Research Foundation (DFG).

Citation-Based Metrics

In episode 9 I talked with Björn Brembs about the most prevalent metric in use: the Journal Impact Factor, which reflects the average number of citations that articles a journal published in the two preceding years received in a given year. It turns out that the “JIF” is not a good metric.

Another commonly used metric is the “H-index”. Like the JIF, it is based on citations – the number of times a scientific paper is referenced in other scientific papers. But it aims to measure the output of an individual researcher rather than of a journal: a researcher’s h-index is the largest number h such that h of their papers have each been cited at least h times.
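
To make this definition concrete, here is a minimal sketch in Python; the citation counts are hypothetical:

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers
    have been cited at least h times each."""
    counts = sorted(citations, reverse=True)  # highest first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # 'rank' papers have at least 'rank' citations each
        else:
            break
    return h

# Hypothetical citation counts for six papers:
print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3 (three papers cited at least 3 times)
```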

Both the H-index and the JIF have their own specific disadvantages. But they also share problems that stem from the data source they use: citation indices. Citations are slow to accrue, so it can take years to accumulate enough data for a proper evaluation. The indices are also incomplete and mostly locked behind paywalls. And finally, they focus solely on journal articles.

But peer-reviewed research articles aren’t the only output scientists generate. The social sciences, especially, often publish in other formats, like books and monographs. STEM researchers, too, often create other outputs, such as designs for experimental setups, or code.

Finally, citation-based metrics focus solely on the communication between scientists, not with the public.

Altmetrics

New, alternative metrics aim to change all that. “Altmetrics” is an umbrella term for a range of still-experimental metrics. They use data that can be found openly on the internet, which makes them fast and diverse. They look, for example, at the dissemination of research articles on social media. But they also look at download numbers from open repositories for code, lecture videos, presentation slides, and other resources. In this way, they may cover any research product you can find on the internet.
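
As a purely illustrative sketch of this “umbrella” idea – not any provider’s actual scoring – an altmetrics tally might aggregate event counts from several openly accessible sources. The source names and numbers here are hypothetical:

```python
# Hypothetical event counts collected for a single paper.
events = {
    "tweets": 42,
    "blog_mentions": 3,
    "reference_manager_bookmarks": 117,
    "repository_downloads": 530,
}

# A naive tally; real altmetrics providers track more sources
# and typically weight them differently.
total_attention = sum(events.values())
print(total_attention)  # -> 692
```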

Whether a metric predicts scientific impact (citations) quickly and well can be tested. So far, it appears that data from online reference managers predict citations well: instead of waiting for citing authors to write and publish their own papers, you just check whether they have bookmarked your paper for later use.
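
Such a validation could, in principle, look like the following sketch: correlate early bookmark counts with the citations the same papers accumulate later. The numbers are hypothetical, and real studies use far larger samples and more careful statistics:

```python
from statistics import correlation  # Pearson correlation, Python 3.10+

# Hypothetical data for five papers: bookmark counts shortly after
# publication, and citation counts a few years later.
bookmarks = [12, 45, 3, 30, 8]
citations = [10, 52, 1, 35, 6]

# A strong positive correlation would suggest that bookmarks are
# an early indicator of later citations.
r = correlation(bookmarks, citations)
print(f"Pearson r = {r:.2f}")
```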

An obvious disadvantage of altmetrics is that they can be gamed. One can pay social media providers to advertise posts, use bots to amplify a post’s reach, or download files thousands of times.

Soberingly, researchers found that altmetrics do not cover the humanities and social sciences sufficiently: less than 12% of the research output from these fields showed up in the altmetrics tested.

Social Media Use

Steffen Lemke and his co-authors asked why the social sciences are so poorly represented on social media. Surprisingly, while social scientists usually justify their work by its relevance to the public, many see interacting with the public on social media as a waste of time. Some survey respondents said they felt overwhelmed by the flood of information and found it hard to judge the quality of information on the internet. Others said they would not be taken seriously if their supervisors caught them using social media – even for work.

Metric-Wiseness

In Steffen’s article you will find the interesting term “metric-wiseness”. Coined by a different research group, it describes researchers’ knowledge of metrics and their ability to understand the metrics’ meaning and applicability. In its research surveys, the *metrics project asks researchers about their knowledge of metrics.

Even very junior researchers know about the JIF, and they try to optimize their research output to publish in journals with a high JIF. However, few know how it is calculated, and fewer still are aware of this metric’s massive caveats. The same goes for the H-index. Altmetrics, by contrast, appear to be almost completely unknown; the whole concept seems alien to researchers.

Researchers’ careers depend on metrics, and paying attention to measurements is their bread and butter. Still, after more than a decade as a researcher myself, these findings come as no surprise to me.

Conclusion

Altmetrics may be the path to better research evaluation in the future. They are fast, and they cover a larger portion of researchers’ overall output beyond scientific articles. But like all metrics, they can be gamed.

Once a metric becomes representative of productivity and impact, people will optimize their behavior for the metric. At the moment, using bookmark data from reference managers to predict an article’s impact within the scientific community appears to be a robust approach. But once this becomes an established metric (and this is my own opinion), Elsevier, which owns the very popular reference manager “Mendeley”, will begin selling visibility for papers on that platform – and authors or their employers will buy it.

Overall, altmetrics are not ready to be universally applied. Many fields are insufficiently represented in the databases altmetrics rely on.

In the end, however, I think the most important thing is to inform researchers about the metrics they rely on.

Do you have questions, comments, or suggestions? Email info@scienceforprogress.eu, write to us on Facebook or Twitter, or leave us a video message on Skype for dennis.eckmeier.

Become a Patron!


Sources:
*metrics website
Steffen Lemke’s profile at ZBW
Lemke et al., “When You Use Social Media You Are Not Working”: Barriers for the Use of Metrics in Social Sciences
Rousseau, S., and Rousseau, R., Being metric-wise: heterogeneity in bibliometric knowledge
The Journal Impact Factor: how (not) to evaluate researchers – with Björn Brembs

about Dennis Eckmeier

Dennis founded Science for Progress. He received his PhD in neuroscience in Germany in 2010. Until 2018 he worked as a postdoc in the USA and Portugal. In 2017 he co-organized the March for Science in Lisbon, Portugal. Dennis is currently a freelancer.