Navigating the altmetrics maze

As quantum physicists well know, measuring a system disturbs it. And changing the way we measure the outcomes of research is currently ruffling many feathers in the scientific community. There is concern that measurement based on online counts, such as a large social media footprint, does not necessarily equate with scientific value. Besides, scientists would justifiably turn pale if tenure committee meetings were distracted by Twitter stats or blog counts.

Altmetrics – short for alternative metrics – is the newcomer to the world of scientific measurement and is part of a move towards Big Data analytics. By its very existence, it is challenging the traditional means of assessing scientific worth, based on citation rates and journal impact factor. A major plus is that some of these metrics gauge impact at a different, more accurate scale, such as the article level, as introduced in 2009 by the Public Library of Science (PLoS). The new range of metrics takes advantage of web technologies, assessing, for example, how often academic work is mentioned in reference managers, blogs, social bookmarks and social networks.

A manifesto at altmetrics.org, by Jason Priem, Dario Taraborelli, Paul Groth and Cameron Neylon, points out that the three main filters of importance in scholarship are failing: peer review is slow, encourages conventionality and fails to limit volume; citation counting is imperfect; and journal impact factor is too often wrongly used to value individual articles, while its workings remain a trade secret.

Up to now, the title of a journal has acted as a proxy for the quality of work, and this is recognised as less than perfect. “Article-level metrics have a role to play in unbundling the idea that the journal you publish in is equivalent to the importance of the work,” says Ian Mulvany, formerly of reference manager Mendeley and now head of technology at open access journal eLife. “This [journal impact factor] has caused distortions in the marketplace and put huge pressure on early-stage researchers to get papers into one of these big journals.” eLife addresses complaints about peer review by promising rapid reviews from senior editors, who can call on additional external reviewers as needed.

Mulvany says the scientific community has allowed itself to get caught up in groupthink about one number, the journal impact factor. But he believes significant change is already here. “Every time you ask for a highly read or highly cited article you are already starting to use an altmetric in place of the impact factor,” he says. Indeed, altmetrics looks at the impact of the individual article in numerous formats and could even allow expert contributors to perform crowd-sourced peer review, according to proponents. It’s also fast and transparent, so what’s not to like?

The scientific community has expressed concern at the introduction of untested novel metrics. A 2012 report by the SURF foundation, a Dutch think tank for improving academic research, concludes that these altmetrics cannot legitimately be used in research assessments until they adhere to a far stricter protocol of data quality and indicator reliability and validity.

Despite reluctance from the science community, peer-review publishers were prompt to jump on the bandwagon. Since last October, visitors to Nature Publishing Group (NPG) websites can view an article’s citation data, page views, news mentions, blog posts and social shares, including Facebook and Twitter. “This will add a level of transparency to our authors and readers and offer them a more immediate metric than just citations, which can take up to 6 to 8 months to even begin to accumulate,” explained Kira Anthony, editorial development manager at NPG in New York.

The publisher also lent its support to a start-up called Altmetric.com, designed to track conversations around articles online. It joins the ranks of many others, such as Priem’s ImpactStory and Plum Analytics, a reputation-based tool that allows scientists to find out how people are interacting with, commenting on and sharing their research. Mendeley also presents a trove of altmetrics, because it offers options for collating and sharing papers.

Article-level metrics are about helping people make qualitative decisions and seeing what is being said about a paper and where, according to Euan Adie, founder of Altmetric.com. “People think it is an alternative metric to impact factors and citation counts, but it’s not really. It’s an alternative to only using these.”

It’s good to talk about a paper, to write blog posts and tweet, says Adie, but focusing too much on numbers rather than quality isn’t desirable and can lead to gaming, where people try to manipulate the system to their advantage. “Quite frankly a lot of this is an ego thing. People want to know who is reading their paper and what people think about their paper,” says Adie. “People love metrics and rankings, but they just hate being ranked themselves.”

By Anthony King

Photo credit: jpellgen
