2015-01-14

Scientist evaluation, h, and the #PaperBubble

[A bit of self-criticism on the science evaluation system, just to start the year.]

Wicked rules can pervert a community.
In science, the wicked rule has been evaluating scientists by how much we publish, how many papers we coauthor regardless of our actual contribution, and in which journals we publish, as a proxy for quality or merit. If you coauthor many articles in good (highly cited) journals, then you must be a good scientist; that's the institutional mantra. The underlying assumption is that the citations to someone's work by other scientists are a good measure of the importance of her/his research, and I will not question that here. But in this century, to measure the citations to someone's work, we need neither impact factors nor any regard for which journal the article appeared in.
Additionally, evaluating by the “mere number of a researcher’s publications” disincentivizes risky and potentially ground-breaking research that lacks a short-term publication outcome, as the editor of Science argues. But this is what is being done. And as a result, the number of scientific articles published every year has nearly tripled since 1990.
[Figure: Papers published per year. Source]
[Figure: Cumulative number of papers published in Biomedicine (source: PubMed via this).]

The same has happened in most countries and in most disciplines, although some have moved much faster than the average, look :-)
[Figure: Annual number of papers in Yoga studies over time.]

Does this mean that our science is now three times better? Does it mean that our understanding of Nature grows three times faster than 25 years ago? Ten times faster, in the case of Yoga? Mmm. What it does mean is that we can now only track a small fraction of all the papers that are relevant to our research.

[By Zhou Tao/Shanghai Daily] 

Needless to say, this has yet another price for all of us:

To compensate for the perversion inherent in this article-count approach, evaluators started weighting that number with something called the Journal Impact Factor, assuming that articles published in highly cited journals have a higher, statistically sound chance of having an impact. A perfect plot for a self-fulfilling prophecy; the warnings against this practice have become a clamor.

As citation databases came online, another wicked parameter entered the scene:
h, the Hirsch index, was adopted over the last decade to improve on the publication-count criterion: a researcher has index h if h of her/his papers have at least h citations each. It has now become commonplace in the evaluation of proposals. But h keeps overvaluing multi-authored papers beyond reason, because it disregards the number of authors and their relative contribution (in most research areas, the relative scientific contribution of the authors of a paper can be estimated, to a first approximation, by their position in the author list). The citations to an article of yours count equally whether you are the 1st author or the 100th. Therefore, a paper with 100 authors has 100 times more impact on the evaluation system than a single-authored paper. And I'm not being rhetorical here. Please, meet two of the highest-ranked scientists in Spain (just as an example):
Profile 1: 61k citations, 137 papers in 2013 alone, h=112.
Profile 2: 117k citations, 164 papers in 2013 alone, h=75.
I leave it to you to find the flaw.
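
To make the flaw concrete: computing h is trivial, and nothing about authorship ever enters it. A minimal sketch in Python (the citation records below are made up for illustration, not taken from the two profiles above):

```python
# Minimal sketch of the Hirsch index: a researcher has index h if
# h of her/his papers have at least h citations each.
def h_index(citations):
    """Given per-paper citation counts, return h."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Made-up records: many moderately cited (co)authored papers
# versus a few heavily cited ones.
many_papers = [30] * 120        # 120 papers, 30 citations each -> 3,600 citations
few_blockbusters = [2000] * 20  # 20 papers, 2,000 citations each -> 40,000 citations

print(h_index(many_papers))       # 30
print(h_index(few_blockbusters))  # 20
```

Note how the second record gathers ten times more citations yet scores a lower h, and how the author list never enters the calculation. That is the flaw the two profiles above exhibit.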

Very predictably, the number of authors per paper has grown wildly, and former research groups have often become author pools, with the entire group signing every paper published by each of its members. A symptom of this is that few researchers dare to publish on their own today:
[Figure: Average number of authors per paper over the last 100 years. Source: PubMed]
[Figure: % of single-authored papers over the last century.]

[Figure: The left bar indicates the average number of articles published by authors who stopped publishing within 15 years of their first publication. The blue bar on the right shows the articles published in the same timespan by researchers who continued publishing after 15 years. The red bar on top indicates the articles of those same researchers after the 15th year. One can see that the researchers who continue publishing are those with a high research output. It also shows that the research output before the 15-year mark is the portion that contributes most to the overall values. Source]
So the drive to scientific publication is still based on quantity, not quality.

Ask any editor how many of their requests to review a manuscript are declined by peers, and you'll learn that they often end up doing the reviews themselves. Too many papers for so few reviewers/authors. It is unsurprising to find funny bugs like this in articles that were supposed to have been peer-reviewed.

It is difficult to find objective (quantitative) criteria for quality. And perhaps it is also time to question our trust in objective parameters; alternatives such as assessing the subjective impact foreseen for a given line of research are also risky. But if the citation criterion is to be adopted, then we do not need h-like indexes or journal impact factors. Better metrics have been proposed (examples); they just need to be adopted. Why not account for the author order, for instance? Or for the number of coauthors?
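
As a toy illustration of that last idea (the harmonic weighting below is just one plausible choice I am assuming for the sketch, not a published standard), one could discount each paper's citations by the author's share of the credit, split by position in the author list, before computing an h-like index:

```python
from typing import List, Tuple

def harmonic_credit(position: int, n_authors: int) -> float:
    """Share of credit for the author at `position` (1 = first author),
    under a harmonic split: position k gets a weight proportional to 1/k."""
    total = sum(1.0 / k for k in range(1, n_authors + 1))
    return (1.0 / position) / total

def weighted_h(papers: List[Tuple[int, int, int]]) -> int:
    """papers: one (citations, author_position, n_authors) tuple per paper.
    Computes h over credit-weighted (effective) citation counts."""
    effective = sorted(
        (cites * harmonic_credit(pos, n) for cites, pos, n in papers),
        reverse=True,
    )
    h = 0
    for rank, cites in enumerate(effective, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Made-up record: two papers with identical citations but opposite roles.
papers = [
    (90, 1, 2),     # first author of two  -> ~67% of the credit -> ~60 effective
    (90, 50, 100),  # 50th of 100 authors  -> ~0.4% of the credit -> ~0.35 effective
]
print(weighted_h(papers))  # 1: only the first-author paper carries real weight
```

With plain h, the same two papers would score h=2; weighting by authorship credit reduces the 100-author paper to background noise.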

Under the present rules, young researchers are pressed to publish as much as possible instead of as well as possible, not only perverting the research system but also inflating a huge publication bubble. The warning lights have long been on. China has already recognized the problem and may soon take action. Why not Europe? Will we wait until this bubble bursts?

Wicked rules pervert communities, so let's just adopt better rules. In 10 years, the science publishing landscape will be unrecognizable anyway.

[Image source: Science Mag]


PS: Interesting discussion in the comment section of this column in last week's Nature.
PS2: Ironically, the journal Bubble Science was discontinued earlier this year.

PS3: A new metric proposed: "Author Impact Factor: tracking the dynamics of individual scientific impact", by R. K. Pan & S. Fortunato. link

PS4: A new journal now allows publishing citable articles of fewer than 200 words with a DOI. What next? Citable tweets?

PS5: "paper reinventing the trapezoidal rule https://t.co/CANpYZ8GkX published in high-impact journal & has 268 citations pic.twitter.com/ECI04VaoQ3" — ∆ Garcia-Castellanos (@danigeos), January 28, 2016