In the beginning it was the Impact Factor

When I started my career as a researcher (very late 80's...), I was told to publish in international journals of good reputation. And in those days good reputation basically meant journals with an Impact Factor as high as possible (i.e., the index delivered by Thomson Reuters in their Journal Citation Reports, updated yearly). Not all journals had an Impact Factor, and in my field a journal was considered good enough when its IF was at least 0.4-0.5. Conferences lagged behind by a long distance: many did not even have archived proceedings, and the probability of acceptance at a conference was (and is) much higher than at good journals (coupled with much shorter refereeing cycles...).
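(For the record, the Impact Factor of a journal for year Y is just a ratio: citations received in Y to what the journal published in the two preceding years, divided by the number of citable items it published in those two years. A minimal sketch in Python, with made-up numbers chosen only to show what landing around that 0.4-0.5 threshold looks like:)

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Two-year Impact Factor for year Y: citations received in Y to items
    published in years Y-1 and Y-2, divided by the number of citable items
    published in those same two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Illustrative (made-up) figures: 60 citations this year to the 120 citable
# items published in the previous two years -> IF = 0.5, i.e., right around
# the "good enough" threshold mentioned above.
print(impact_factor(60, 120))  # 0.5
```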

But, after a while, we were told that the number of papers published in journals with an IF was not meaningful enough to evaluate someone's scientific activity. And so we got to know the citation-based H-index. The newly arrived index changed the publishing strategies of many (and it has many shortcomings, even as a metric). With the same effort put into writing one journal paper, you could write several conference papers, multiplying the chances of getting cited. Survey papers, though containing no original scientific content, could bring you many more citations than original research papers. And people working on fashionable subjects could count on a much larger (potentially citing) audience. Finally (putting on my **** hat), tricking the H-index can be much simpler than tricking the Impact Factor: if you can count on a circle of friends (or at least of colleagues publishing on the same topics as you), you can cite one another and watch your citation counts shoot up rapidly.
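For anyone who never bothered to look up the definition: the H-index is simply the largest h such that you have h papers with at least h citations each. A minimal sketch (plain Python, assuming the citation counts are handed to us as a list):

```python
def h_index(citations):
    """Largest h such that at least h papers have been cited at least h times."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank   # there are 'rank' papers with at least 'rank' citations
        else:
            break      # counts are sorted, so no larger h is possible
    return h

# Example: five papers cited 10, 8, 5, 2 and 1 times give an H-index of 3.
print(h_index([10, 8, 5, 2, 1]))  # 3
```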

Anyway, then it was the turn of the H-index. Some variations started to circulate: the overall number of citations could do justice to prolific but not very popular authors. And there is even a time-weighted version of the H-index, the not-so-notorious Katsaros index (a name that has a funny sound in Italian anyway... nomen omen), where the value of citations fades over time (I wonder what Einstein's publications would be worth now if evaluated with such a metric).

But new metrics keep coming forward, pushed by changes in the means of communication. And the latest fad is the set collectively known as Altmetrics (the name is short for Alternative Metrics). The new set of metrics includes all the new forms of communication on the net: presence on blogs, sharing of datasets and code, comments on existing work, links, etc. Even traditional publishing houses such as Elsevier are embracing the new approach: a third-party application (Altmetric for Scopus) has recently been launched, running within the sidebar of Scopus.

However, keeping abreast of the increasing number of indices and striving for visibility is proving to be a big effort, taking up an ever larger slice of our time and energy. Many of us continuously update our homepages, in addition to hosting our papers on platforms such as Mendeley, Academia.edu, and ResearchGate. And we may compulsively visit our pages on Google Scholar or Scopus, checking our citations and H-index.

While this modern form of sticker collecting may be fun, and gives us a sense of progress as our counters grow and grow, are these figures suitable for measuring our contribution? Well, they do give some indication of how active a researcher is, but relying on them alone to rank people may lead to gross mistakes. A researcher with an H-index of 20 is certainly much more prominent than one with an H-index of 2, and having 1000 citations certainly puts you steps above someone with just 100, but can we say anything trustworthy when the difference is narrower, or when the figures being compared are all large anyway?
