
Metrics and Evaluation

Research evaluation is now a central issue for research institutions. Metrics and evaluation tools have multiplied rapidly, and it is sometimes hard to choose the appropriate one. Bibliometric indicators provide a quantitative evaluation of a journal, an article or a researcher. Institutions and research centres use them for decisions on researchers' promotion, recruitment, etc. This page presents several bibliometric indicators that aim to evaluate research outputs and researchers.

Research outcome metrics

Citation counting is the most widely used research evaluation tool for research articles, editorials and reviews. Depending on the kind of text evaluated, several citation indicators are available (Web of Science, Scopus, PubMed, Google Scholar, …). Each provider gives its own citation counts, which depend on the coverage of its article database. For example, while Scopus coverage of older articles is rather low, it covers a larger number of references in the Social Sciences than Web of Science.

Citation analysis, however, has several limits that need to be emphasized: self-citation, exclusive citation of articles from the same journal, difficulty in evaluating each co-author's real contribution to the article, etc.


For several years, an additional family of metrics has been developed and used as an alternative to traditional citation indicators: Altmetrics (Article Level Metrics). These metrics rely on an article's visibility on the Internet, and in particular on social media. They record every time an article is mentioned, viewed or downloaded on social media (general or academic), repositories (disciplinary or institutional), blogs, publisher websites, etc.

Several tools collect such data (CitedIn, ImpactStory, Altmetric or PlumAnalytics (EBSCO)) and allow Altmetrics to provide an immediate measure of impact. This measurement is however criticized by part of the scientific community, which argues that Altmetrics evaluates the social impact of a research article, not its scientific impact. Moreover, Altmetrics is often blamed for relying on social media platforms whose future is uncertain.

Journal metrics

  • Impact Factor (IF): This indicator, provided by Thomson Reuters, is the ratio between the number of citations received in year X by articles published in the two previous years (X-1 and X-2; a five-year variant also exists) and the number of "citable" articles published in those same years. It thus rates every journal and allows journals to be compared with one another (a short code sketch after this list illustrates the calculation).

    Example (illustrative figures): a journal whose 2013-2014 articles received 200 citations in 2015, and which published 100 citable articles over 2013-2014, has a 2015 IF of 200 / 100 = 2.0.

    Although this indicator is the best known, it is also the most criticized metric for journal evaluation. First, not all articles in high-IF journals receive many citations. Moreover, the pace of publication differs greatly across disciplines (SSH, SSS, SST), and so does the number of citations. Finally, it has been shown that a powerful journal with substantial financial means is better able to promote its articles, and can thereby increase its IF not because of its real impact but because of its popularity.

  • SCImago Journal Rank (SJR): This indicator is generated for journals referenced in Scopus. It counts the citations received by a journal's articles in the first three years after publication. An algorithm then weights these citations by the relative prestige of the citing journals. The result is finally divided by the number of articles published by the journal over the three reference years.

  • Eigenfactor (EF): This indicator is generated for journals referenced in Web of Science. The Eigenfactor is based on the citations received over the last five years by a journal's publications, weighted by the prestige of the citing journals, and then normalized so that the scores of all journals sum to 100. Self-citations are not taken into account in this indicator.

  • Source Normalized Impact per Paper (SNIP): This indicator is generated for journals referenced in Scopus. The SNIP provides a discipline-normalized metric for journals. It counts the citations received by a journal's articles over the last three years, divides this by the number of articles the journal published over the same period, and weights the result by a discipline-specific factor.
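
To make the arithmetic behind the simplest of these journal ratios concrete, here is a minimal Python sketch of a two-year Impact Factor calculation. It is only an illustration: the function name, the data layout and the figures are hypothetical, not an official API of Thomson Reuters or any other provider.

def impact_factor(citations_received, citable_items, year):
    # Two-year IF for `year`: citations received in `year` to articles
    # published in the two previous years, divided by the number of
    # citable articles published in those same two years.
    previous_years = (year - 1, year - 2)
    received = sum(citations_received[year].get(y, 0) for y in previous_years)
    published = sum(citable_items.get(y, 0) for y in previous_years)
    return received / published if published else 0.0

# Illustrative figures matching the example above: 200 citations in 2015
# to articles published in 2013-2014, and 100 citable articles in 2013-2014.
citations_received = {2015: {2014: 120, 2013: 80}}
citable_items = {2014: 55, 2013: 45}
print(impact_factor(citations_received, citable_items, 2015))  # 2.0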

Researcher metrics

  • H-index: This indicator evaluates the output of a researcher (or group of researchers). It combines the number of articles published with the number of citations received: a researcher has an H-index of h when h of his or her articles have each been cited at least h times. It is based on the publications referenced by a given database (Scopus, Google Scholar or WoS), so the H-index may be very different from one database to another. This indicator is commonly used to evaluate publications when promoting or hiring a researcher. It however favours researchers with a long career, counts citations whether they are positive or negative, and depends on the coverage of the database used (a short code sketch after the G-index item below illustrates the calculation).

    Examples:

  • 50 articles, none of them cited: H-index = 0

  • 50 articles, 10 of which are cited at least 10 times: H-index = 10

Advantages

  • Measures the impact of a researcher as a whole, not of individual articles.

Limits

  • Varies greatly across disciplines.
  • Favours scientists with a long career.
  • Takes into account articles in which the author played a very limited role.
  • Does not take high-impact articles into account.

  • G-index: a variation of the H-index that gives more weight to a scientist's most cited publications (the G-index is the largest number g such that the g most-cited articles have together received at least g² citations).
    • Its advantages are similar to those of the H-index.
    • Difference: high-impact articles are valued more highly.
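
As an illustration of the two researcher-level indicators above, here is a minimal Python sketch that computes the H-index and the G-index from a plain list of citation counts per article. The data are hypothetical and the functions are not tied to any particular database; the sketch simply reproduces the H-index examples given above.

def h_index(citations):
    # Largest h such that h articles have each been cited at least h times.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def g_index(citations):
    # Largest g such that the g most-cited articles have received
    # at least g * g citations in total.
    counts = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(counts, start=1):
        total += cites
        if total >= rank * rank:
            g = rank
    return g

# Examples from the text:
print(h_index([0] * 50))              # 50 uncited articles -> 0
print(h_index([10] * 10 + [0] * 40))  # 10 articles cited 10 times each -> 10
print(g_index([10] * 10 + [0] * 40))  # -> 10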

Publish or Perish

Publish or Perish is a program developed by A.W. Harzing that allows researchers to analyze their citations; it is available on the developer's website. It provides the major bibliometric indicators (e.g., the G- and H-indexes) based on the Google Scholar database, which covers a larger number of scientific outputs than WoS or Scopus.

  • Where to consult bibliometric indexes (index, source, access conditions):

Citation
  • Web of Science (Thomson): paid, no UCLouvain subscription
  • Scopus (Elsevier): paid, UCLouvain subscription
  • PubMed: free of charge
  • Google Scholar: free of charge

Impact Factor
  • Web of Science (Thomson): paid, no UCLouvain subscription
  • JCR (Thomson): paid, no UCLouvain subscription
  • Bioxbio: free of charge
  • Publisher website: free of charge

SCImago Journal Rank (SJR)
  • SCImago: free of charge
  • Scopus (Elsevier): paid, UCLouvain subscription

Eigenfactor
  • Eigen: free of charge
  • JCR (Thomson): paid, no UCLouvain subscription

Source Normalized Impact per Paper (SNIP)
  • Journal Metrics Value: free of charge
  • Scopus (Elsevier): paid, UCLouvain subscription
  • AGORA: free of charge
  • HINARI: free of charge
  • OARE: free of charge

H-index
  • Web of Science (Thomson): paid, no UCLouvain subscription
  • Scopus (Elsevier): paid, UCLouvain subscription
  • JCR (Thomson): paid, no UCLouvain subscription
  • Harzing (Publish or Perish): free of charge

G-index
  • Harzing (Publish or Perish): free of charge