About SJR

The metrics

For Scopus users, SNIP and SJR have been integrated into Journal Analyzer.


Journal Metrics Factsheet
Measuring journal performance has traditionally been the preserve of a single metric. But no two journals are alike, and neither are the tools that measure them. Two metrics, driven by Scopus, are changing the way we look at journal analysis.

Journal Metrics Whitepaper
Journal metrics are central to most performance evaluations, but judging individual researchers based on a metric designed to rank journals can lead to widely recognized distortions. In addition, judging all academic fields and activities based on a single metric is not necessarily the best basis for fair comparison.

Papers about the metrics

Some modifications to the SNIP journal impact indicator
In this paper, a number of modifications that will be made to the SNIP indicator are explained, and the advantages of the resulting revised SNIP indicator are pointed out. It is argued that the original SNIP indicator has some counterintuitive properties, and it is shown mathematically that the revised SNIP indicator does not have these properties.

‘SJR and SNIP: two new journal metrics in Elsevier's Scopus’
This is a paper by Lisa Colledge, Félix de Moya‐Anegón, Vicente Guerrero‐Bote, Carmen López‐Illescas, M'hamed El Aisati and Henk Moed, published in Serials: The Journal for the Serials Community. It introduces SNIP and SJR, and underlines important points to keep in mind when using journal metrics in general. It presents comparisons between these and other metrics, and discusses their potential with regard to user needs and theoretical views of journal performance.

‘Measuring contextual citation impact of scientific journals’
This is a research paper by Professor Henk Moed (previously at CWTS), developer of the SNIP journal metric, published in the Journal of Informetrics. The paper explores a new indicator of journal citation impact, denoted source normalized impact per paper (SNIP). It measures a journal's contextual citation impact, taking into account the characteristics of its properly defined subject field, especially the frequency at which authors cite other papers in their reference lists.
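At a very high level, the idea the paper describes can be summarized as raw impact per paper divided by the citation potential of the journal's field, so that journals in sparsely citing fields are not penalized. The sketch below is only an illustration of that idea, with invented numbers and a hypothetical `relative_citation_potential` parameter; it omits how the actual indicator delimits a journal's subject field and computes the potential.

```python
# Hedged sketch of the SNIP idea: raw impact per paper (RIP) divided by
# the field's relative citation potential. All numbers are invented.

def raw_impact_per_paper(citations, papers):
    """Average citations received per paper (illustrative)."""
    return citations / papers

def snip_sketch(rip, relative_citation_potential):
    # relative_citation_potential > 1 in densely citing fields,
    # < 1 in sparsely citing ones (hypothetical parameter).
    return rip / relative_citation_potential

rip = raw_impact_per_paper(citations=300, papers=100)   # 3.0
print(snip_sketch(rip, relative_citation_potential=1.5))
```

With these made-up inputs, a journal averaging 3 citations per paper in a field that cites 1.5 times as densely as the database average ends up with a normalized score of 2.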

‘The SJR indicator: A new indicator of journals' scientific prestige’
This is a paper by Félix de Moya, Research Professor at the Consejo Superior de Investigaciones Científicas, and Vicente Guerrero Bote of the University of Extremadura, published in the Journal of Informetrics. The paper proposes an indicator of journals' scientific prestige, the SJR indicator, which ranks scholarly journals using citation weighting schemes and eigenvector centrality, and is designed for complex and heterogeneous citation networks such as Scopus.
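The eigenvector-centrality idea behind SJR, in which a citation counts for more when it comes from a prestigious journal, can be sketched with a toy power iteration. The 3x3 citation matrix below is invented, and the real indicator adds damping, size normalization and other refinements; this is only a sketch of the underlying principle.

```python
# Toy power-iteration sketch of eigenvector centrality on a citation
# network: each journal passes its prestige to the journals it cites,
# split across its references. The matrix is invented for illustration.

def prestige_scores(citation_matrix, iterations=100):
    """citation_matrix[citing][cited] = citations from one journal to another."""
    n = len(citation_matrix)
    scores = [1.0 / n] * n
    for _ in range(iterations):
        new = [0.0] * n
        for citing in range(n):
            total_refs = sum(citation_matrix[citing])
            if total_refs == 0:
                continue
            for cited in range(n):
                # Citing journal passes on its prestige, split over its refs.
                new[cited] += scores[citing] * citation_matrix[citing][cited] / total_refs
        norm = sum(new)
        scores = [s / norm for s in new] if norm else scores
    return scores

# Journal 0 is cited heavily by the other two, so it earns the top score:
toy = [[0, 1, 1],
       [4, 0, 1],
       [4, 1, 0]]
print([round(s, 2) for s in prestige_scores(toy)])
```

Note how journal 0 ends up with the highest score not merely because it receives the most citations, but because those citations carry the prestige of the journals making them; that recursion is what distinguishes prestige-style metrics from simple citation counts.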

Other resources about the metrics

Research Trends, January 2010
This is a special issue of Research Trends introducing the metrics and discussing research-performance evaluation in general.

‘The Evolution of journal assessment’ (pdf)
This white paper reviews the evolution of journal metrics until today. It discusses how research-performance assessment has changed both in scope and objective over the past 50 years.

Download the SNIP and SJR fact sheet (pdf) for a two-page overview on what each of the metrics mean and how they are calculated.

Learn how SNIP and SJR are calculated in five minutes by viewing this short demo.

Get a snapshot overview of the differences between SNIP, Impact Factor and SJR here (pdf).

Finding a Way Through the Scientific Literature: Indexes and Measures
Thomas Jones, Sarah Huggett and Judith Kamalski (Elsevier)

SNIP information website
Learn more about what CWTS (Centre for Science and Technology Studies) in the Netherlands is working on, and about its role in developing the SNIP metric.

SCImago Journal & Country Rank website
From this site you can download specific SJR metric values at the country level. It also gives in-depth information on related publications about SJR.

Debate in the literature

‘Scopus’s Source Normalized Impact per Paper (SNIP) versus a Journal Impact Factor based on Fractional Counting of Citations’
In this paper, published in the Journal of the American Society for Information Science and Technology, Loet Leydesdorff and Tobias Opthof argue that when using SNIP, the normalization for sub-fields should be performed before the subsequent division. They propose that this be done by fractional counting based on the number of references in the citing papers, so that a citation from a paper with n references is weighted as 1/n of a citation. They state that this method enables investigation of statistical differences in addition to field normalization. Henk Moed replies below.
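The fractional counting scheme described here, in which a citation from a paper with n references counts as 1/n of a citation, is straightforward to illustrate. The reference counts in the sketch below are hypothetical:

```python
# Minimal sketch of fractional counting of citations as Leydesdorff and
# Opthof describe it: a citation from a paper with n references is
# weighted as 1/n. The reference counts below are invented.

def fractional_citation_count(citing_reference_counts):
    """Each citing paper contributes 1/n, where n is the number of
    references in that citing paper."""
    return sum(1.0 / n for n in citing_reference_counts if n > 0)

# A journal cited by three papers carrying 10, 25 and 50 references:
score = fractional_citation_count([10, 25, 50])
print(round(score, 2))  # 0.1 + 0.04 + 0.02 = 0.16
```

Under whole counting the journal would receive 3 citations; under fractional counting it receives 0.16, because each citation is diluted by the length of the citing paper's reference list.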

In two Letters to the Editor (2010, Journal of Informetrics; 2011, Journal of the American Society for Information Science and Technology), Henk Moed replies to Loet Leydesdorff and Tobias Opthof’s paper ‘Scopus’s Source Normalized Impact per Paper (SNIP) versus a Journal Impact Factor based on Fractional Counting of Citations’. While agreeing that debate is useful, Moed highlights the following strong features of SNIP that are absent from Leydesdorff and Opthof’s proposed measure:

  • Citation potential must be calculated for the same citation window as raw impact per paper.
  • SNIP's citation potential corrects for differences in database coverage between fields, in addition to differences in citation frequency.
  • SNIP's citation potential normalizes for differences in the ages of references observed between subject fields.
  • The range of values is similar to that of the Impact Factor.

Moed also points out two problems with the fractional counting of references:

  • Papers with no citations in the three-year window are discarded, which biases the citation potential.
  • The value of a citation within exactly the same field will vary depending on the number of references in the citing paper.

Read the Letters