What are Journal Metrics?
Journal metrics measure the performance and/or impact of scholarly journals. Each metric has its own particular features, but in general, they all aim to provide rankings and insight into journal performance based on citation analysis. They start from the basic premise that a citation to a paper is a form of endorsement, and the most basic analysis can be done by simply counting the number of citations that a particular paper attracts: more citations to a specific paper means that more people consider that paper to be important. Citations to journals (via the papers they publish) can also be counted, thus indicating how important a particular journal is to its community, and in comparison to other journals. Different journal metrics use different methodologies and data sources, thus offering different perspectives on the scholarly publishing landscape, and bibliometricians use different metrics depending on what features they wish to study.
What is SNIP?
SNIP, or Source-Normalized Impact per Paper, measures a source’s contextual citation impact. It takes into account characteristics of the source's subject field, especially the frequency at which authors cite other papers in their reference lists, the speed at which citation impact matures, and the extent to which the database used in the assessment covers the field’s literature. SNIP is the ratio of a source's average citation count per paper, and the ‘citation potential’ of its subject field. It aims to allow direct comparison of sources in different subject fields.
A source's subject field is the set of documents citing that source. The citation potential of a source's subject field is the average number of references per document citing that source. It represents the likelihood of being cited for documents in a particular field. A source in a field with a high citation potential will tend to have a high impact per paper.
Citation potential is important because it accounts for the fact that typical citation counts vary widely between research disciplines – they tend to be higher in Life Sciences than in Mathematics or Social Sciences, for example. If papers in one subject field contain on average 40 cited references while those in another contain on average 10, then the former field has a citation potential that is four times higher than that of the latter. Citation potential also varies between subject fields within a discipline. For instance, basic journals tend to show higher citation potentials than applied or clinical journals, and journals covering emerging topics tend to have higher citation potentials than periodicals in well established areas.
For sources in subject fields in which the citation potential is equal to the average of the whole database, SNIP has the same value as the ‘standard’ impact per paper. But in fields with a higher citation potential – for instance, a topical field well covered in the database – SNIP is lower than the impact per paper. In fields in which the citation potential is lower – for instance, more classical fields, or those with moderate database coverage – SNIP tends to be higher than the impact per paper. In this way, SNIP allows you to rank your own customized set of sources, regardless of their subject fields.
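As a rough numerical sketch of this relationship (the figures are invented, and the full CWTS methodology includes refinements omitted here), SNIP can be thought of as a source's raw impact per paper divided by its field's citation potential relative to the database average:

```python
def snip(raw_impact_per_paper, field_citation_potential, database_citation_potential):
    """Simplified SNIP: raw citations per paper, normalized by the
    subject field's citation potential relative to the database average."""
    relative_potential = field_citation_potential / database_citation_potential
    return raw_impact_per_paper / relative_potential

# A journal in a heavily-citing field (40 references per citing paper,
# against a database average of 20) needs twice the raw impact to reach
# the same SNIP as a journal in a lightly-citing field (10 references).
print(snip(4.0, 40, 20))  # 2.0 -- raw impact halved
print(snip(1.0, 10, 20))  # 2.0 -- raw impact doubled
```

The two journals end up with identical SNIP values despite a fourfold difference in raw impact, which is exactly the cross-field comparability the metric aims for.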
What is SJR?
SJR, or SCImago Journal Rank, is a measure of the scientific prestige of scholarly sources.
SJR assigns relative scores to all of the sources in a citation network. Its methodology is inspired by the Google PageRank algorithm, in that not all citations are equal. A source transfers its own 'prestige', or status, to another source through the act of citing it. A citation from a source with a relatively high SJR is worth more than a citation from a source with a lower SJR.
A source's prestige for a particular year is shared equally over all the citations that it makes in that year; this is important because it corrects for the fact that typical citation counts vary widely between subject fields. The SJR of a source in a field with a high likelihood of citing is shared over many citations, so each citation is worth relatively little. The SJR of a source in a field with a low likelihood of citing is shared over few citations, so each citation is worth relatively more. The result is to even out the differences in citation practice between subject fields, and facilitate direct comparisons of sources.
SJR emphasizes those sources that are used by prestigious titles. SJR allows you to rank your own customized set of sources, regardless of their subject fields.
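The prestige-transfer idea can be sketched with a toy citation network of three invented journals. This is only the core iteration; the real SJR algorithm adds a damping factor, a cap on self-citations, and size normalization.

```python
# citations[i][j] = citations from journal i to journal j in the window.
citations = [
    [0, 8, 2],   # journal A cites B heavily
    [1, 0, 1],   # journal B makes few citations, so each carries more prestige
    [5, 5, 0],   # journal C cites A and B equally
]
n = len(citations)
prestige = [1.0 / n] * n  # start with equal prestige

for _ in range(100):  # iterate until the scores settle
    new = [0.0] * n
    for i in range(n):
        out = sum(citations[i])  # prestige is shared over all citations made
        for j in range(n):
            if out:
                new[j] += prestige[i] * citations[i][j] / out
    prestige = new

# Journal B ends up most prestigious: it is cited by both others, and the
# citations it gives away are shared over only two references each.
print([round(p, 3) for p in prestige])
```

Note that B outranks A even though A receives more total citations in this network, because prestige depends on who cites you and how selectively they cite.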
How do SNIP and SJR compare with other journal metrics?
All journal metrics calculate journal prestige in different ways, and each has specific strengths and weaknesses. A comparison table with all the major journal metrics is available in our White Paper.
What are the main advantages of SJR and SNIP in relation to the Impact Factor?
- Transparency – SNIP and SJR are calculated from a database that you can see, so you can check the numbers. The Impact Factor is not calculated from the Web of Science but from a database hidden away within Thomson.
- Subject field normalization
- Life Sciences journals have huge Impact Factors compared to Math journals, which reflects different citation behaviors between fields, not quality.
- Even within one subject field, there can be different citation behaviors, making it difficult to know if a difference in Impact Factor is due to quality or behavior. Compare basic and applied journals, for example
- SJR and SNIP take this difference in behavior into account in the way they are calculated, so you don’t have to worry about which field a journal belongs to.
- An added benefit is that the way SJR and SNIP account for subject field differences is independent of the source classification in Scopus. Even if a journal has recently changed scope, making its classification outdated, the field used to correct for subject differences will nevertheless reflect its current scope, because a journal's subject field is defined by the documents that cite it.
- Three-year citation window
- The ‘citation window’ is the number of years of content that a metric is based on.
- Both SCImago and CWTS use a three-year citation window for their journal metrics. They demonstrate that this window is the fairest compromise for a broad-scope database like Scopus and that it includes the citation peaks of the majority of fields. The graph below illustrates this point. It is taken from a paper published by the SCImago research group: ‘What lies behind the averages and significance of citation indicators in different disciplines?’ http://jis.sagepub.com/content/36/3/371.short by Bárbara S. Lancho-Barrantes, Vicente P. Guerrero-Bote and Félix Moya-Anegón, Journal of Information Science, June 2010; vol. 36, 3: pp. 371–382.
- Thomson Reuters publishes two versions of the Impact Factor – with two- and five-year citation windows. A two-year citation window favors rapidly moving fields and may be unfair to slower moving fields. Five years is often considered the best compromise, although it favors slower moving fields and may be biased against rapidly moving ones.
- It is relatively easy to manipulate the Impact Factor because it is generated from citations from all content – including non-peer-reviewed content such as editorials.
- SNIP and SJR only consider citations made by peer-reviewed content and directed to peer-reviewed content. It is also much more difficult to interfere with peer-reviewed content.
- Peer-reviewed content in Scopus = articles, reviews and conference papers.
- Breadth of coverage – not all journals have Impact Factors, yet librarians and scientists need values for all the journals they work with. Scopus’ much broader coverage means that this is no problem for SJR and SNIP.
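The peer-review restriction described above can be sketched as a simple filter. This is only an illustration: the record format and field names below are hypothetical, not the actual Scopus data structure.

```python
# Document types that count as peer-reviewed content in Scopus.
PEER_REVIEWED = {"article", "review", "conference paper"}

def countable_citations(citations):
    """Keep only citations made by, and directed to, peer-reviewed content,
    as SNIP and SJR do; the Impact Factor counts citations from all types."""
    return [c for c in citations
            if c["citing_type"] in PEER_REVIEWED
            and c["cited_type"] in PEER_REVIEWED]

cites = [
    {"citing_type": "editorial", "cited_type": "article"},   # excluded
    {"citing_type": "article", "cited_type": "article"},     # counted
    {"citing_type": "review", "cited_type": "conference paper"},  # counted
]
print(len(countable_citations(cites)))  # 2
```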
What are the weak points of SJR and SNIP in relation to the Impact Factor?
- The calculation method is more complicated than the Impact Factor. Any attempt to normalize across subject fields necessarily results in more complex algorithms. You cannot have this feature and simplicity as well.
- They do not address the bias of review articles. This is not always a problem, but in some situations it can be:
- Review journals, or original research journals with a high proportion of review content, tend to have higher Impact Factors than purely original research journals in the same field because reviews tend to be more heavily cited than original research.
- Impact Factor, SNIP and SJR do not compensate for this, so review or review-rich journals tend to have higher Impact Factor, SJR and SNIP.
- Note: You can see the proportion of review content in Scopus by looking at the documents for a particular journal. In Scopus Journal Analytics, simply click on the ‘Percent reviews’ tab. This lets you assess the impact such reviews might be having, and so better judge the strength of a particular journal.
- The numbers need to be understood in context.
- An Impact Factor of two means an average of two citations per article/review published in that journal.
- SJR is based on a random-walk model. It calculates the percentage of time a researcher would spend reading content from each journal if they randomly followed references from one article to another. An SJR of two means that two percent of the researcher’s time is spent reading this particular journal.
- A SNIP value that is higher than one means that the journal has an above average SNIP for its field. A SNIP that is lower than one means that the journal has a below average SNIP for its field. If SNIP = 1, the journal is absolutely average for its field.
- SJR and SNIP values make sense within the context of values for other journals. They are just a way to put journals in order.
- The range that SJR and SNIP cover (about 0 to 10) is smaller than the range of the Impact Factor (about 0 to 60). As a result, some journals will have SNIP and SJR values that are higher than their Impact Factor, while for others they will be lower.
- A possible disadvantage in some situations for SNIP: If a journal is cited by Nature in one year, this will increase its citation potential, and reduce its SNIP value. Citation potential tends to be highest for topical journals, so if your journal is becoming more topical, it is likely to end up with a higher citation potential and will need more citations to compensate (more or less the definition of being topical). But remember that SNIP and SJR are complementary, and SJR will ‘reward’ a journal for a citation from a high prestige (topical) journal.
- A possible disadvantage in some situations for SJR: If a journal is often cited by low prestige (low SJR) journals, it might not get as much credit as you would expect from the extra citations. Low prestige journals will tend to be less topical and have a lower citation potential, so if this is a problem you may choose to use the complementary metric SNIP instead.
How does the SJR differ from the Eigenfactor?
Both are ‘prestige metrics’, and follow the type of approach used by Google PageRank, but the method of calculation has some differences:
- Eigenfactor excludes all journal self-citations. SJR only excludes journal self-citations above 33% of the total received by a journal.
- Prestige metrics like SJR and Eigenfactor treat journals as a network linked together by their citations. These networks contain 'dangling nodes' – journals that have cited other journals, but have not received citations themselves. These dangling nodes can be handled in different ways:
- Eigenfactor discounts them. If a journal has not received any citations, then the citations it makes have no value. The result is that not all journals in a database have an Eigenfactor value.
- SJR gives all citations from peer-reviewed content a value, whether or not the source has received citations itself. As a result, all journals in the database have a value.
- Eigenfactor uses a five-year citation window; SJR uses a three-year citation window.
- Eigenfactor is calculated based on the publicly unavailable Journal Citation Reports database. SJR is powered by Scopus.
- Note: the proper comparison is between Article Influence and SJR. Eigenfactor tracks citation power (all citations received by a journal), so bigger journals tend to have higher Eigenfactors. Article Influence is Eigenfactor / number of documents.
How does SJR differ from Google PageRank?
Google PageRank is derived from a similar philosophy to SJR and Eigenfactor. Values can be derived from a linked network in which elements have reciprocal links. These links have different values depending on where they come from. But there are some differences:
- In Google PageRank, value (prestige) is derived from the number of incoming hyperlinks. For SJR it is the number of incoming citations.
- Google PageRank rounds everything to an integer between 1 and 10; SJR uses a continuous scale.
- Google PageRank is open to manipulation because a hyperlink is a hyperlink. SJR can distinguish between citations based on the document type that they come from, making it highly resistant to manipulation.
- Google PageRank does not apply a ‘hyperlink window’ – it counts total incoming hyperlinks on the day it is calculated. SJR applies a three-year citation window.
- Google has not disclosed details of how it generates a toolbar PageRank value. SCImago has published a peer-reviewed article with details of the calculation method.
When should we best use SJR and SNIP, and not Impact Factor?
- For journals that do not have Impact Factors
- When subject field differences may affect ranking, and not only quality
- When comparing basic and applied journals
- When investigating multidisciplinary fields; e.g. in SciVal Spotlight
When should we best use SJR or SNIP?
The following are guidelines and not hard and fast rules. They are taken from: 'SJR and SNIP: two new journal metrics in Elsevier's Scopus'
Guidelines on when to consider SJR:
- To enhance the position of the most prestigious journals (SJR emphasizes the differences).
- If focusing on Life and Health Sciences.
- If topicality is important in journal performance.
- If you want to weight citations based on the status of the citing journal.
Guidelines on when to consider SNIP:
- If value is less important than rank (SNIP reduces the differences).
- If focusing on Engineering, Computer Science, and Social Sciences.
- If you are focused on subject field normalization.
- If you do not want to weight citations based on the status of the citing journal.
Who are these new metrics for?
Scopus incorporated SNIP and SJR into the Scopus database because bibliometricians, editors, researchers, librarians, and many others in academia said they wanted free, transparent alternatives for ranking journals. Please see our Vision. We believe the incorporation of SNIP and SJR in Scopus, and making our Scopus data available to SNIP and SJR to calculate the values, brings the following benefits to specific groups:
- Bibliometricians: SNIP and SJR provide alternative values that can assist bibliometricians create more refined and objective analyses. This can include:
- Measuring the quality of the research output of universities (research performance)
- Helping governments/universities allocate research funding.
- Editors: SNIP and SJR help editors evaluate their journal and understand how it is performing compared to its competition. SNIP and SJR provide more contextual information, and can give a better picture of specific fields, such as Engineering.
- Researchers: SNIP and SJR can help all academics identify which journals are performing best within their subject field so they know where to publish.
- Everyone: SNIP and SJR values are updated twice a year, providing an up-to-date view of the research landscape.
Who developed these journal metrics?
SNIP: Professor Henk Moed developed Source-Normalized Impact per Paper at CWTS, University of Leiden, the Netherlands. Please see:
- 'Measuring contextual citation impact of scientific journals' http://arxiv.org/abs/0911.2632
- SNIP information website http://www.journalindicators.com
- Both papers can also be downloaded from this site
Professors Félix de Moya, Research Professor at Consejo Superior de Investigaciones Científicas, and Vicente Guerrero Bote at University of Extremadura developed SCImago Journal Rank (SJR). Please see:
- 'The SJR indicator: A new indicator of journals' scientific prestige’ http://arxiv.org/abs/0912.4141
- SCImago journal and country rank website: www.scimagojr.com
- Both papers can also be downloaded from the journalmetrics.com site
Scopus provides raw data to SNIP and SJR, and provides access to these journal metrics, both on www.journalmetrics.com and in Scopus Journal Analyzer.
How is Scopus involved?
Why did you incorporate journal-ranking metrics in Scopus?
Metrics help researchers, librarians and decision-makers achieve their desired outcomes. Ultimately, our customers want to improve the quality and impact of research, whether they perform research, disseminate it or fund it. Through customer interviews, research studies and end-user focus groups, we have learned that various users, buyers and influencers use journal performance to answer a variety of questions, such as which journals to track, where to publish and how to evaluate research outcomes. Furthermore, we have noticed that the existing tools to compare journals do not fully meet the needs of the research community in terms of coverage, transparency and robustness.
For example, only journals covered in Thomson Reuters' Journal Citation Reports have an Impact Factor: that's about 8,000. However, researchers want to compare more journals in their fields than just those listed by Thomson Reuters.
With its breadth of journal coverage, Scopus is well positioned to fulfill this need: Scopus covers more than 18,000 publications and thus enables researchers to compare practically any journal they want to on the basis of transparent, robust and fair metrics.
Why release two indicators at the same time in Scopus?
No matter what you are evaluating – journals, researchers, institutions, countries – one metric can never encompass all the aspects of performance that the different users and situations demand. One-dimensional evaluation is limiting, misleading, and will give questionable results. There is no single ‘perfect’ indicator of journal performance. As a publisher, we use multiple indicators, including revenue, usage, and the opinion of the editor, to help broaden our view on our own journals’ performance, and we are not alone.
For this reason, we felt that it would give the wrong message to release only one metric. We could have endorsed more than two, but the research we have done led us to believe that SJR and SNIP are a good complementary pairing. The fact that Scopus endorsed two complementary measures (SNIP and SJR) reflects the notion that journal performance is a multidimensional concept.
How did you select SNIP and SJR?
We had some criteria in mind when we were considering which journal-ranking metrics we should include in Scopus:
- Enable multidimensional journal evaluation – i.e. one metric would not be a solution to the problem.
- Multiple metrics must highlight different aspects of journal performance – for instance, we would not include both SJR and Eigenfactor.
- Address user concerns with Impact Factor.
- Suitable to be calculated using Scopus data structure – we worked with bibliometricians to run test calculations for more than just SJR and SNIP.
- As many journals as possible indexed by Scopus should end up with metric values.
- Criterion not considered:
- Institutes and organizations involved in research and academic relations, including internal Elsevier departments, gave us valuable insights into journal performance.
For which source types will metrics be calculated?
All active peer-reviewed source-types in Scopus will get metrics. The Scopus ‘source browse’ file includes journals, proceedings, and book series, but not trade journals. Only the peer-reviewed content within these peer-reviewed sources (articles, reviews and conference papers) is used to generate the metrics. For comparison, the Impact Factor is calculated using citations from all document types, whether peer reviewed or not.
How will Scopus handle new journals?
New could mean (i) newly launched and indexed journals, or (ii) already established but newly indexed by Scopus.
Scopus, and thus SNIP and SJR, have a significant timing advantage for new titles. Metrics will be produced as soon as data are available for one complete publication year and some of a citation year. This gives you a head start of between one and two years.
- New journal (indexed in Scopus at launch): SNIP and SJR can be calculated for a new journal launched in, say, 2010, when Scopus has content for 2010 and part of the 2011 citation year. This means a new journal gets a SNIP and an SJR value in the year after launch. For Impact Factor, a 2010 journal would get a 2011 Impact Factor, first published in June 2012. (Based on one year of content and one year of citations.)
- Established journal (newly indexed in Scopus): This is the same as above. The first values will be published in the year after indexing. So, if a journal is indexed in 2010, it gets 2011 values in 2011. For Impact Factor, this journal would get a 2012 Impact Factor, published in June 2013. (Based on two years of content and one year of citations.)
For which years are metrics available?
The first year for which metrics are available is 1999. This is because it takes four years of data to calculate a complete value for SNIP and SJR, and Scopus citation data start in 1996.
At launch in January 2010, metrics were available from 1999 to 2009. The metrics are updated twice a year.
If any data are available for a particular year, metrics will be calculated even if that citation year is not complete.
When will metric values be refreshed?
We will refresh metric values twice per year. We aim to refresh in April and September, in line with the updates in Scopus’ Journal Analyzer.
An early version of any year’s metric can be calculated in that year – so in 2010, a 2010 metric can be calculated. At the date of writing, the most recent refresh was November 2010, based on a data cut taken in April 2010. Values up to 2009 are available in Scopus.com, and 2010 values are already available on www.journalmetrics.com, under the Journal Metrics Values tab.
For comparison, new Impact Factors are added to Journal Citation Reports in June of each year. The 2010 Impact Factor will be available in June 2011, significantly later than 2010 SNIP and SJR.
What exactly will be published when you refresh the data?
To answer this question, we first need to consider the effect that a dataset, in this case a specific database, has on the resulting values. SNIP and SJR are calculated from Scopus, which not only adds new content as it comes out, it is also continuously updating historical content. As a consequence, SNIP and SJR values cannot be fixed in time; when the values are published, they will take all the historical updates into account as well.
Scopus adds content retrospectively to fill gaps, include back files of additional publishers, and to add significant new areas of content, such as with the Arts & Humanities project. Scopus is dynamic, and always shows citations per document received up to the current moment.
Transparency in the generation of SNIP and SJR is very important. If we artificially fix SNIP and SJR at a particular point in time, then in the future we might find that what is in the database no longer reflects the fixed metric values. Transparency means being able to relate the metrics to the current state of the database.
Can I view SJR and SNIP by subject classifications?
You can do this, but one of the key advantages of SNIP and SJR is that you don’t have to use them in this way.
You can use SJR and SNIP within a journal category, but you can also create your own set of journals (your ‘virtual category’). You don’t have to worry about different subject fields and behavior because the metrics take care of this for you.
You are probably used to viewing journals within a subject field because metrics that don’t ‘normalize’ the differences in behavior between subject fields, like Impact Factor, are only useful when viewed in this way.
For reference, Scopus currently includes the following three levels of subject classification:
- The four subject areas shown on the basic search page.
- 27 main categories, within which you can search and view results in Scopus.
- 330+ sub-categories, which are not displayed or searchable in Scopus. These categories are supplied with the Custom Data sent to SCImago and CWTS.
Why might a journal in Scopus not have a metric for a particular year?
There are a few reasons why this could happen – take a look at the list below to see if any apply to the journal you are looking at:
- Newly launched/indexed and in the first year of content in Scopus.
- Newly launched/indexed and in second year of content in Scopus before the current year’s metrics release.
- Newly launched/indexed and in second year of content in Scopus but after current year metrics release. Journal could be behind schedule, or indexing could have run into problems (e.g. lack of delivery) meaning that there is no current year content in Scopus and so it is not yet possible to calculate metrics.
- Discontinued – no content in previous three consecutive years.
- The source is a trade journal – we only calculate metrics for peer-reviewed titles.
- Inactive titles according to Scopus source browse file – metrics are not calculated for these sources.
- Journal does not contain peer-reviewed content (articles, reviews or conference papers), although it has other content that is still indexed in Scopus.
With each data refresh, all values (current year and backwards) are recalculated and refreshed. The info site will house an archive of values for verification purposes.
Impact Factors, in comparison, are fixed for all time once published because no data are added retrospectively into their source database. Impact Factors are not calculated from the Web of Science, but from Journal Citation Reports – an internal database only available within Thomson. This also means users cannot access the data to understand why they are seeing certain results.
Why is this result (value/rank etc) different from the Impact Factor?
There are two aspects that can play a role here: (i) the database from which the metrics are calculated and (ii) the metrics themselves:
- Simply using a different database will cause a different result. If you calculated SNIP and SJR using Thomson’s database, you would get different values than you do using Scopus. They have different coverage.
- The metrics are calculated differently. SNIP and SJR are designed to evaluate journals based on different aspects of performance than those emphasized by the Impact Factor.
Think about the following examples for why there could be differences:
- Document type classification differences.
- Database content coverage differences.
- If your citation count is lower – remember that SJR and SNIP only count citations from articles, reviews and conference papers. Impact Factor counts citations from all document types, including editorials.
- SJR – citing journals could have low or high prestige.
- SNIP – normalization process using citation potential – a journal could sit in a field with a high citation potential, which will significantly reduce its raw impact value. Or it could sit in a field with a low citation potential, which will increase its raw impact value.
My journal’s SJR value/rank shows a sudden drop. Why?
The SJR value of a specific journal is not only affected by the prestige of the journals citing it, it can also change in relation to the (evolving) coverage of the database.
Professor Félix de Moya explains that in calculating the SJR, a value of ‘prestige’ is first calculated for every journal in the database. Prestige is size-dependent, and can be expressed mathematically as the probability that the paper a randomly chosen researcher is reading belongs to the journal in question. The probabilistic values for all the journals in the database sum to 1. In other words, the bare fact of being included in Scopus gives a journal some prestige, but Scopus has a fixed amount of prestige to share: if journal coverage increases over the years, that prestige is spread over more journals, and previously included journals will experience a decrease in base SJR value. Note: this decrease will not necessarily affect a journal’s rank unless more prestigious journals are included in Scopus.
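The fixed-sum behavior described above can be illustrated with a toy calculation (the raw scores are invented): because database prestige is normalized to sum to 1, expanding coverage lowers a journal's share even when its rank is unchanged.

```python
def normalized_prestige(raw_scores):
    """Scale raw prestige scores so the whole database sums to 1."""
    total = sum(raw_scores)
    return [s / total for s in raw_scores]

before = normalized_prestige([3.0, 2.0, 1.0])           # three journals indexed
after = normalized_prestige([3.0, 2.0, 1.0, 2.0, 2.0])  # coverage expands

# The first journal's share falls even though it still ranks first.
print(before[0])  # 0.5
print(after[0])   # 0.3
```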
It is also important to note that the Citations and Documents charts in Journal Analyzer don’t refer to the same windows as SJR and SNIP. The citations chart shows citations to all publications up to the respective year. For a journal to maintain or increase its citation rates, the line should go upwards, as it does for Nature and Science in the screenshots below. However, for Cell the line stays flat, indicating that each document is being cited less often because there are more available to be cited.
When using SJR, it is more important to look at the patterns rather than absolute values. SJR is good for assessing the relative ranks between journals.
Are other publishers allowed to display these metrics?
Yes. SJR and SNIP are freely available outside Scopus (www.journalmetrics.com) and we welcome their use on other websites. This is intended as a basis for the free distribution of and open debate on journal metrics.