The following questions were posed during the Library Connect webinar, "Research impact metrics for librarians: calculation and context." The webinar presenters have shared their thoughts below.


Two “tricks” that I’ve seen mentioned to increase article impact and citations are writing articles in collaboration and open-access publishing. Do these methods improve visibility?

Yes, papers with multiple authors are generally thought to be more highly cited. Each author might self-cite, and together they will have a broad network of contacts who might cite the paper. A nice study of this can be found on PubMed: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3296690/

With open-access papers, more people can read them, so it stands to reason that there is a better likelihood of their being highly cited. Many published studies have investigated this effect; see this list for more information: http://sparceurope.org/oaca_list/

There are many such “tricks,” and I think that on balance the two that you mention are worthwhile because of the wider benefits, beyond citation accrual.


What is the difference between the h-index and g-index?

For more information on both the h-index and g-index see the following by Jenny Delasalle:


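In brief (a standard summary, not drawn from the resources above): the h-index is the largest number h such that h of a researcher’s papers have received at least h citations each, while the g-index is the largest number g such that the g most-cited papers have received at least g² citations between them. The following minimal Python sketch, using invented citation counts, shows how the two are calculated:

    # Minimal sketch of the two indices, using invented citation counts (one per paper).
    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

    def g_index(citations):
        """Largest g such that the g most-cited papers together have at least g*g citations."""
        ranked = sorted(citations, reverse=True)
        running_total, g = 0, 0
        for rank, cites in enumerate(ranked, start=1):
            running_total += cites
            if running_total >= rank * rank:
                g = rank
        return g

    papers = [25, 8, 5, 3, 3, 2, 1, 0]   # hypothetical citation counts
    print(h_index(papers))  # 3 (three papers have at least 3 citations each)
    print(g_index(papers))  # 6 (the top 6 papers have 25+8+5+3+3+2 = 46 >= 36 citations)

For the same publication list the g-index is never lower than the h-index, because it lets one heavily cited paper "carry" several lightly cited ones.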


What would an average h-index score be in chemistry, as compared to politics or nursing? 

I would expect that the average h-index of the chemist would be higher than that of the politics researcher, with the nursing researcher perhaps somewhere in between, but I couldn’t tell you numbers. There’s the question of citation practices, and of the numbers of journals and articles in the data set for each discipline. It’s not easy to calculate such an average. You can do a literature search to find out what others have calculated and published, but even if you do, you won’t get the most up-to-date picture. And why would you want an average for a whole discipline anyway? A professor will have a much higher score than an early-career researcher, even within a high-citing discipline, and it seems unfair to talk about averages. I think the best you can hope for is to look at the profiles of a number of individuals of a similar career stage and discipline to the researcher you hope to benchmark.


Does a researcher need at least four documents to have an h-index?

No. A researcher with 0 citations has no h-index, but a researcher with even one citation to a single document already has an h-index of 1.


Could you explain how Google Scholar calculates h-index?

The calculation for h-index is the same regardless of where it’s found. However, the h-index could vary depending on the data source. For example, if one database indexes and draws data from a larger pool of journals, it may have more citations to include in the calculation.
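As a hedged illustration of that point (the citation counts below are invented, not figures from any real database), the same h-index rule applied to the same five papers gives different values when the underlying citation data differ:

    # Same five papers, as counted by two hypothetical data sources with different coverage.
    def h_index(citations):
        ranked = sorted(citations, reverse=True)
        return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

    narrower_source = [12, 9, 7, 3, 1]    # citations found by a source indexing fewer journals
    broader_source = [18, 14, 9, 6, 2]    # the same papers in a source indexing more material

    print(h_index(narrower_source))  # 3
    print(h_index(broader_source))   # 4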


What metrics can be used to measure open access journals that are not indexed or listed in mainstream resources?

If you’re evaluating journals, have a particular purpose in mind. If you’re selecting items to either add to your library collection or weed out from it, for example, you might look at different features than an author who is choosing where to publish, or a researcher who is choosing which articles to read.

Authors should look first at the subject match, and whether the type, scale and significance of their research project fits with the journal’s usual published content. Authors should read some of the articles featured in the journal of interest, and perhaps also talk to experienced authors.



For the study using MRI scans of subjects shown a prestigious journal containing their article, was there a difference between men and women?


Aside from mentioning that nine of the 18 participants were female, the study offered no sub-analysis by gender.


Is there any correlation between all four of the journal metrics?

We could point to a large number of studies that have looked at the differences between the metrics, but only one, an article written in Spanish, has compared all four of them. In any case, Table 2 of that article shows the correlations between the scores, and they are typically reasonably high. (But of course, it’s in the scores that do not correlate where things get interesting!)


Benchmarking is not easy in general, and it is more difficult for cost per use or download. Do you have any resources that could help? 

You would want to ensure your downloads are counted in the same way. Publishers will supply that information, but to compare the outputs of one publisher against another, I think you would have to do it yourself. Also, you can’t really compare one journal with another: a highly practice-oriented medical journal will give you a high download count, while a journal with a different focus may be used in a different way. You can’t even compare one download with another. What about the researcher who was so inspired by what they read in an article that they went on to successfully bid for a project to research that subject further, as compared to the download that is still sitting in someone’s digital library waiting to be read? So I would be wary of using cost per download on its own.

I can really only recommend the COUNTER website for further reading: http://www.projectcounter.org/
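As a rough sketch of the arithmetic only (the prices and COUNTER-style download totals below are invented for illustration), cost per download is the annual cost divided by the annual downloads, which is exactly why the comparability caveats above matter:

    # Hypothetical journals: (annual subscription cost in USD, annual full-text downloads).
    journals = {
        "Practice-oriented medical journal": (5000, 20000),
        "Journal with a different focus": (5000, 800),
    }

    for name, (cost, downloads) in journals.items():
        print(f"{name}: ${cost / downloads:.2f} per download")

    # Prints $0.25 and $6.25 per download respectively. The figures are only
    # meaningful if the downloads were counted in the same (COUNTER-compliant) way,
    # and they still say nothing about how each download was actually used.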


There's an increasing emphasis on interdisciplinary research. How could you benchmark impact in circumstances where a paper might be published in a journal that does not appear in the usual category for that discipline?

Benchmarking the citation impact of an individual article or an entire journal can be done in a variety of ways, but in essence it comes down to comparing the observed count of citations to a derived ‘expected’ count of citations for a document or documents of the same age, document type (article or review, for example) and subject field or topic. The latter needs the most care, as the field can be defined by top-down journal classification into subject groupings, or at the document level by (for example) analyzing the spread of cited references the document itself includes. For the measurement of truly interdisciplinary research outputs, where the journal classification may not (yet) sufficiently capture emerging or novel disciplinary intersections, metrics that use document-level benchmarking are often more appropriate. Since the four metrics we looked at today are defined at journal level and are not article-level metrics, they do not use document-level benchmarking.
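A minimal sketch of that observed-versus-expected comparison, using an invented benchmark set and placeholder field labels (this illustrates the principle only, not any particular product’s implementation):

    from statistics import mean

    # Invented benchmark pool: (year, document type, field, citations) per publication.
    benchmark_pool = [
        (2014, "article", "bioinformatics", cites)
        for cites in [4, 7, 9, 12, 3, 6, 8, 10, 5, 6]
    ]

    def expected_citations(year, doc_type, field, pool):
        """Average citations of publications matched on year, document type and field."""
        similar = [c for (y, t, f, c) in pool if (y, t, f) == (year, doc_type, field)]
        return mean(similar)

    # An interdisciplinary 2014 article classified at document level as bioinformatics,
    # with 14 observed citations.
    observed = 14
    expected = expected_citations(2014, "article", "bioinformatics", benchmark_pool)
    print(round(observed / expected, 2))  # 2.0 -> cited twice as often as expected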


How does SciVal's Field-Weighted Citation Impact relate to SNIP or SJR, and which of these journal-level citation metrics are comparable across disciplines?

SciVal’s Field-Weighted Citation Impact (FWCI) indicator is a Snowball Metric and is defined as follows in the Snowball Metrics Recipe Book:

Field-Weighted Citation Impact is the ratio of the citations actually received by the denominator’s output, and the average number of citations received by all other similar publications. A Field-Weighted Citation Impact of:

  • Exactly 1.00 means that the output performs just as expected for the global average.
  • More than 1.00 means that the output is more cited than expected according to the global average; for example, 1.48 means 48% more cited than expected.
  • Less than 1.00 means that the output is cited less than expected according to the global average; for example, 0.91 means 9% less cited than expected.

Field-Weighted Citation Impact takes into account the differences in research behavior across disciplines.

FWCI is similar to SNIP and SJR insofar as its implementation in SciVal uses the same data source (i.e., Scopus) as is used for the calculation of SNIP and SJR. It also inherently accounts for field-dependent citation differences. All three metrics can be used to compare journals across different disciplines.
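To make the ratio and its interpretation concrete (the citation figures below are invented; only the interpretation thresholds come from the definition above):

    def fwci(actual_citations, average_for_similar_publications):
        """Ratio of citations actually received to the average for similar publications."""
        return actual_citations / average_for_similar_publications

    for actual, average in [(37, 25), (25, 25), (91, 100)]:
        value = fwci(actual, average)
        print(f"FWCI = {value:.2f} -> {(value - 1) * 100:+.0f}% versus the global average")

    # Prints:
    #   FWCI = 1.48 -> +48% versus the global average
    #   FWCI = 1.00 -> +0% versus the global average
    #   FWCI = 0.91 -> -9% versus the global average

Values above 1.00 indicate the output is more cited than expected, and values below 1.00 indicate it is less cited than expected, exactly as in the list above.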


What do grant funding agencies consider the best proxy for impact?  Which factors, if any, do agencies like NIH and NSF consider important?

I don’t think that we can answer for the NIH or NSF, or indeed for other grant funding agencies. Peer review panels assessing grant applications for funders might be aware of a researcher’s impressive publication and/or citation history, but I think that they would focus more on the quality of the research proposal itself.

I think that a read-through of the literature produced by many funding agencies will show that they are actually interested in research impact, which goes beyond citation measurement. I believe that they are more interested in the context, which can sometimes be gleaned from (alt)metrics and analytics tools.


What is DORA?

The San Francisco Declaration on Research Assessment (DORA), initiated by the American Society for Cell Biology (ASCB) together with a group of editors and publishers of scholarly journals, recognizes the need to improve the ways in which the outputs of scientific research are evaluated. The group met in December 2012 during the ASCB annual meeting in San Francisco and subsequently circulated a draft declaration among various stakeholders. It is a worldwide initiative covering all scholarly disciplines. See: http://www.ascb.org/dora/