Tuesday 10 April 2012

Role of the library in research evaluation

Jenny Delasalle, University of Warwick

This afternoon breakout session focused on how librarians can use their expertise and knowledge to support their institutions in research evaluation.

Jenny set the scene by outlining the Research Excellence Framework (REF) and explaining how REF 2014 ratings will range from 4* (world-leading) down to unclassified (U), and then moved on to discuss how institutions assess the impact of their research and the different metrics they employ.

Measuring the impact
There are many different ways an institution can measure the impact of research, including:
  • Bibliometrics
  • Outputs that can be counted, plus citations and calculations based on them
  • Involvement in peer review
  • Journal editorships
  • Research grant applications and research income
  • Prestigious awards
  • PhD supervision load of staff
Citations remain core to output measurement, but Jenny noted there are many different motivations behind a citation: paying homage to experts or to those likely to be peer reviewers; lending weight to one's own claims; giving credit to peers whose work you have built on; providing background reading; criticising or correcting previous work; signposting under-noticed work; or simply self-citation.

What else can we measure?
Jenny then covered a huge variety of output-based measures, across bibliometrics, webometrics and altmetrics, that can be used in addition to the traditional paper counts, Impact Factors and citations (a short sketch of how a few of these are calculated follows the list):
  • H-index, the largest number h such that h of a researcher's papers have at least h citations each
  • M-index, equal to h/n, where n is the number of years since the researcher's first published paper
  • C-index, measuring the quality of citations
  • G-index, which gives more weight to highly cited articles
  • H-1 index, showing how far away a researcher is from gaining one more point on the H-index
  • E-index, looking at surplus citations in the h set
  • Contemporary H-index, which weights recent activity more heavily
  • Google's i10-index, showing the number of papers with at least 10 citations
  • Number of visitors
  • Number of blog entries, likes, tweets, etc.
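
Several of these indices reduce to simple arithmetic over a list of per-paper citation counts. As a minimal sketch (my illustration, not anything presented in the session), here is how the H-index, G-index, i10-index and M-index could be computed in Python:

    def h_index(citations):
        # Largest h such that h papers have at least h citations each.
        ranked = sorted(citations, reverse=True)
        return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

    def g_index(citations):
        # Largest g such that the top g papers together have at least g*g citations.
        ranked = sorted(citations, reverse=True)
        total, g = 0, 0
        for rank, cites in enumerate(ranked, start=1):
            total += cites
            if total >= rank * rank:
                g = rank
        return g

    def i10_index(citations):
        # Google Scholar's i10: the number of papers with at least 10 citations.
        return sum(1 for cites in citations if cites >= 10)

    def m_index(citations, years_since_first_paper):
        # The m-quotient: h divided by the number of years since the first paper.
        return h_index(citations) / years_since_first_paper

    counts = [31, 12, 10, 7, 3, 1, 0]   # hypothetical per-paper citation counts
    print(h_index(counts))              # 4
    print(g_index(counts))              # 7
    print(i10_index(counts))            # 3
    print(m_index(counts, 8))           # 0.5

The arithmetic is trivial; the interpretation is not. Each index trades productivity against citation depth in a slightly different way, which is exactly where expert advice is needed.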
A great deal of this information is available through library citation sources and data repositories managed by librarians, so librarians are ideally placed to advise researchers on the different metrics and to be the expert advisor for all bibliometrics.

Why measure outputs?

These measures can be very valuable to researchers as well. Keeping a record of what a researcher has published is useful for his or her CV and for webpages that describe the work, as well as for providing information to institutional data-gathering exercises.

Keeping an eye on who is citing their work will help researchers identify future collaborators, maintain awareness of other research in their field and see which articles are influencing their research profile the most.
 
The data can also help researchers tell a story that sells their work (a small worked example follows this list):
  • List articles published, with the citation count for each, compared with the average citations per paper for articles over two years old.
  • Establish whether this is high for their discipline.
  • Compare each article's citation count with the journal average for that year.
  • List any outstanding individuals who have cited their work.
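
As a rough illustration of that comparison (with made-up numbers rather than data from the session), a researcher might tabulate:

    # Hypothetical records: (title, year published, citations, journal average for that year)
    articles = [
        ("Paper A", 2008, 25, 9.3),
        ("Paper B", 2009, 4, 7.1),
        ("Paper C", 2011, 1, 2.0),
    ]
    CURRENT_YEAR = 2012

    for title, year, cites, journal_avg in articles:
        if CURRENT_YEAR - year < 2:
            # Papers under two years old have had little time to accrue citations.
            print(f"{title}: {cites} citations (too recent to compare fairly)")
            continue
        verdict = "above" if cites > journal_avg else "below"
        print(f"{title}: {cites} citations, {verdict} the journal average of {journal_avg}")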

Other valuable metrics for the institution could be the number of articles with no citations, or the number of joint articles (particularly good for identifying levels of collegiality and interdisciplinarity).
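
Both counts are straightforward to extract from the same citation data; a minimal sketch, again over hypothetical records:

    papers = [
        {"citations": 25, "authors": 3},   # hypothetical records
        {"citations": 0,  "authors": 1},
        {"citations": 1,  "authors": 5},
    ]
    uncited = sum(1 for p in papers if p["citations"] == 0)
    joint = sum(1 for p in papers if p["authors"] > 1)
    print(f"Uncited: {uncited}; jointly authored: {joint}")   # Uncited: 1; jointly authored: 2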

Altmetrics and webometrics
Jenny gave two examples of publicly visible data for articles. The PLoS website shows views, downloads, citations, bookmarks, likes and tweets, giving authors more context and detail about their articles. The Warwick Research Archive Portal (WRAP) similarly has publicly visible data in its repository for every article.

Advice on gaining visitors
Jenny also shared some recommendations on getting more people to view researchers' work. Having more visitors to a paper will "boost your Google juice", so authors should put links to their papers everywhere they can, including Academia.edu, as well as getting someone to cite the paper (even in draft form), as Google Scholar will pick it up.

The discussion was opened to the floor, with contributions being made around the following themes:
  • Not all libraries are involved in research evaluation and, of those that are, some have started doing so on their own initiative whereas others have been given a remit to do so.
  • As information professionals, librarians should be involved, given their excellent understanding of scholarly publishing and of their institutions. The library is a linchpin of an institution, and academics look to librarians for expertise and understanding.
  • The importance of checking that Web of Science and Scopus links are correctly linking to researchers’ papers was noted, as well as working with the researchers themselves - they will recognize if their top paper is missing from their list.  
  • Some pan-industry solutions for this include Project ORCID and the Names project, but none have come to fruition yet.
  • Queries from researchers are extremely varied: researchers want help on deciding where to publish, how to get lists of Impact Factors, how to work out their H-indexes and where to get ideas for collaboration options.
  • Jenny said that when she is approached by researchers for advice on collaboration options, she offers only a source that researchers can use to find options themselves, rather than providing actual selections.
  • It was felt that there was sometimes a lack of clarity over the Shanghai Rankings, and the point was made that universities should make clear to their faculties their expectations regarding rankings and outputs.
