Grace Baynes, Nature Publishing Group
Grace took the breakout group through the complex issue of value: what does it mean and how do we measure it?
If we define value as relative worth, utility or importance, we have a dizzying number of things we can measure, from absolute to relative, quantitative to qualitative. Advances in technology mean we can measure more than ever before, but the ability to measure every single thing is not necessarily an advantage.
Using the Gartner Hype Cycle's stages of enthusiasm and disappointment triggered by new technologies, we can see that many of these new metrics are heading toward the Plateau of Productivity, though some still have a way to go.
How do we pick through all the available information meaningfully? Her advice is straightforward: think about what you really need to know, the questions that you need to ask, and then think about how exactly you want to go about discovering the answers.
The underlying units of value measurement are still the journal, the article, the grant, the research output and the researcher, alongside metrics such as the Eigenfactor and Article Influence Score. But how reliable are all these metrics?
Usage continues to help us understand the uptake of holdings decisions, and initiatives like COUNTER help benchmark and break down the information so that librarians can compare use across multiple publishers for their institutions.
The Journal Usage Factor (JUF) allows the evaluation of journals and fields not covered by ISI, and permits the measurement of journals with high undergraduate or practitioner use, although Grace noted that if you compare JUF with the Impact Factor, you see very little correlation.
Cost Per Download should help us understand what is good and what is less useful, but is there an absolute value to a download? Is the download you cited more valuable than the one you just read? Recent research carried out by Nature Publishing Group shows that Cost per Local Citation might move us closer to evaluating real research impact, as might Cost per Local Authorship.
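These per-unit cost metrics all share the same shape: an annual subscription cost divided by a usage count. A minimal sketch of how they might be computed (all figures and field names below are hypothetical; real calculations would draw on COUNTER usage reports and local citation and authorship data):

```python
# Sketch of the per-unit cost metrics discussed above.
# All figures are hypothetical illustrations, not real data.

def cost_per(annual_cost, count):
    """Annual subscription cost divided by a usage count."""
    return annual_cost / count if count else float("inf")

journal = {
    "annual_cost": 5000.0,    # subscription price (hypothetical)
    "downloads": 2500,        # COUNTER full-text downloads
    "local_citations": 40,    # citations by the institution's own authors
    "local_authorships": 8,   # local papers published in the journal
}

cpd = cost_per(journal["annual_cost"], journal["downloads"])
cplc = cost_per(journal["annual_cost"], journal["local_citations"])
cpla = cost_per(journal["annual_cost"], journal["local_authorships"])

print(f"Cost per download:         {cpd:.2f}")   # 2.00
print(f"Cost per local citation:   {cplc:.2f}")  # 125.00
print(f"Cost per local authorship: {cpla:.2f}")  # 625.00
```

The same subscription can look cheap by one measure and expensive by another, which is exactly why no single ratio settles the question of value.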
And what about the Eigenfactor, Peer Evaluation or altmetrics including tweets, likes and shares?
It is a bewildering task to try to measure all this data, and while products from Symplectic or Elsevier's SciVal can help gather and analyze critical information, we have to think about which factors matter most for decision making.
Grace then opened the floor to consider which metrics are important for the participants:
- The information needed depends on who you are talking to: what is most meaningful to that audience, or what will help academics keep the resources they need.
- CPD is still important to feed back to the finance department, and some institutions compare CPD against the cost of document supply or interlibrary loan (ILL) to get an idea of value for money.
- Some institutions don't go into great detail, gathering data at the bundle or platform level rather than for individual titles. This is usually done to convince funders that the content being bought is useful.
- Others, because of funding restrictions, have to go into detail to identify the best titles for their institution.
- CPD isn’t always accurate, as cheap journals aren’t necessarily good value for money, even if on the surface they look good.
- Usage stats are helpful at a local level when deciding to buy or cancel, but download levels and cost per download vary from discipline to discipline.
- Local citations and actual use may be more helpful to understanding value, but this is very time consuming.
- There's a big call for access to denial data to understand patron demand, but until recently one had to ask publishers for it, which is difficult without an existing relationship with the publisher. The next release of COUNTER will include denials.
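Several of the points above reduce to a simple comparison: is the per-download cost of a subscription below the cost of supplying the same articles individually via ILL? A minimal sketch of that check, with the caveat from the discussion that such a ratio can mislead on its own (all figures are hypothetical):

```python
# Hypothetical subscribe-vs-cancel check comparing total subscription
# cost with the cost of sourcing each download via interlibrary loan.

ILL_COST_PER_ARTICLE = 12.0  # assumed average document-supply fee

def keep_subscription(annual_cost, downloads, ill_cost=ILL_COST_PER_ARTICLE):
    """True if subscribing is cheaper than supplying every download via ILL."""
    return annual_cost < downloads * ill_cost

print(keep_subscription(5000.0, 2500))  # True: CPD of 2.00 beats ILL at 12.00
print(keep_subscription(5000.0, 300))   # False: CPD near 16.67 exceeds ILL cost
```

As the participants noted, a check like this is only a starting point: a cheap journal is not necessarily good value, and download counts mean different things in different disciplines.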
Grace ended this highly interactive session with a caveat: we cannot quote metrics in isolation; we need to contextualize them. We must present metrics responsibly.