Helen Adey, Nottingham Trent University
Helen gave us an interesting insight into Nottingham Trent University's different approach to its recent serials review.
Background
Typically, libraries perform serials reviews because of money. Nottingham Trent rarely gets the full inflation allowance each year – e.g. the university might give 2%, but publishers might raise prices by 4% – so they are forced to make cuts.
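To make that arithmetic concrete, here is a minimal sketch (using illustrative figures, not NTU's actual budget) of how a 2% allowance against 4% price rises compounds into a growing shortfall:

```python
# Illustrative only: a 2% budget uplift against 4% price inflation
# compounds into a shortfall that forces cancellations.
budget, prices = 100_000.0, 100_000.0
for year in range(1, 6):
    budget *= 1.02   # inflation allowance granted by the university
    prices *= 1.04   # price increases imposed by publishers
    shortfall = prices - budget
    print(f"Year {year}: budget £{budget:,.0f}, cost £{prices:,.0f}, "
          f"shortfall £{shortfall:,.0f} ({shortfall / prices:.1%})")
```

After five years the gap is roughly 9% of the bill, which is why repeated rounds of cuts start to feel like cutting into bone.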
In the past, they gave subscription lists to their academics to identify titles that were no longer relevant; if academics wanted a new title, they had to cancel something else of equivalent cost.
This was not successful, as faculty didn't engage with the approach; additionally, if you have taken this approach for years, it can feel like you have already cut all the optional material and are down to the bare bones.
The book budget has also been used to support serials, but this is not sustainable, as journal prices outstrip RPI inflation every year.
Researching methodologies
They launched a survey last May asking how different libraries reviewed their holdings, and received 97 responses from 12 countries. Most were in Higher Education, but they also got responses from corporate bodies and specialist libraries. Responses covered the following topics:
Frequency of serials review:
- annually (64%)
- when required (18%)
- 2-3 years (17%)
- 4-5 years (1%)
What approach do you adopt:
- in depth on all subjects (47%)
- selective review by subject (35%)
- other (17%)
What factors or data sources are used in the review:
- change in subscription cost
- usage data
- qualitative feedback from faculty
- qualitative feedback from students
- librarian discretion and expertise
- changes in research activity
Who gets to vote on serials selection:
- no voting (65%)
- academic staff (14%)
- researchers (8%)
- students (2%)
Common themes across all respondents included budget-driven decision making, with cost per use (CPU) and prices being a main consideration, and usage statistics being the main driver for cancellations.
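Cost per use is simply the annual subscription cost divided by recorded usage. A minimal sketch of how it might flag cancellation candidates – all titles, figures and the threshold are invented for illustration, not NTU data:

```python
# Hypothetical sketch: rank titles by cost per use (CPU) to flag
# cancellation candidates. All figures are invented for illustration.
subscriptions = {
    # title: (annual cost in £, downloads per year)
    "Journal A": (1200, 600),
    "Journal B": (900, 30),
    "Journal C": (450, 15),
}
CPU_THRESHOLD = 20.0  # £ per download above which a title gets reviewed

# Sort descending by CPU so the weakest value-for-money titles come first.
for title, (cost, uses) in sorted(
        subscriptions.items(),
        key=lambda kv: kv[1][0] / max(kv[1][1], 1),
        reverse=True):
    cpu = cost / max(uses, 1)  # guard against zero recorded usage
    flag = "REVIEW" if cpu > CPU_THRESHOLD else "keep"
    print(f"{title}: £{cpu:.2f} per use -> {flag}")
```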
They also identified a variety of methodologies for serials review:
- annual review
- subscriptions committee
- discussion amongst library staff
- discussion with faculties
- annual review by academics
- faculty ranking journals, using a 100-points allocation and sorting system (sketched below)
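The 100-points system wasn't described in detail; as a rough sketch, under the assumption that each academic spreads 100 points across the titles they value and the library sums and sorts, it might look like this (ballots are invented):

```python
# Hypothetical sketch of a 100-points ranking: each academic spreads
# 100 points across the journals they value; the library sums and sorts.
from collections import Counter

ballots = [  # invented example allocations, each summing to 100
    {"Journal A": 50, "Journal B": 30, "Journal C": 20},
    {"Journal A": 10, "Journal C": 90},
    {"Journal B": 60, "Journal D": 40},
]

totals = Counter()
for ballot in ballots:
    assert sum(ballot.values()) == 100, "each ballot must allocate 100 points"
    totals.update(ballot)  # add this academic's points to the running totals

# Sort descending by total points to produce the ranked list.
for rank, (title, points) in enumerate(totals.most_common(), start=1):
    print(f"{rank}. {title}: {points} points")
```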
A new methodology
So, with this in mind, they decided to try a new methodology: zero-based budgeting. Rather than starting from what you had last year and trying for more, you start completely from scratch and bid for the money: forget what you have now and tell us what you really want.
They performed a pilot across three schools, starting with a survey to find out which journals people used daily, weekly and monthly; which seven journals they would take to a desert island; and, if a storm washed all the journals away, which one they would save.
The level of engagement varied from school to school. The School of Science chose not to engage, and the School of Art and Design had already started thinking about their serials. This meant they were only able to fully trial the methodology with the School of Social Sciences.
They asked the School what they used for their research and what they recommended to their students, and gave them nothing to influence their thinking – no statistics, no lists, no prices.
Results
Art and Design had already done some voting, so the library attended their School Day, bringing along sample copies of journals (both existing titles that had not been voted for and new requests that had not previously been in the library) and coloured stickers for the School to use for voting. The data was then collated and the titles ranked. They found commonalities, and a definite correlation between usage statistics and cancelled titles. In the end they cancelled 6 titles and took on 22 cheaper titles instead.
In the School of Social Sciences there was a massive amount of voting, with wide disparity between votes. The library has identified some possible cancellations based on usage, and some new additions based on priority listings. They are hoping the new subscriptions will be cheaper than the cancellations, and at the moment they are confident they can fund the top two priority levels.
The liaison librarians have prioritised subscriptions based on number
of votes and occasionally on costs, and are trying to ensure a balance across
the different needs of the School.
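A hedged sketch of that kind of prioritisation: rank requested titles by votes (with cost as the tie-breaker) and accept them greedily until the money freed by cancellations runs out. All titles, votes and figures below are invented:

```python
# Hypothetical sketch: fund new requests in vote order until the budget
# freed by cancellations is exhausted. All figures are invented.
freed_budget = 5_000.0  # total annual cost of the cancelled titles

requests = [  # (title, votes, annual cost in £)
    ("New Journal X", 24, 1_800),
    ("New Journal Y", 19, 2_500),
    ("New Journal Z", 11, 1_200),
]

# Highest-voted first; cheaper titles win ties.
for title, votes, cost in sorted(requests, key=lambda r: (-r[1], r[2])):
    if cost <= freed_budget:
        freed_budget -= cost
        print(f"Subscribe: {title} ({votes} votes, £{cost:,})")
    else:
        print(f"Defer: {title} (would exceed remaining £{freed_budget:,.0f})")
```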
Evaluation of the approaches
The pros and cons of the traditional review
Pros included: it is a quick process that can fit with the subscription year and renewal timings, and with finding that mythical time when you can get academics' attention and input; and it is much less work than the blank page approach, though voting is restricted to current subscriptions.
They felt that the cons outweighed the plus points: the methodology feels synonymous with "cuts" in academics' minds; it requires academic input; and it can be challenging for a new researcher, who might have less influence than an established academic, to get their preferences considered.
Traditionally, the library asked academics what could be cancelled, rather than the process being driven by usage statistics.
The impact on the collection was minimal, with stable subscription profiles. Is that good, or is it static and moribund? Given that there are new journals and new research areas, you would expect more fluctuation, so the stability perhaps reflects low engagement.
The pros and cons of the blank page approach
Pros: a more holistic view of what is required; usage statistics embedded as part of the process, which added reassurance; a fit-for-purpose collection that meets needs rather than a historic profile; and very positive faculty feedback, as academics enjoyed being part of the process rather than it being a paper exercise.
Cons: mixed levels of engagement; sometimes low response levels (what response level do you need before you can act? If only half a school bothered to respond, the results would be skewed); a poor fit with the library's subscription year (the School responded in June last year, but the library is still waiting for sign-off, so the process will have taken almost 18 months); and it is a huge piece of work.
They are not happy with the traditional method, and the jury is still out on the blank page method – is it sustainable, or is there a better evidence-based way of doing this?
What about other ways of finding out what users want?
The library uses Talis Aspire to produce a report on all journals and articles on Resource Lists, which answers what academics are recommending to students.
ILL data can be categorised by school.
The library can capture requests to digitise content and put it on the VLE.
Turnaway data from publishers should count towards evidence of what users want.
What evidence is there for what people don't want?
Low usage stats can be evidence – they have started looking at cost per use (CPU) versus ILL costs (see the sketch after this list).
There have been reports of e-access being lost with nobody noticing for months – how would you collate and use that data?
In-house knowledge of subject teams.
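The CPU-versus-ILL comparison mentioned above boils down to: if filling a title's recorded demand through interlibrary loan would cost less than the subscription, cancellation may be the cheaper option. A minimal sketch, assuming an invented average ILL fee and invented titles:

```python
# Hypothetical sketch: compare keeping a subscription against filling the
# same demand via interlibrary loan. The ILL fee is an invented figure.
ILL_COST_PER_REQUEST = 12.0  # assumed average £ cost of one ILL request

def cheaper_to_cancel(annual_cost: float, annual_uses: int) -> bool:
    """True if paying per ILL request would cost less than the subscription."""
    return annual_uses * ILL_COST_PER_REQUEST < annual_cost

for title, cost, uses in [("Journal A", 1200, 600),
                          ("Journal B", 900, 30),
                          ("Journal C", 450, 15)]:
    verdict = "cancel and use ILL" if cheaper_to_cancel(cost, uses) else "keep"
    print(f"{title}: subscription £{cost}, ILL estimate "
          f"£{uses * ILL_COST_PER_REQUEST:.0f} -> {verdict}")
```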
Conclusions and learning points
- Don’t underestimate:
- how important it is to engage academics to tell them what you are trying to achieve, why they need to engage. Don’t assume.
- the workload pre and post review, and all the analysis
- the unpredictable nature of voting patterns
- the likelihood of top wish list items being something they already have
- Don’t make survey too complex and don’t ask too many questions as this can lead to unfinished surveys.
- Be aware that the journal they really need might not be in the list that they’ve voted for.
- Some academics might deliberately or unknowingly misunderstand questions.
- Think about metrics: does the frequency of journal use bear any relation to the journal's importance to the academic?
- Consider other approaches to gathering information (a slot on courses, meetings, etc.).
- Make sure that, having engaged the academics, you feed back on actions taken and outcomes.
- Use this approach with caution if you have to cut journals: you need to be confident of a "good news" outcome at the end, or some sort of contingency plan, so you can follow through and not disappoint people.
Future activity
The jury is still out on whether this approach is the way to go, so another pilot would be an idea, perhaps combining survey and face-to-face methods.
As the blank page approach involves a great deal of work, they are considering a rolling cycle of blank page reviews, covering different departments in different years, with departments getting equal-value in-and-out reviews in between.
They are also thinking about trying the evidence-based metrics approach (ILL data, resource lists and usage statistics).