Tuesday, 10 April 2012

Role of the library in research evaluation

Jenny Delasalle, University of Warwick

This afternoon's breakout session focused on what librarians can do with their expertise and knowledge to support their institutions in research evaluation.

Jenny set the scene by outlining the Research Excellence Framework (REF) and explaining how REF 2014 ratings will range from 4* (outstanding) to U (unclassified), and then moved on to discuss how institutions look for the impact of their research and the different metrics they employ.

Measuring the impact
There are many different ways an institution can measure the impact of research, including:
  • Bibliometrics
  • Outputs that can be counted, plus citations and calculations based on them
  • Involvement in peer review
  • Journal editorships
  • Research grant applications and research income
  • Prestigious awards
  • PhD supervision load of staff
Citations remain core to output measurement, but Jenny noted there are many different motivations behind a citation: paying homage to experts or to those likely to be peer reviewers; lending weight to one's own claim; giving credit to peers whose work you have built on; providing background reading; criticising or correcting previous work; signposting under-noticed work; or simply self-citation.

What else can we measure?
Jenny then covered a huge variety of output-based measures, across bibliometrics, webometrics and altmetrics, that can be used in addition to the traditional paper counts, Impact Factors and citations (a rough sketch of computing some of these follows the list):
  • H-index, calculated from the number of publications and the number of citations per output
  • M-index = h/n, where n is the number of years since the researcher's first published paper
  • C-index, measuring the quality of citations
  • G-index, which gives more weight to highly cited articles
  • H-1 index, showing how far a researcher is from gaining one more point on the H-index
  • E-index, looking at surplus citations in the h set
  • Contemporary H-index, which emphasises recent activity
  • Google's i10-index, the number of papers with at least 10 citations
  • Number of visitors
  • Number of blog entries, likes, tweets etc.
A great deal of this information is available through library citation sources and data repositories managed by librarians, so librarians are ideally placed to advise researchers on the different metrics and to act as the institution's expert advisors on bibliometrics.

Why measure outputs?

These measures can be very valuable to researchers as well. Keeping a record of what a researcher has published is useful for his or her CV and for webpages that describe the work, as well as for providing information to institutional data-gathering exercises.

Keeping an eye on who is citing the researcher's work will help him or her identify future collaborators, maintain awareness of other research in the field and know which articles are influencing their research profile the most.
 
The data can also help researchers tell a story that sells their work:
  • List articles published, with the citations for each, compared with the average citations per paper over two years old.
  • Establish whether this is high for their discipline.
  • Compare each article's citation count with the journal average for that year.
  • List any outstanding individuals who have cited their work.

Other valuable metrics for the institution could be the number of articles with no citations, or the number of joint articles (particularly good for identifying levels of collegiality and interdisciplinarity).

Altmetrics and webometrics
Jenny gave two examples of publicly visible data for articles. The PLoS website shows views, downloads, citations, bookmarks, likes and tweets, giving authors more context and detail about their articles. The Warwick Research Archive Portal (WRAP) similarly makes data publicly visible in the repository for every article.

Advice on gaining visitors
Jenny also shared some recommendations on getting more people to view researchers' work. Having more visitors to your paper will "boost your Google juice", so authors should put links to their papers everywhere they can, including Academia.edu, as well as getting someone to cite the paper (even in draft) so that Google Scholar will pick it up.

The discussion was opened to the floor, with contributions being made around the following themes:
  • Not all libraries are involved in researcher evaluation and, of those that are, some have started doing so on their own initiative whereas others are given a remit to do so.
  • As information professionals, librarians should be involved, having an excellent understanding of scholarly publishing and institutions. The library is the linchpin of an institution, and academics look to librarians for expertise and understanding.
  • The importance of checking that Web of Science and Scopus links are correctly linking to researchers’ papers was noted, as well as working with the researchers themselves - they will recognize if their top paper is missing from their list.  
  • Pan-industry solutions for this include the ORCID project and the Names Project, but none has come to fruition yet.
  • Queries from researchers are extremely varied: researchers want help deciding where to publish, getting lists of Impact Factors, working out their H-indexes and finding ideas for collaboration options.
  • Jenny said that when she is approached by researchers for advice on collaboration options, she offers only a source that researchers can use to find options themselves, rather than providing actual selections.
  • It was felt that there was sometimes a lack of clarity over Shanghai Rankings and the point was made that universities should make clear to their faculties their expectations regarding rankings in terms of output.

Business Models

Ken Chad's (@kenchad) breakout session on Business Models was a really useful plain-English walkthrough of everything any business or organisation should think about when shifting focus or approaching a new challenge. Devoid of jargon and full of references to business and marketing books and articles backing up his points, it was skilfully pitched to appeal to the multiple audiences attending the conference. I came away with some great tips I'll tap into from time to time. Librarians could do the same when making strategic plans to face the future. Slides are here and give a very good taste of the session without overdoing the word count.


A helpful definition Ken used was "a business model describes the rationale of how an organization creates, delivers and captures value." Crucially, for a UKSG audience, he explained that a business model "applies as much to a public sector organisation and not-for-profit, social ventures as to a commercial company. To survive every organization that creates and delivers value must generate enough revenue to cover its expenses, hence it has a business model".


He began by positing that organisations involved in scholarly communication face the challenge of relentless, disruptive, technology-driven change and tough economic times. Scary, but best to face facts head-on! The start of any journey into developing a business model is to be clear on your organisation's mission and strategy. That sounds simple, but getting these right is the cornerstone of the business model and is probably not given enough time or credence. He went on to explain that strategy is not goal setting but is 'a cohesive response to an important challenge and that good strategy includes a set of coherent actions.'


Ken spent a bit of time on “the capabilities approach” and most useful to me was asking the question “what are the three to six capabilities that describe what we do uniquely better than anyone else?” Once determined, ask “can everyone in the organization articulate our differentiating capabilities?” and “is our leadership reinforcing these capabilities?”  To help determine capabilities he suggests we focus on value. “What’s valuable/special about what we do? Why should people use our products/services instead of alternatives?”


Towards the end of the session there was a helpful section on the building blocks of business strategy, which incorporated the following nine areas: Customer Segments; Value Propositions; Channels; Customer Relationships; Revenue Streams; Key Resources; Key Activities; Key Partnerships; Cost Structure. Boiled down, the key elements are about understanding your value propositions, how they seek to solve customer problems and satisfy needs, and how successfully delivering on these value propositions creates the organisation's revenue streams.


I’d recommend taking 5 mins to skim the slides when you are next considering business models or need to focus on any of the building blocks. Slides are text-light and I think some key questions therein could produce useful food for thought and a way in to what can seem a daunting task to those who might consider themselves non-business types.


Finally, he has read a lot of books so you don’t have to. As I tweeted whilst at the session, I would not get time to do that much reading until I retire. The irony!

Monday, 2 April 2012

Identity before Authentication

Breakout Session - The Importance of Global Identity to Education led by Mark Williams

This session examined the many problems of a user's identity, how that identity is established and how the user gains access to the protected resources they were looking for. The key theme was:

You shouldn't need a piece of string to navigate the internet

And yet in many cases the process of getting to a resource can be complicated and time-consuming, and many users get frustrated and give up.

At this point in the session Mark demonstrated the process of getting to an article through Shibboleth from a URL someone had sent to him in an email.

First, click on the URL, select the PDF, and get taken to a pay-per-view screen. Realise that you already have access, so select the tiny institutional login at the bottom of the screen. Skip over the username/password option and finally get to a list of countries; scroll to the bottom of this list, choose UK, and enter your details. The website automatically takes you back to the journal homepage and you have to find the article all over again.

He suggested ways that publishers can improve these processes. For example, how is the real estate on your login screen divided? Is it a large individual login box, as in the example, with a small institutional option off to one side? Test how these choices affect abandonment throughout the process. Make the user journeys as simple as possible for as many users as possible. Remove the need for the string; journal sites should not be a labyrinth.

Publishers need to think about when login should happen as well as where. Get sharing options to provide a one-click-to-article experience. And provide clear logged-out wording as well as dealing with login failures in clear language: "we could not log you in because ..."

Mark then went on to discuss the trend for portals and half-jokingly termed it:
One ring to rule them all

But he stressed that publishers cannot expect all users to be channelled into the same pathways, and that there is a need to offer sign-in at any point during the article or resource discovery process. As a way to start standardising this he referenced the recent ESPReSSO (Establishing Suggested Practices Regarding Single Sign-On) document published by NISO, which provides crib sheets of best practice for publishers to consult during development programs.


ESPReSSO is not just about authentication. It includes discussion around the design of landing pages, discovery pages, protected pages and (institutional) login pages, and recommendations on rewriting OpenURLs, use of SPs, error handling and branding.

Mark then opened the floor to discussion of things we would like from JISC. As a mixed room of publishers, librarians and vendors, this proved to be an interesting and lively discussion. Key recommendations from the attendees spread beyond the remit of JISC but were very thought-provoking; they included:
  • A glossary of key terms and acronyms
  • Error diagnostics - is the system working or is it me?
  • A widget that is the best practice implementation and is downloadable and customisable for publishers and librarians
  • Geo-location on drop-down menu on login form on JISC
  • Subdivisions within institutions


Friday, 30 March 2012

The future of the eTextbook


Breakout Session: Sara Killingworth, Maverick Outsource Services

Sara's session discussed the market situation, development and possible future of eTextbooks. The data focused on faculty and user behaviour.

Market transition from print to e:

eTextbooks are the last ebook category to be opened up. Sara pointed out that while they have been evolving through various mediums, they are still essentially in their original formatting.
  • The market value of eTextbooks in 2008 was $1.5bn and is expected to rise to $4.1bn by 2013.
  • In 2010 there were 19.5 million e-readers sold and 18 million tablets (15 million of which were iPads). This is expected to rise to 150 million e-readers and 100 million tablets in 2013.

Even so, print textbook sales are still growing, and students currently still prefer printed textbooks for their look, feel, permanence and resale value. Second-hand print books are still cheaper, which better suits a student budget. There are also rental options where a licence can be bought on a chapter-by-chapter basis.

Subject area will affect the need for a permanent reference copy. For example, medicine students will want a reference copy of a textbook they can consult all through their studies and as they progress into their careers, whereas engineering students will find electronic textbooks better for gathering the latest data.

Faculties tend to choose relevant content over format, and there is still a lack of titles in e-form. It was commented at this stage that there has been reluctance from faculties to allow students to use tablets rather than print books, as lecturers cannot tell whether a student is working or on Facebook. Despite this, data suggests the market is set to explode.

JISC usage study findings:
  • 65% of users use ebooks to support work/study
  • 50%+ access them through the library
  • Use of eTextbooks is linked to teaching/assessment
  • Flexibility and convenience of ebooks is valued
  • Use is hindered by platform limitations such as printing/downloading and access speeds

Basic Requirements of eTextbooks:
  • Access across all platforms and operating systems
  • Ability to personalise with notations and highlighting
  • Inclusion of self assessment tools
  • Inclusion of support materials from lectures
  • Links to real time data
  • Online tutorials
  • Video/audio to liven text

Development of eTextbooks
The JISC Observatory project showed ebooks are mostly used for quick fact-finding, whereas printed books are preferred for extended reading. This type of usage suggests an expectation of a lower price point for ebooks. The trial found no considerable impact on print textbook sales.

Benefits of eTextbooks:
  • They can ease bottlenecks in libraries when print items are on loan, particularly given the increased usage of mobile devices amongst students.
  • The interactive tools can increase student engagement and learning outcomes, and eTextbooks can be broken up into chapters and added into course packs along with videos, articles and audio appropriate to the subject.
  • The online environment also offers the ability to collect usage statistics and faculties can see whether students are using non-recommended texts.
  • They could address students' use of Wikipedia/Google if developed in line with user behaviours and expectations, with the added benefit that the information comes from professional publishers.
  • Tablets are also beginning to emerge as alternatives to laptops as access devices, as their prices are driven down and they better suit the mobile lifestyles of students.

Apple iBooks
Sara mentioned iBooks, eTextbooks designed specifically for iPads and Apple devices, featuring materials from large publishers such as Pearson and McGraw-Hill. There is also an option to create PDF versions for other devices. Apple is looking at selling preloaded iPads to schools in the US; though there was a general feeling that this was a marketing opportunity to sell iPads, it was thought others would release competing products.

Pearson Foundation Study:
The study showed tablet ownership among college students had trebled in the last three years, with 70% of students reading digital text and 75% using tablets for daily learning. It is believed that eTextbooks will replace print within five years.

The Future?
Sara finished by saying it was an evolutionary process and the speed of adoption was likely to depend on the subject area. Ease of access and use would also feature heavily.
There are different business models and it is still uncertain which will be most popular. These include individual purchase by students, materials-inclusive fees, patron-driven acquisition (PDA) or the entire library budget being absorbed by digital materials.
Sara stated we are most likely going to live in a hybrid world for the foreseeable future.

Some comments from the audience at the end of the session:
  • Librarians are keen to buy eTextbooks for their students, but the institutional packages put forward by publishers are felt to be unrealistic, particularly as they are then restricted by DRM issues.
  • DRM is a big problem, particularly as students will use an ebook to scan chapters/TOC to see if they want to read the whole item and then want to print the bits they are interested in.
  • Students are still reluctant to use purely electronic over print, and not everyone has a tablet yet. Ebooks on smartphones are not ideal.
  • There is a demand for eTextbooks but they are not being delivered.
  • Whilst the individual prices of ebooks may have gone down, the institutional prices are still very high.
  • Librarians will look at smaller publishers who are willing to offer more competitive prices over the larger companies.
  • There is a demand for perpetual access to books.



Thursday, 29 March 2012

“I wouldn’t start from here” Overcoming barriers to accessing online content in libraries


Breakout Session 1: Dave Pattern, University of Huddersfield

This breakout session discussed the issues users have when trying to access electronic resources and why we should be making it as easy as possible to access information.

Dave had used Twitter to ask what one thing people would improve about e-resources if they had a magic wand. Responses were:

  • Authentication
  • Ease of Access
  • Discoverability
  • Affordability
  • No DRM
  • Licensing

Conspiracy Theories:

Before going into these points a little further he discussed some conspiracy theories about libraries:

  • MARC 21: Why is there still the punctuation? Is it so cataloguers can print off perfect cataloguing cards? What are they really up to?

  • Why are librarians trying to turn users into mini-librarians and bombarding them with library terminology? We should be aware that users will take the path of least resistance, the easiest way from point A to point B, for example Wikipedia and Google.
    As an example he discussed helping students and troubleshooting issues they had getting into resources. This showed a user following an almost never-ending chain of links and password logins (some not so obvious) before finally being turned away from the article they wanted to use, then trying Google, searching for the article title and finding the first result to be an open access PDF. Why would users want to go through all those complicated steps when the information they want could be found so much more easily? This led on to the last conspiracy theory:

  • We don’t want our users to be able to access our e-resources!?  There appear to be multiple barriers to gaining access to resources and this all works against Ranganathan’s 4th law of “Save the time of the reader”. Seamless access to resources is possible when everything works as it should so we need to simplify the process as much as possible for the user.

Discoverability Tools:

He then discussed discoverability tools and proxy server authentication and the impact they have had on e-resource usage. At the University of Huddersfield students are directed to Summon as a first port of call, and stats showed that full-text download numbers increased suddenly with the use of a discovery tool.
Data they had gathered also showed that full-text COUNTER statistics shot up after a publisher became indexed in Summon, and that there was a decline in usage for those that were not indexed. There was also a decline in the use of platforms with OpenURL issues.

These statistics can of course have a significant impact when it comes to renewals, so they could be used as ammunition to get publishers to work with discovery services (in this case Summon).

He then discussed serendipity in the library using recommendations like "people who borrowed this item also borrowed…". Adding these recommendations led to wider use of library stock.


Library Impact Data Project:

This project, run in 2011, aimed to prove the value that libraries give to students and to demonstrate a correlation between library usage and academic success or failure.
Usage data was taken from eight UK universities, and a strong correlation was found between good grades and the number of Athens logins, the total number of downloads and the average number of resources accessed.
However, simply coming to a library PC was not necessarily as productive.

A study by Manchester Metropolitan University shows there is a possibility that students who use the VLE late at night are more likely to be struggling and to drop out. It also appears that students who use the library between 9am and 11am are most likely to be the highest achievers.


In Summary:

  • Save the time of the user
  • Make accessing e-resources as easy as searching Google
  • Information literacy is important but goes against the path of least resistance
  • E-resource usage is linked to attainment
  • Publishers need to make content available to discovery services
  • Build e-resources with serendipity

"I'd like to thank" linklist

I wanted to post a big public thank you to our blogging team, who have managed to capture so much of the conference, so quickly, despite the challenges of spotty wifi, lack of sleep and the temptation to get offline and into the sunshine. We've had over 500 visitors on this blog in the last few days, and no doubt many more reading via RSS. It's great to know so many people are benefiting from the hard work put in by the bloggers; do look out for a few final posts appearing in the next few days.

It was also exciting to see our Twitter stream so densely populated by faces old and new. You can view an archive of tweets here (big thanks to @mhawksey and @chriskeene for this genius bit of Tweet gathering and analysis - there's also an alternative archive with some interesting content analysis here). Thanks to everyone who participated in this way - the backchannel discussions were a fascinating mix of additional perspectives and good-natured banter.

Elsewhere:
  • Conference photos will appear in due course here, thanks to the peerless @SimonPhotos - @daveyp has also put a few on Flickr while @arendjk has some lovely timelapses on YouTube
  • There are already lots of presentations online here
  • Videos of the plenary sessions will arrive here
  • @archelina has written a great summary of the whole conference here and @AnnMichael has managed to capture the Wednesday morning debate here
Thanks again to all involved - bloggers, tweeters, speakers, sponsors, delegates, exhibitors, volunteers, committee members, staff and especially to the SECC wifi team .. oh, wait. Maybe not ;-) Hope to see many of you at our one-day conference, "Rethinking Collections: approaches, business models, experiences" - put it in your diaries now! (15th November, London). Meanwhile, don't forget to enter our photo competition now that you've got your hands on our new logo!

Use and abuse of analytics in the search for value

Grace Baynes, Nature Publishing Group

Grace took the breakout group through the complex issue of value: what does it mean and how do we measure it?

If we define value as relative worth, utility or importance, we have a dizzying number of things that we can measure, from absolute to relative, quantitative to qualitative. With the advances in technology, we can measure more than ever before, but being able to measure every single thing is not necessarily a good thing.  

Using the Gartner Hype Cycle's stages of enthusiasm and disappointment triggered by new technologies, we can see we are heading toward the Plateau of Productivity with a lot of these new metrics, but some may have a way to go yet.

How do we pick through all the available information meaningfully?  Her advice is straightforward: think about what you really need to know, the questions that you need to ask, and then think about how exactly you want to go about discovering the answers.

The underlying areas of value measurement are still the journal, the article, the grant, the research output and the researcher, along with the Eigenfactor and Article Influence score, but how reliable are all these metrics?

Usage data continues to help institutions understand the uptake of their holdings decisions, and initiatives like COUNTER help benchmark and break down the information so that librarians can compare use across multiple publishers for their institutions.

The Journal Usage Factor allows the evaluation of journals and fields not covered by ISI as well as permitting the measurement of journals that have high undergraduate or practitioner use, although Grace noted that if you compare JUF with IF, you see very little correlation.

Cost Per Download should help us understand what is good and what is not so useful, but is there an absolute value to a download? Is the download you cited more valuable than the one that you just read? Recent research carried out by Nature Publishing Group shows that Cost per Local Citation might move us closer to evaluating the real impact of research, as might Cost per Local Authorship.

And what about the Eigenfactor, Peer Evaluation or altmetrics including tweets, likes and shares?

It is a bewildering task to try to measure all this data, and while products from Symplectic or Elsevier's SciVal can help gather and measure critical information, we have to think about what the most important factors for decision making are.

Grace then opened the floor to consider which metrics are important for the participants:
  
Common themes 
  • Information needed depends on who you are talking to and what is most meaningful, or will help academics keep the resource they need.
  • CPD is still important to feed back to the finance department, and some institutions compare CPD against document supply or ILL costs to get an idea of value for money.
  • Some institutions don't go into great detail, gathering data on the bundle or platform rather than individual titles. This is usually done to convince funders that the content being bought is useful.
  • Others have to go into detail to identify the best titles for the institution. This is due to funding restrictions.
  • CPD isn’t always accurate, as cheap journals aren’t necessarily good value for money, even if on the surface they look good.
  • Usage stats are helpful at a local level when deciding to buy or cancel, but download levels and costs per download vary from discipline to discipline.
  • Local citations and actual use may be more helpful to understanding value, but this is very time consuming.
  • There's a big call for access to denial data, to understand patron demand, but until recently one had to ask publishers for it, which is difficult if you don't have a relationship with the publisher. The next release of COUNTER will include denials.
Grace ended this highly interactive session with a caveat: we can't quote metrics in isolation; we need to contextualize. We must present metrics responsibly.

Wednesday, 28 March 2012

Fail fast and frequently

The role of games and play, and not being afraid to take risks, were the order of the day in this breakout session led by Ruth Wells.

Innovation is one of those things that can be surrounded by management speak and a feeling of "something I should be doing, but I don't know where to start". Or maybe that's just me?

To start the session Ruth led us on a journey through what innovation is and how it comes about. We discussed the role of games and play: innovation is chiefly the meeting point between insight and invention, and one of the ways to bring the two together is to be free to play.

How much time do you get in your working day to play?

We discussed this in small groups and it was clear that the "doorway discussion" was quite important in publisher working environments; for others who worked more remotely there was quiet contemplation time, but less opportunity for collaboration. In other organisations there was little or no time for this sort of play unless it was taken out of personal time, such as lunch breaks.

We then watched a video from Steven Johnson about the creation of good ideas. This introduced the concept of the slow hunch. The very best innovations are cumulative ideas that evolve over long periods of time, and during this time ideas are thrown out, reworked, refined and incubated until the innovation is born.

Hunches cannot progress in a vacuum; they are usually part-formed and need collisions in order to fuse into ideas. The great driver of scientific progress has been collaboration, and the internet, mobile devices and the increasing sociability of the world around us offer many new ways to connect with people who have the missing hunch we are looking for. Chance favours the connected mind.

The group then talked about how chance can be enabled within our organisations, including creating the right spaces and dedicated time for people to come together. Much like the doorway collaboration, a coffee area can provide inter-team discussion and spark new innovations by providing a fresh perspective on problems.

Then we discussed a company culture that allows play and discussion, and the drivers of this sort of experience:
  • a concise company mission
  • an understanding of organisational values
  • the strategic goals agreed and aligned
  • clear business objectives articulated
  • an understanding of the need for project planning and resources
  • buy-in from organisational leaders
It is not enough to say "Go innovate!"; the culture must come from the top and be accepted by everyone from employee to CEO.

We then talked about workshops as a means of achieving this culture and collaboration. One group suggested that a sort of speed dating for innovation, or, as I thought of it, a musical-chairs scenario, could work very well to mix up ideas between employees from different departments.

It was explained that capturing the results of workshops and closing each idea that was opened, no matter how off topic, was as important as the process of idea generation itself. The ideas that were left after this closing process need to be followed up and acted upon.

As a summary of how to enable this kind of culture, Ruth gave us the key points for leadership on innovation:
  1. Encouragement
  2. Leading by example
  3. Create space for discussion
  4. Actively give feedback on ideas
  5. Direct but do not control
  6. Accept the potential to succeed AND fail
  7. Provide resources and mechanisms to deliver ideas
I've highlighted point 6 as this was the major take home message from the session for me. There is no point trying to create a culture of innovation if you cannot allow those innovations to fail. Pursuing ideas involves risk, an evaluation of that risk is important in projects, but the idea generation in itself must be free of this risk assessment, lest it be curtailed by it. Ideas can be closed before the project stage if the risk is deemed to be great.

In order to highlight the importance of failing we watched a snippet of this presentation from Tina Seelig from the Stanford Technology Ventures Program, entitled Fail Fast and Frequently, where she explains that if you are not failing sometimes then you are not taking enough risks. As long as you learn from failure then what you are doing is worthwhile.

After a short departure from talk about gameplay into an actual game, in which we passed around bits of paper with ideas about the function of a publisher rather than the form, the discussion moved on to ideas as a response to a problem without a solution.

Radical ideas can be like gambling, and it makes sense for many organisations not to want to, or not to be able to, gamble; therefore, in closing out ideas it is important to have a common set of evaluation criteria.

These criteria will help with the creation of a roadmap to move your ideas and innovations into projects. Put your ideas into a four-stage funnel:
  1. filter
  2. research in more detail, considering the implications and lifetime costs
  3. develop
  4. provide ongoing support or abandon
Note that in step 4 there is still the possibility of an idea being closed. If at any point during the delivery process costs expand beyond the worth of the idea, it should be abandoned.

Finally, Ruth outlined some top tips for innovation in organisations:
  • define process and strategy first
  • define what innovation means to your organisation
  • do no harm, but don't be anti risk
  • prototyping can avoid technical ambiguity
  • look at innovation as a function of your whole business









Finding out what to cut, how far to go and getting users to champion the library in a healthcare setting

This plenary session, given by Anne Murphy, discussed the systematic approach taken by the library at the Adelaide and Meath Hospital in Ireland when facing library budget cuts of 25% in 2011 and a further 15% the following year.

It addressed the key questions: when every journal is seen as essential, what markers of true value can you assign, and how can you get your users to accept cuts within their department? Or, more precisely, how do you keep cuts fair without losing the engagement of library patrons?

The first point discussed was how openness about the planned cuts was important for retaining the library's champions within the hospital. Working against this were communication channels that were not always as effective as they should be. For example, use of the hospital email system is poor, so users had to be contacted by post to ensure good coverage. It is no good trying to be open if you cannot reach the people you need to tell.

The second point was that the project was not just about balancing the books (or journals). The library saw the cuts as an opportunity to promote itself with a "use it or lose it" message and to build credibility with the hospital's senior management.

How did they go about deciding what to cut? A three-pronged approach combined to give a rounded picture of a resource's value:
1. Cost per use
2. Responses from departments about value
3. The librarian's knowledge (e.g. is it a no-brainer keep, or a small niche journal?)

The library also tried to adhere to a few ground rules intended to retain the balance of the collection, such as one journal cut per department. All of this information and the ground rules were then used to assign each journal a category through a three-stage process:
1. The no-brainers
2. The very expensive or low download journals
3. Department or specialty cut

After stage three the budget was totalled and they were still short of a 25% reduction, so a fourth stage was introduced (a rough sketch of this staging logic follows the list):
4. Cuts to larger departments with more than one journal

After one last "sanity check" evaluation, 73 journals were finalised for cancellation, one quarter of the total collection.

The library published a report on the process, marking all cut journals in red and retained ones in green, to make the whole process transparent and again reinforce the idea that usage matters in retention decisions.

Comments from users on feedback forms demonstrated that expectations were successfully managed throughout the project, and the library did not suffer any disengagement despite the large percentage cuts.

In 2012, a cut of 15% in budget was proposed. The library underwent the same process with usage analysis and mailing out questionnaires, except this time they asked users to nominate one title for cancellation.

This time the comments were overwhelmingly negative, demonstrating that users felt the cuts had gone too far. Overall the process was much more difficult; the library expects a full report to be published to hospital staff soon, and for this to help galvanise users to defend the library against further cuts.

The next step is a survey about content discovery and literature use, and there is a possibility of documents on demand in the future, depending on the outcome of this surveying process.

Debate: The future for scholarly journals: slow evolution, rapid transformation – or redundancy?

Plenary Session 5
The first session of the last day at UKSG took the form of a debate between Cameron Neylon and Michael Mabe on the future for scholarly journals. There was an impressive turnout despite the 9am start and the Ceilidh the night before!

The transformation is already here - it's just unevenly distributed
Cameron Neylon

First presentation in the debate, arguing that the transformation is already here.

"Large institutions seek to preserve the problem to which they are the solution" - Clay Shirky

What do we mean by a journal?
Traditionally we have thought of a journal as having the following characteristics:
  • Journals contain articles
  • There is a process to select articles for inclusion
  • There is a publisher who manages the process
  • A journal will only belong to one publisher and a single article will only belong to one journal
  • There is a single version of record
How is technology changing the look of the 'journal'?
Neylon argued that new tools are changing how content is made available and that this should challenge our view of what a journal is.

He gave two examples of this:

WordPress - WordPress, the free blogging software, now supports many journal-type publications. The service is free and will only continue to get better. The software gives anyone the ability to put together a 'journal' in 10 minutes, and lots of free plugins are available to add functionality such as commenting or PubMed IDs for citations. Examples of journals on the WordPress platform include PLoS Currents: Disasters and the Journal of Conservation and Museum Studies.

Figshare - A site for sharing figures and metadata. It doesn't fit the traditional idea of a journal, but provides very useful information to researchers. For Neylon this raises the questions: what is the smallest useful piece of research, and why are we still tied to the idea of a journal article?

Do we need journals any more?
Neylon argued that you can get answers from Google that direct you into trusted databases. When looking for an answer to a specific question to progress an experiment, he compared Googling for the answer with looking at scholarly articles. After spending six hours collating the information from the articles he had what he needed, yet he had the answer in minutes from Google. Neylon said he would choose a database or Wikipedia over a journal article when looking for answers to specific questions, as it is just so much quicker.

"The research literature just has a poor user interface" Greg Gordon - SSRN

Neylon gave the examples of Stack Overflow and MathOverflow as great forums for finding answers.

Why do researchers still write articles?
Researchers are attached to journals, but why? Neylon argued this was more about prestige and wanting to feel like their research is important; collecting notches on their bedposts! As readers they hate the people who write articles, yet they keep writing articles themselves because they need them for advancement. This is just not sustainable.

What will the future look like?
Neylon argued that someone somewhere is going to figure out the best way to make the user interface work; if publishers don't do this, another player will. Once we stop presenting and consuming articles, people will stop writing them.

He suggested that the future would involve publication in smaller pieces (think Lego heads), which might then be built into larger things (think Meccano cars), with different pieces being put together to create exactly what the user wants, delivered differently to different audiences.

So why hasn't the journal changed more as a result of the internet?
Michael Mabe

Second presentation in the debate, arguing that the fundamental appearance of journal articles has remained, and will remain, remarkably unchanged.

Why hasn't the journal changed more?
Mabe argued that he wasn't defending a non-technical status quo, but that even though cross-referencing and linking between content is now the norm, the fundamentals seem unchanged.

Digital Incunabula argument
Mabe argued that the real revolution was not the introduction of printing, but the idea of the book itself with the introduction of the codex: splitting up the long scroll into pages. The structure of the book is deeply embedded in human culture, and two millennia of habit and utility are going to take some undoing.

Darwinian Angle
Mabe argued that researcher behaviour is key to understanding why the journal still exists, with a researcher having two very different modes: author mode and reader mode.

Author mode:
  • To be seen to report an idea first
  • To feel secure in communicating that idea
  • To have their claim accepted by peers
  • To report their idea to the right audience
  • To get recognition for their idea
  • To have a permanent public record

Reader mode:
  • To identify relevant content
  • To select based on trust and authority
  • To locate and consume it
  • To cite it
  • To be sure it is final and permanent
Functions of the journal à la Oldenburg
Henry Oldenburg outlined the key roles of academic journal publishing as registration, certification, archiving, dissemination and navigation. These roles are still seen today.

Generational Change?
Mabe argued that we are confusing the mass market with the scholarly market. How people act in professional life is different from how they act in private life. Researchers are still required to publish their work, and young researchers are actually more conservative than their older peers, as they need to make their name. There are NEW tools, but they serve OLD purposes: technology just enables greater efficiency. The system has evolved to satisfy the human needs of researchers; until these change, the scholarly article will remain the same. If an asteroid hit tomorrow and we rebuilt from scratch, we would likely create something very similar.

The debate

After both Neylon and Mabe had presented their arguments the session was opened up to debate, a summary of which is included below.

Neylon: The disagreement is over what the important pressures will be. The asteroid that could make us re-think things will be the public looking at what we are doing and saying it is not up to scratch. Researchers are very conservative, but there will be pressure to change.

Mabe: It is a case of publish or perish, but it is not just about career progression. Authors from industry, rather than academic institutions, are not promoted for publishing; they publish because they want to be recognised as the first person to think of an idea.

Gedye: We are getting little views through little windows, but what is going on in the room behind the window? Where are the PDFs in the examples shown? There are a lot of contradictions in what has been said.

Neylon: Loathes looking at PDFs; wants to read on screen. Senses a shift: people are no longer printing out PDFs.

Mabe: Download figures still show a predominance of PDF usage. The form of the article is more about establishing trust and authority than consumption. We have only so much time, so we want to read something we trust. There are two types of behaviour: information seeking and literature consumption.

Neylon: Most information is in text form in journal articles; people are using tools that sit on top of those.

Audience: There seems to be a disconnect between what early adopters think is important and what the mass market wants.

Twitter: Isn't this discipline specific?

Neylon: Within the physical and biological sciences there are smaller fragments that are useful. A lot more work needs to be done to understand the differences: the boundary between the smallest useful fragment and how it needs to be aggregated to be useful to different audiences. We are likely to end up with different forms in different disciplines.

Mabe: There is a tendency to paper over differences between disciplines. It is the idea that really matters. The sciences are concerned about speed of publication, which is not such a concern for other disciplines. Who does registration, and do you trust them to register it? This is where trusted third parties come in.

Twitter: Micro-publication will change behaviour and needs.

Mabe: If publication is reduced to a lower level, such as the paragraph, it could become more of a network of links.

Neylon: We will see a change in what researchers do. It will lower the burden of publication and authoring, a very expensive process; lots of things never get authored because it is too much work. But what can we do with the content?

Audience: The driving force is finding a more satisfying way to meet needs; we can make better ideas when we work together rather than alone.

Neylon: The Stack Exchange model: asking and answering questions, where you can up-vote responses to build reputation and then get more control to down-vote, remove comments, etc. Managed by the community; reputation is the key. A great place to find people with specific expertise. Registration and certification are still very important. It works in specific domains.

Mabe: Moderation is very important. The community needs to have confidence in something.

A very thought-provoking and lively session, and a great way to start the final day!


Tuesday, 27 March 2012

Mobilising your e-content for maximum impact

Breakout session 5 led by Ruth Jenkins (Loughborough University) and Alison McNab (De Montfort University).

The session kicked off with a brief overview of some of the mobile services currently offered. Most are based around issues, articles and browsing content, and are publisher-specific. This creates a number of issues:

  • The user has to know who publishes the journals they want to read (this also assumes they know what they want to read) and go and download the right app!
  • Users are publisher-agnostic; they just want the content and don't really care who the publisher is
  • Apps are often designed for browsing (issue to table of contents to article), whereas users want to search
  • No link with resource discovery systems such as Primo or Summon
  • No integration with reference management software
  • May not be available on all platforms - Device specific apps
  • Off campus access is often limited – not truly a mobile service

Positioning of the library

Mobile gives publishers opportunities to interact directly with end users; previously, libraries were the gatekeepers and directed users to the content.

Publishers overestimate how well end users know their brand; certainly undergraduates and early-career researchers don't know the publishers or, in some cases, the titles they should be focusing on. Libraries try to present everything they have access to, not publisher by publisher.

What are the challenges of mobilising your e-content?

At this point the post-it notes came out and the audience was asked to think about the challenges they face in mobilising e-content, both from the library and the publisher perspective.

Common issues for libraries:

  • No single place listing which publishers have a mobile offering
  • How to make users aware of the mobile sites/apps available
  • How to integrate mobile optimised links in the library catalogue
  • Support for a large number of interfaces and a lack of standardisation. How do you test access problems on multiple devices? Budgets don't extend to purchasing all types of devices, let alone keeping them up to date
  • Connectivity issues. Not everyone has or can afford 3G and wireless can be unreliable
  • Sites try to replicate all of the desktop functionality, but is this what users want?
  • Multiple authentication processes, hard to explain to users
  • Off-campus authentication: in some institutions, e.g. the Open University, there is no campus, or the student never comes onto campus
  • No way to search across apps
  • High student expectations
  • Licensing restrictions
Common issues for publishers:

  • Cost of development
  • Pace of technology change
  • Whether to create device specific apps
  • Providing user friendly tools to allow libraries and users to get the most out of mobile
  • What features to include

Kevin Ashley on the Curation of Digital Data


Curation is often thought of as a passive activity: once deposited, content is simply "preserved" into perpetuity. This couldn't be further from the truth, and Kevin Ashley, Director of the Digital Curation Centre, made the point deftly in his plenary talk at the end of day one of UKSG. If there was a person who could keep a group in rapt attention after a long day of sessions, it certainly would be Kevin.

Much like the curation of publications, active curation of research data is critical to its good stewardship. Curation implies active management and dealing with change, particularly technological change related to electronic information. Ashley made the point that while curation and preservation are linked, they are not synonymous activities. Curation is both slightly more and slightly less than preservation: it implies an active process of cutting, weeding and managing the content, and regularly deciding when things should be retired from the collection. Ashley also made the point that there are benefits to good preservation management: it can generate increased impact, add a layer of accountability, and address some legal requirements.
(Image: the DCC Curation Lifecycle Model, shown during Ashley's presentation.)

Interestingly, within the UK, while most of the Research Councils place data management policy expectations on the researcher, the Engineering and Physical Sciences Research Council (EPSRC) has begun putting the expectations onto the institutions, not the PIs. (Note: this correction applies only to the EPSRC, not all Research Councils as originally noted.) In part, the UK's system of educational funding allows for this type of central control over institutions. Each approach has its benefits, but from a curatorial perspective the institutional mandate will likely ensure a longer-term and more sustainable environment for preservation. The current mandate in the UK is that data be securely preserved for a minimum of 10 years from the last use. Realistically, this is a useful approach for determining what is most valuable: if data are being re-used regularly, then curating them for the next 100 years or more is a good thing. Any content creator would hope for that type of long-term continued use of their data.

The other aspect of data curation is supporting the data's eventual use. "Hidden data are wasted data," Ashley proclaimed. Again, it is important to reflect on why we are preserving this information: for use and reuse. This reinforces the need to actively encourage and manage digital data curation.

Particularly from a data-sharing perspective, data are more than an add-on to the publication process, but they also pose some other challenges. One example Ashley described is that "data are often living", by which he means that data can be updated or added to regularly, so the thing an institution is preserving is constantly changing. This poses technical problems as well as issues with metadata creation and preservation.

There are several ongoing projects related to scientific data curation, use and reuse. Those interested in more information should look to some of the reports that the DCC has published on What is Digital Curation?, Persistent Identifiers, and Data Citation and Linking. There is also a great deal of work being undertaken by DataCite and the Dryad project. NISO and NFAIS are working on a project on how best to tie supplemental materials to the articles to which they are related; one question this project is addressing is who in the scholarly communications community should be responsible for the curation of these digital objects.

One might well reflect on one of the quotes with which Ashley began his presentation:
 “The future belongs to companies and people that turn data into products”
-- Mike Loukides, O’Reilly. 
If this is really to be the case, ensuring those data are available for the long term will be a crucial element of that future.


Marshall Breeding on the future of web-scale library systems


Every information management business seems to be moving "to the cloud". Over the past few years, a variety of library software providers have been applying this model to a rapidly growing segment of the library community, and the technology that libraries use to manage their operations is undergoing significant change and transformation. Marshall Breeding, Director for Innovative Technology and Research at Vanderbilt University Library, presented during the second plenary session, The Evolving Library. Breeding's talk, "The web-scale library – a global approach", focused on the opportunities that this move could bring.
(Image: Breeding's slide of current LMS/ERMs.)


Breeding began with the observation that current library management systems are overly print-focused, siloed and suffering from a lack of interoperability. In addition, the online catalogue, as a module of most ILSs, is a bad interface for most of the resources patrons are interested in; for example, an OPAC's scope doesn't include articles, book chapters or digital objects. Libraries don't have the appropriate automation infrastructure, and while this creates significant challenges, it also presents an opportunity for libraries to rethink their entire technology stack for resource management.

Moving library information from in-house servers to a cloud solution provides a variety of benefits and cost savings. There are the obvious savings on hardware purchases, regular maintenance, power, and system updates and patches. However, this is really not the core benefit of a cloud solution: Breeding described simply having data hosted on the network as the simplest and least interesting benefit. He focused more on the potential benefits and efficiencies of having a single application instance and a cooperatively collected and curated data set.

(Image: Breeding's vision of how new library management systems will be integrated.)
In Breeding's view, the future consideration of which system to select will not be based upon features or services; all systems providers will end up concentrating on a similar set of services. What will distinguish and differentiate products will be how open the systems are. Of course, as many of these systems move increasingly to an integrated service suite, fewer libraries will want or need to patch some other service onto the system. He also made the interesting note that since the launch of these new systems, the pace of implementations has skyrocketed.

Breeding covered a tremendous range in his talk, so one can't be critical of what wasn't included. That said, here are some questions this move will eventually elicit: who can claim ownership of data that is collectively gathered and curated? What is specifically one institution's versus another's? Once an institution moves into a web-scale system based on a collective knowledgebase, how might it transfer to a new provider, and what data could it take with it? A great deal of these issues will be the focus of many conversations and best-practice developments over the coming years as libraries work with these new systems.

Marshall tweets @mbreeding and blogs regularly on library technology issues at http://www.librarytechnology.org.