Subgroup 4: Use & Users, Web Stats, Circulation, Gate Counts, and Community Impact

INTERIM REPORT – APRIL 1, 2013

Based on a subgroup conference call held on Monday, March 25.

For a recording of the meeting, visit:

Subgroup members and participants: Elizabeth Call, Christian Dupont, Mark Greenberg, Emilie Hardman

Subgroup 4 was charged with considering the metrics and assessment approaches relevant to special collections and archives “use and users,” including “circulation, gate counts, and community impact.” Much of our discussion, during our most recent meeting as well as in previous meetings, has gone toward rationalizing and disambiguating our domain from the domains assigned to the other task force subgroups.

To fulfill our task of preparing and submitting a preliminary subgroup report for discussion at our Midwinter task force meeting, we segmented our domain into five discrete sets of services or situations in which users encounter and use special collections materials, namely:

  • Reading Room Services
  • Reproduction Services
  • Interlibrary Loan Services
  • Physical Exhibits
  • Website Usage

We defined and delimited these areas as best we could in order to avoid overlap with other areas of special collections/archives operations and activities, such as reference, instruction, exhibit preparation, and collections processing.

While this approach enabled us to identify and comment upon the availability and relevance of metrics used to gather data and evaluate similar types of services in other areas of the library, it tended to emphasize the service at the expense of the user. Upon further reflection, this made it difficult for us to consider questions relating to user impact, which appear to us to be the major concern of assessment: whether and how what we do as librarians and archivists brings value to the users of our collections and services.

In fact, none of the above bulleted service areas provides any direct information about the characteristics of our library and archives users or their expectations and experiences of our services. At best, the preceding approach would yield measures of productivity and efficiency, and while such measures are arguably important from a library management standpoint for assessing relative input/output costs, they tell only half the story, if that.

What we really want to know, and what would be more relevant to the larger conversation about the value of academic libraries, is whether our services are effectively reaching our targeted audiences and meeting their research needs (or, in the case of members of the general public who use or otherwise encounter our collections, their desires for educational enrichment).

With that thought in mind, during our last meeting we went back to considering our task as a subgroup, and our potential contribution to the final task force report, from the perspective of our original rubric of “use and users,” or rather by putting the users first and considering value from their perspective: “users and use.”

From this premise, we tried to rationalize metrics that would support assessments of user value, along with evaluations of our productivity and efficiency in delivering services, along two essential lines:

  • Metrics describing our visitors
  • Metrics describing their use of our services

As we thought about it more, it occurred to us that these same two dimensions could also be applied to the services examined by other subgroups, especially in the areas of instruction and reference, which are likewise targeted to external users. As we did so, however, we recognized that the operational domains covered by the first two subgroups would not fit into these two dimensions. Hence a third dimension of collection management emerged in our thinking:

  • Metrics describing collection management functions

Reflecting further, it occurred to us that there are likewise three dimensions or directions in which we exercise responsibility as librarians and archivists:

  • We are responsible to our users for providing access to collection materials and information about them suited to their immediate and individual purposes.
  • We are responsible to our collections for preserving and safeguarding them for potential use by future researchers.
  • We are responsible to our resource providers for making the best use of the various resources they provide us to support the above functions of access and preservation.

These three axes of responsibility would seem to align well with the axes of descriptive metrics mentioned previously.

So where does all this get us?

It seems to us that this approach might yield a rationale and framework for organizing our overall task force report and the recommendations we will ultimately make to our RBMS Executive Committee for creating and charging successor task forces to develop the kinds of guidelines and best practices we want to help our profession create.

It also occurred to us that this framework might provide a better way of formulating certain types of metrics: better in the sense that the resulting metrics would more readily facilitate the kinds of analyses and comparisons that get at questions of user perceptions and experiences of value.

For example, while it would certainly be possible to develop a “circulation” metric for material usage in a supervised reading room setting that corresponds to the metrics used for counting circulation “checkouts” in main library operations, there may be more utility in conceiving of “circulation” not as intrinsically tied to the reading room setting but as a function through which an individual user gains access to a particular collection item. From that vantage, the access and potential research/enrichment use that a user gains from examining an item in a reading room may not be qualitatively different, from the perspective of the user’s research need, from examining a digital surrogate obtained by photographing the item in the reading room with a personal camera, by requesting the repository to make a reproduction, by requesting such a reproduction via an interlibrary loan or document delivery service, or, for that matter, by finding such a reproduction through a search of a digital repository or discovering that it is linked to an online finding aid or exhibit. Unless the user’s research purpose involves a close examination of the physical characteristics of an item that cannot be supported by a surrogate, the function of access is essentially the same in every case, and hence a “circulation” metric might be applied equally to all of these “transactions” (although it might more aptly and appropriately be called an “access” rather than a “circulation” metric).

A metric counting the number of times users request access to items held by the repository, by whatever means, might enable repositories of various types to articulate a comparable user impact factor. For instance, a repository with relatively low onsite reading room usage but high digital document delivery might compare favorably, in terms of fulfilling its mission of providing user access, with a repository that serves large numbers of onsite users but provides relatively little in the way of virtual access. The comparisons would be all the more meaningful if the definitions of what constitutes an “access” count were comparable whether the requested material is examined in a reading room setting or in the user’s home or office by means of a surrogate. From this perspective, it would perhaps make more sense to count multiple “checkouts” of the same item by the same user during successive reading room visits as one “access,” rather than tallying each checkout as a “circulation” statistic, so that “onsite” and “virtual” access could be treated equally. Arguably, though, from a reading room management perspective, the number of checkouts required to support user access may affect reading room staffing considerations, so a distinct definition and metric for assessing staff productivity and efficiency might be needed to complement a user-oriented definition and metric for access.
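
To make the distinction concrete, the following is a minimal sketch, in Python, of how the two metrics might be computed from a single transaction log. Everything in it (field names, channel labels, sample records) is a hypothetical illustration rather than a proposed standard; the only assumption is that each request can be recorded with a user, an item, and a delivery channel.

    from dataclasses import dataclass
    from datetime import date

    @dataclass(frozen=True)
    class Transaction:
        """One user request for one collection item, via any channel.
        All field names are illustrative, not a proposed standard."""
        user_id: str   # registered researcher
        item_id: str   # call number, accession number, etc.
        channel: str   # "reading_room", "reproduction", "ill", or "digital"
        day: date

    def access_count(transactions):
        """User-oriented metric: distinct (user, item) pairs, so that
        repeated checkouts of the same item by the same user count as
        a single access, whatever the delivery channel."""
        return len({(t.user_id, t.item_id) for t in transactions})

    def checkout_count(transactions, channel="reading_room"):
        """Operational metric: raw transactions on one channel,
        relevant to staffing and workload rather than user impact."""
        return sum(1 for t in transactions if t.channel == channel)

    # One researcher consults the same volume on three successive
    # visits and later orders a scan; a second user views the scan.
    log = [
        Transaction("u1", "MS-101", "reading_room", date(2013, 3, 4)),
        Transaction("u1", "MS-101", "reading_room", date(2013, 3, 5)),
        Transaction("u1", "MS-101", "reading_room", date(2013, 3, 6)),
        Transaction("u1", "MS-101", "reproduction", date(2013, 3, 20)),
        Transaction("u2", "MS-101", "digital", date(2013, 3, 22)),
    ]
    print(access_count(log))    # 2: u1 and u2 each gained access to MS-101
    print(checkout_count(log))  # 3: raw reading room checkouts

The point of the sketch is that both figures derive from the same records; only the counting rule differs, so the user-oriented access metric and the management-oriented checkout metric need not require separate record keeping.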

This same type of approach—formulating one set of definitions and metrics for assessing impacts and outcomes from the user’s perspective and a separate, distinct set for assessing operational productivity and efficiency—might also be applied to other repository services that are directed to external users (as distinguished from collection management functions that are directed to preserving and describing collection materials).

For example, definitions and metrics for assessing reference request services (an activity included in the domain of Subgroup 3) from the perspective of user satisfaction and success could be developed to apply to both onsite/in-person interactions and email/virtual interactions, while a distinct set of definitions and metrics could be developed for assessing staff inputs and outputs related to the delivery of such services.

One challenge we debated but did not resolve in trying to apply this framework to the scope of service areas we were asked to cover as a subgroup is the concept of a visit. What kind of meaning, from an assessment standpoint, does a “visit” have that is distinct from an “access”? What value might there be in tracking aspects of a “visit” in addition to an “access”? Answers to these questions would seem to depend on how broadly we define the term “visit.” Is a visit to a reading room in some ways analogous to a visit to a physical exhibition? If the answer to that question is affirmative, then how different is a visit to a virtual exhibition, or, for that matter, to a repository’s main website? Certainly there are differences, but is it useful to distinguish them by the characteristics of how the spaces of the various encounters are deployed or managed? Or would it be more useful to distinguish them from the standpoint of a user’s intentions in initiating a visit? Might it not be more useful, in assessing whether a repository is fulfilling its mission or in evaluating how well it does so in comparison to other repositories, to gather information about the expectations users bring to their visits, whether in person or virtual, and whether those expectations are satisfied? Supporting such assessments across a sector of repositories, or even across the range of a single repository’s services, would seem to require some common terminology for defining user intentions.

For example, in registering reading room visitors, it could be useful (as some repositories are accustomed to doing) to ask users to describe the end product of their research: are they seeking materials for a class assignment, for a professional publication, for personal interest, etc.? Likewise, collecting information about user demographics can help repositories understand more about the user communities they serve. Such information can be correlated with information gleaned from “access” records to provide a fuller picture of usage and to assess, from a quantitative and even qualitative perspective, the characteristics of users’ “visits.” In this scenario, the essential concept of a “visit” that emerges is that of a time-bound encounter with a service provided by the repository for some definite user purpose. In this regard, a user visit to a repository website to view a digital object for a research purpose is not essentially different in character from a visit to a reading room to view the same object in tangible form. In fact, by regarding them as comparable in intention and character, one can begin to compare the outcomes of the respective “visits”: how well was the user able to satisfy his or her research interest in each type of visit? Similar scenarios could be developed for examining and assessing reference transactions, instructional sessions, and exhibits, insofar as the user’s encounter with each can be described from the perspective of how the user’s intentions are satisfied within the temporally (or even spatially) bound framework of the encounter.
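
As a minimal illustration of this kind of correlation, the following Python sketch joins hypothetical registration records (stated research purpose, broad affiliation) with hypothetical access records. All field names, categories, and sample values are invented for the example and are not proposed standards.

    from collections import Counter

    # Hypothetical registration records captured at sign-in (onsite)
    # or account creation (virtual); categories are invented examples.
    registrations = {
        "u1": {"purpose": "professional publication", "affiliation": "faculty"},
        "u2": {"purpose": "class assignment", "affiliation": "undergraduate"},
        "u3": {"purpose": "personal interest", "affiliation": "general public"},
    }

    # Access records keyed by user, whether onsite or virtual.
    accesses = [
        ("u1", "MS-101", "reading_room"),
        ("u2", "MS-101", "digital"),
        ("u2", "MS-204", "digital"),
        ("u3", "Broadside-7", "exhibit"),
    ]

    # Correlate: how much access does each stated purpose generate,
    # and through which channels?
    by_purpose = Counter()
    by_purpose_and_channel = Counter()
    for user_id, item_id, channel in accesses:
        purpose = registrations[user_id]["purpose"]
        by_purpose[purpose] += 1
        by_purpose_and_channel[(purpose, channel)] += 1

    print(by_purpose.most_common())
    print(by_purpose_and_channel.most_common())

Even a tabulation this simple would let a repository ask whether, say, class-assignment use arrives mainly through digital channels, which is the kind of question about the character and intention of “visits” raised above.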

We believe that the above approach to examining the characteristics and relationships of users and use will be helpful in defining useful metrics and assessment methodologies for special collections and archival access. And because these metrics are interrelated, we believe that a single task force should be appointed to work out their definitions and develop guidelines for applying them.

Going a step further, it seems to us that the assessment of services offered to support collections access by external users may be functionally and practically distinguished from the assessment of services performed by repositories to preserve their collections. In the latter case, the purpose of conducting assessment cannot be described with direct reference to the intentions of a specific user, but rather as the fulfillment of repository staff intentions to achieve outcomes they believe are in the long-term general interest of potential users. How well a particular book repair treatment compares to another in terms of quality and cost, for example, is a question that can be answered effectively only with reference to other possible treatments, since the evaluation concerns the outcome of the treatment with respect to the object treated rather than the user who might ultimately consult it.

Because the framework and perspective for assessing internal operations differ from those for assessing services to external users, it seems to us that a separate task force should be appointed to work out the metrics definitions and application guidelines for the former. It would of course make sense for the two task forces to coordinate their work at various points, to ensure that the metrics and guidelines they respectively produce exhibit an overall coherence, so that repositories will find it sensible and practical to use the products of both.