David O. McKay Library – BYU-Idaho

Harold B. Lee Library – BYU-Provo

Howard W. Hunter Law Library – J. Reuben Clark Law School, BYU-Provo

Joseph F. Smith Library – BYU-Hawaii

LDS Business College Library

LDS Family History Library

A report on observations made from data collected from respondents at the participating libraries of the CES Library Consortium during the spring 2004 LibQUAL+™ survey, including an assessment of comments

Prepared by

Brian C. Roberts

Process Improvement Specialist

Harold B. Lee Library

May 2004

INTRODUCTION

LibQUAL+™, the library service quality assessment tool of the Association of Research Libraries (ARL), was administered at the CES libraries during spring 2004. This report points out some contrasts among the libraries in the consortium, drawn from both the quantitative and qualitative data the survey generated.

EXECUTIVE SUMMARY

Response rates at the various libraries of the CES Library Consortium were very favorable and representative of the population of each respective institution. Perceptions of respondents were similar in some respects but, as would be expected, varied in others.

Overall, the quantitative data showed the following tendencies:

1)  Library employees were perceived as courteous, caring and responsive,

2)  Efforts should be made to improve the variety and accessibility of electronic resources,

3)  Effectiveness of library Web sites could improve.

In addition, there were areas where some libraries shone and others fell short. For instance, at half the institutions, respondents felt the library was a space that inspires study and learning; at the other half, respondents felt that aspect stood in need of improvement. In those instances, as well as in the points cited above, it is important for everyone to assess best practices and seek ways to improve.

Other quantitative results of note included:

1)  Patrons felt they were well treated and rated their overall satisfaction with the quality of service at each of the libraries highly,

2)  Improvement could be made in the support of patron learning, research and/or teaching needs,

3)  The libraries could do more to help patrons stay abreast of developments in their respective fields of interest and to help them better distinguish between trustworthy and untrustworthy information,

4)  Patrons still use, and probably will continue to use, non-library gateways such as Yahoo™ and Google™ as their primary initial tool for searching for information.

The qualitative data from comments were plentiful, and several themes emerged from them. In most cases, survey respondents found their respective library wonderful, but some needs became apparent:

1)  Patrons want their libraries to address noise issues and to provide more space and other physical resources,

2)  Personnel are generally well thought of, but steps to improve relations and interactions with patrons are needed,

3)  It may be well worth the effort at each institution to look at extending hours, providing more resources and help in using those resources, upgrading their respective Web sites, and increasing the types of and access to online electronic resources.


SURVEY GENESIS

Following the completion of the spring 2003 LibQUAL+™ survey at the Lee Library at BYU-Provo, the libraries of the CES Library Consortium approached the Lee Library about the possibility of coordinating assessment activities to evaluate patron satisfaction and observe best practices at each of the institutions. Given the success the Lee Library had had with LibQUAL+™, it was proposed that the consortium participate in the spring 2004 survey. The consortium agreed, and during the spring of 2004 the Lee Library, the Hunter Library of the J. Reuben Clark Law School, the McKay Library at BYU-Idaho, the Smith Library at BYU-Hawaii, the LDS Business College Library and the Family History Library in Salt Lake City participated with 209 other libraries from around the world in ARL’s LibQUAL+™ survey to assess library service quality.

The intent of this effort as set forth by ARL and the LibQUAL+™ staff is to:

·  Foster a culture of excellence in providing library service,

·  Help libraries better understand user perceptions of library service quality,

·  Collect and interpret library user feedback systematically over time,

·  Provide libraries with comparable assessment information from peer institutions,

·  Identify best practices in library service,

·  Enhance library staff members’ analytical skills for interpreting and acting on data.

The Lee Library experienced this in a very real sense through its participation in the 2001 and 2003 surveys, and the hope has been that the CES Library Consortium will benefit in a similar fashion. Formal reports of the results from the 2004 survey have been prepared by ARL and Texas A&M University for each institution that participated in the survey, as well as for specific groups and consortia, and have been disseminated to each of the institutions for review. In addition, a significant aspect of the survey is a comment box at its end where respondents were asked to provide any other comments they might have about library services. The formal reports did not include any qualitative analysis of the information provided in those comments.

It should be noted that the comparisons in no way imply that any one institution is better than any other in any given area. The results from the survey data simply show that patrons perceive their institution differently than patrons at another institution do. The hope is that where one institution’s patrons feel it is doing well in a given area, the other institutions can work with it to learn where they may be able to improve in that area. In keeping with LibQUAL+™ requests/requirements concerning the dissemination of results, where actual figures from the survey (average scores and associated charts from the quantitative data) are shown for the respective institutions, the names of the institutions have been removed. This criterion does not apply to demographic data or to the comments.

SURVEY ADMINISTRATION SUMMARY

Due to the varying nature and size of the six participating libraries, different criteria were used to sample respondents from each. Since its inception, the LibQUAL+™ minimum required sample size for a large academic library has been 900 undergraduates, 600 graduate students and 600 faculty/staff. LibQUAL+™ also recommended sampling a few extra individuals in the event some email addresses proved invalid and needed replacement in the sample. Participating institutions are asked to follow these criteria to the extent possible. Since each library in the consortium serves a different set and number of patrons, not all could meet the above expectation. Therefore, alternatives were devised to optimize response to the extent possible and provide an adequate reflection of patron perception of library services.

The survey itself was conducted during the better part of March (the 8th through the 31st). Each institution sent separate invitations to its sample groups, with follow-up messages each succeeding Monday plus whatever additional follow-ups its site coordinator deemed necessary to improve response as much as possible.

In the end, participation at each institution was more than satisfactory. The actual numbers were 1795 at the Lee Library, 310 at the Hunter, 1214 at the McKay, 430 at the Smith, 858 at LDSBC, and 3916 at the Family History Library. When these figures are translated into response rates, they varied at the academic units from 51.2% at Idaho to 59.7% at LDSBC (because of the nature of the data collected at FHL, no such rate could be determined). However, for varied reasons, not all those who attempted to take the survey at any of the libraries actually completed it, nor were all the surveys considered valid. A survey was deemed completed if all required questions were answered. A survey was considered valid if all core questions were answered with fewer than 12 “NA” answers and fewer than 10 invalid answers (for example, a minimum score greater than the desired score). The chart below (Figure 1) shows the number of completed and valid surveys for each of the institutions. In general, the number of valid surveys was less than the number of completed surveys; the exception was at FHL, where the reverse was true.

Figure 1 - Number of Completed & Valid Surveys
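
These completion and validity screens can be expressed as a simple programmatic check. The sketch below is a minimal illustration only; the survey data structure and question keys are assumed for the example and do not represent the actual LibQUAL+™ processing code.

    # Minimal sketch of the completion/validity screens described above.
    # The survey data structure and question keys are hypothetical: each
    # answer is either the string "NA" or a dict of minimum/desired scores.

    def is_completed(survey, required_questions):
        """A survey counts as completed if every required question was answered."""
        return all(survey.get(q) is not None for q in required_questions)

    def is_valid(survey, core_questions):
        """A survey counts as valid if every core question was answered,
        with fewer than 12 'NA' answers and fewer than 10 invalid answers
        (an answer is invalid when its minimum score exceeds its desired score)."""
        na_count = invalid_count = 0
        for q in core_questions:
            answer = survey.get(q)
            if answer is None:
                return False                    # a core question was skipped
            if answer == "NA":
                na_count += 1
            elif answer["minimum"] > answer["desired"]:
                invalid_count += 1
        return na_count < 12 and invalid_count < 10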

From this information two additional “rates” can be determined. The completion rate (the number of surveys completed divided by the total number of responses) for each institution was fairly high, with the exception of the FHL. The Hunter Law Library had the highest completion rate at 61.3%. Lee was next at 55.9%. The rates at the other academic institutions were 44.8% at LDSBC, 42.6% at Hawaii, and 39.9% at Idaho. The completion rate at FHL was only 20%.

The effective response rate (the number of valid surveys divided by the final sample size) was very respectable at each institution and within the mid-range to upper range of rates seen historically at LibQUAL+™. Again, Hunter had the highest rate at 32.5%, with Lee next at 29.2% (which exceeded its 2003 rate by more than 1%). LDSBC had the next highest at 25.2%, followed by Hawaii at 22.0% and Idaho at 19.8%. Again, no rate was possible for FHL since it did not have a “sample size” from which to determine that figure.
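
Both rates reduce to simple division over the raw counts. In the sketch below, the total-response figure is Hunter’s from above, while the completed, valid, and sample-size counts are back-calculated assumptions chosen only to reproduce the quoted rates.

    # Derived rates from raw survey counts. total_responses is Hunter's
    # actual figure; the other three counts are illustrative assumptions.
    total_responses = 310    # everyone who "viewed page 1" of the survey
    completed = 190          # answered all required questions (assumed)
    valid = 176              # passed the NA/invalid-answer screens (assumed)
    sample_size = 542        # final number of invitations in the sample (assumed)

    completion_rate = completed / total_responses    # 190/310 = 61.3%
    effective_response_rate = valid / sample_size    # 176/542 = 32.5%

    print(f"Completion rate:         {completion_rate:.1%}")
    print(f"Effective response rate: {effective_response_rate:.1%}")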

A couple of points should be noted about the validity of these rates. A “response” was recorded by LibQUAL+™ whenever anyone attempted to take the survey (they referred to this as “viewing page 1” of the survey). However, if a survey was not completed, there was no way to know why. It was therefore entirely feasible that some individuals may have initially attempted the survey, failed to complete it for whatever reason, and later returned to make a second attempt (or conceivably more). And given the anonymous nature of the survey, there was no way to know if an individual took the survey more than once. Historically, according to the administrators at LibQUAL+™, given the complicated nature of the survey, repeat takers have been very few. Moreover, if an individual attempted to take the survey again to better their chances of earning the incentive, a survey submitted under the same email address a second time was deleted.
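
The same-address screen amounts to a one-step deduplication. A minimal sketch with pandas follows; the file name and column name are hypothetical.

    import pandas as pd

    # Sketch of the duplicate screen described above: when the same email
    # address (entered for the incentive drawing) appears more than once,
    # only the first submission is kept and later ones are dropped.
    responses = pd.read_csv("responses.csv")
    responses = responses.drop_duplicates(subset="email", keep="first")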

As touted by LibQUAL+™, what really counts is representativeness – how well the final numbers match the demographic profile of the respective institution. For example, if an institution’s male-to-female breakdown were 50/50 but the survey response was 30/70, the results would not necessarily be representative of the population, and inferences from the responses would need to be couched in terms that reflect that disparity. The breakdown by gender for each of the CES institutions is summarized in Table 1 below.

Table 1 – Response Summary

Institution      Gender   Population N   Population %   Response N   Response %
Family History   Male     n/a            n/a            226          28.75%
                 Female   n/a            n/a            560          71.25%
BYU-Hawaii       Male     n/a            n/a            73           41.01%
                 Female   n/a            n/a            105          58.99%
Law School       Male     349            63.92%         105          56.76%
                 Female   197            36.08%         80           43.24%
BYU-Idaho        Male     5,498          46.30%         212          45.20%
                 Female   6,377          53.70%         257          54.80%
LDSBC            Male     548            40.83%         102          28.18%
                 Female   794            59.17%         260          71.82%
BYU-Provo        Male     18,244         52.50%         477          50.05%
                 Female   16,505         47.50%         476          49.95%

Note that population figures were not possible for FHL, and BYU-Hawaii did not provide those figures to LibQUAL+™ for inclusion in its report. For the other institutions, BYU-Provo and BYU-Idaho had response percentages virtually equal to their respective population percentages. Hunter, though not as close as Provo and Idaho, had figures near enough to assume that responses by gender were representative of the Law School population. LDSBC’s figures were somewhat more heavily weighted toward females than the population would indicate, but still relatively representative.
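
One way to quantify “close enough” is a chi-square goodness-of-fit test comparing the observed response counts with the counts expected from the population shares. The sketch below applies SciPy’s chisquare function to the BYU-Provo figures from Table 1; the test itself is offered only as an illustration and is not part of the LibQUAL+™ methodology.

    from scipy.stats import chisquare

    # Gender figures for BYU-Provo from Table 1.
    observed = [477, 476]               # male, female respondents
    population_share = [0.525, 0.475]   # male, female population shares

    n = sum(observed)
    expected = [share * n for share in population_share]

    # A large p-value (conventionally p > 0.05) indicates the respondent
    # mix is statistically consistent with the campus population.
    stat, p_value = chisquare(observed, f_exp=expected)
    print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")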

A further breakdown that allows comparison among the academic institutions is by User Group – Undergrad, Grad, Faculty, Lib Staff, and Staff – shown in Figure 2 below (this breakdown does not apply to the Law School). As expected, the majority of respondents at each of the institutions were undergraduates.

It was interesting to note, however, that Idaho, Hawaii and LDSBC all had “Graduate” responses when none of those institutions has a graduate program. The explanation stems from the nature of the emails sent out. The email databases for each most likely included individuals who had attended those institutions but had recently graduated and/or gone on with their studies at other schools. For them, the only logical response to that demographic question would have been “Graduate.”

Figure 2 - Academic User Group Breakdown

Another option that allows for comparisons across all the institutions is age. All respondents, regardless of institution, were asked to provide an age demographic; the summary can be seen in Figure 3. As expected, the majority of respondents from the academic institutions fell into the 18-22 or 23-30 groups. In contrast, though again as expected, most respondents at FHL were 46 or older. When compared with the academic institutions, the percentages for those older age groups at FHL very closely mirrored the age percentages seen among faculty respondents.