
Colleen Quinn

Digital Libraries 17:610:553

Term Project Final Report

What Makes a “Good” Digital Library?

Abstract

While research and development in digital libraries have grown, evaluation has fallen by the wayside. It is imperative that library professionals take the time to determine and consider evaluation criteria in order to create “good” digital libraries. But what exactly makes a “good” digital library? This paper reviews current literature on digital libraries and evaluation, and it investigates the aspects and qualities of digital libraries that make them successful and user-friendly. Sections of this paper include a discussion of what a digital library is, specific evaluation criteria, and user feedback. The paper concludes with a list of qualities of a “good” digital library.

Keywords: digital libraries, evaluation criteria, user-friendliness

Introduction

Digital libraries are becoming more and more prevalent, yet there does not seem to be a standard set of evaluation criteria for them. Evaluation should be an integral part of studying digital libraries, but instead it is often skipped or skirted in favor of the more “exciting” aspects of researching and developing digital library projects. One reason evaluation criteria are rarely discussed is that there is no single definition of what constitutes a digital library. Without this uniformity it becomes difficult to create a standard set of evaluation criteria for all digital libraries. However, if the evaluation criteria were both comprehensive and adaptable, they could be customized appropriately for each particular digital library. In any case, it is imperative that the qualities that make a “good” digital library be identified.

Literature Review

What is a Digital Library?

Perhaps the first step is to determine what exactly constitutes a digital library. It seems that knowing what one is dealing with should be the basis for evaluation. Yet digital libraries come in all shapes and sizes; as yet there is no “one-size-fits-all” description for them. Saracevic (2000) points to the ambiguity in digital libraries when he suggests, “A simplistic answer is that whatever is called a ‘digital library’ project is therefore considered a digital library, thus a construct candidate for evaluation” (p. 360). However, he does concede that a more formal definition is necessary if more generalizations are to be made.

Gonçalves, Moreira, Fox, and Watson (2007), in the spirit of defining what exactly a digital library is, present the “5S” model for the framework of digital libraries: streams, structures, spaces, scenarios, and societies (p. 1416). The authors describe their method as follows:

“For each major DL concept in the framework we formally define a number of dimensions of quality and propose a set of numerical indicators for those quality dimensions. In particular, we consider key concepts of a minimal DL: catalog, collection, digital object, metadata specification, repository, and services. Regarding quality dimensions, we consider: accessibility, accuracy, completeness, composability, conformance, consistency, effectiveness, efficiency, extensibility, pertinence, preservability, relevance, reliability, reusability, significance, similarity, and timeliness. Regarding measurement, we consider characteristics like: response time (with regard to efficiency), cost of migration (with respect to preservability), and number of service failures (to assess reliability)” (Gonçalves, Moreira, Fox & Watson, 2007, p. 1416).

Within this explanation, Gonçalves, Moreira, Fox, and Watson suggest that a digital library should at least have a collection, catalog, digital object, repository, metadata specification, and services. While these concepts do establish some sort of definition for digital libraries, they notably leave out any specific mention of size or particular kinds of materials as a necessary aspect.
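To make these quality dimensions more concrete, the short sketch below (written in Python) pairs three of the dimensions named by Gonçalves, Moreira, Fox, and Watson with the numerical indicators they mention, such as response time for efficiency. The class name, sample values, and targets are hypothetical and are included only to illustrate how such indicators might be recorded and checked against locally chosen goals.

# Illustrative sketch only: pairing a few of Gonçalves et al.'s (2007) quality
# dimensions with numerical indicators. The dimension names and indicators come
# from the article; the sample values, targets, and class name are hypothetical.

from dataclasses import dataclass

@dataclass
class QualityIndicator:
    dimension: str        # e.g., "efficiency", "preservability", "reliability"
    indicator: str        # the numerical measure used for that dimension
    value: float          # observed measurement
    target: float         # locally chosen target, not prescribed by the model
    higher_is_better: bool = True

    def meets_target(self) -> bool:
        # Compare the observed value with the local target.
        if self.higher_is_better:
            return self.value >= self.target
        return self.value <= self.target

# Hypothetical measurements for one digital library
indicators = [
    QualityIndicator("efficiency", "mean response time (s)", 1.8, 2.0, higher_is_better=False),
    QualityIndicator("preservability", "cost of migration (USD/GB)", 0.12, 0.10, higher_is_better=False),
    QualityIndicator("reliability", "service failures per month", 3, 5, higher_is_better=False),
]

for ind in indicators:
    status = "meets" if ind.meets_target() else "misses"
    print(f"{ind.dimension}: {ind.indicator} = {ind.value} ({status} target of {ind.target})")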

The authors ultimately conclude that their goal was “to identify constraints and tradeoffs needed to ensure adequate quality, to plan how to monitor system behavior in order to facilitate evaluation, and to set priorities that will help ensure desired levels of performance as well as desired collection characteristics” (Gonçalves, Moreira, Fox & Watson, 2007, p. 1436). The persistent openness of their proposed definition for digital libraries highlights the fact that the quality of content and services can still be evaluated regardless of the relative size of a particular digital library. In other words, perhaps the lack of effort in establishing evaluation criteria shouldn’t be pinned simply on a flexible definition of digital libraries.

Determining Evaluation Criteria

Hariri and Norouzi (2011) attempt to establish a framework of evaluation criteria specifically for the user interface of digital libraries. They focus on user interfaces because “they are treated as a gateway for entering DLs information environment” (Hariri & Norouzi, 2011, p. 718). The authors determined some 22 different criteria for evaluation. The first ten include the following: Navigation, Searching, Design, Guidance, Error management (recovery), Presentation, Learnability, User control, Consistency, and Language (p. 717). An additional twelve criteria were highlighted by the authors as ones that have rarely been considered but that should be part of digital library user interface evaluation. These include the following: Feedback, Ease of use, Match between system and the real world, Customization, User support, User workload, Interaction, Compatibility, Visibility of system status, User experience, Flexibility, and Accessibility (p. 717). Hariri and Norouzi (2011) also point to a strength of their particular framework of evaluation criteria:

“Another notable value of the article is its relative comprehensiveness. In other words, because 22 evaluation criteria identified here are extracted from a variety of components like evaluation, design, interaction, common user interface, usability, standards and so on future researchers can utilize all or part of them consistent with their own studies” (p. 717).

Essentially, the authors provide some 22 criteria for evaluation but leave it to evaluators to decide which specific criteria suit their particular evaluation needs. This kind of approach allows for more flexibility and customization, and it can be relevant to a wider range of digital libraries.
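As an illustration of how an evaluator might pick and weight a subset of these 22 criteria, the brief Python sketch below computes a weighted interface score from rater scores. The choice of criteria, the weights, and the 1-5 scale are hypothetical decisions a local evaluation team would make; they are not prescribed by Hariri and Norouzi.

# Illustrative sketch: selecting a subset of Hariri and Norouzi's (2011) interface
# criteria and combining rater scores into a weighted average. The criterion names
# come from the article; the weights, scores, and 1-5 scale are hypothetical.

selected_criteria = {          # weight for each chosen criterion (sums to 1.0 here)
    "Navigation": 0.30,
    "Searching": 0.30,
    "Error management": 0.20,
    "Learnability": 0.20,
}

rater_scores = {               # mean rating on a 1-5 scale for each criterion
    "Navigation": 4.2,
    "Searching": 3.6,
    "Error management": 2.9,
    "Learnability": 4.0,
}

overall = sum(weight * rater_scores[criterion]
              for criterion, weight in selected_criteria.items())

print(f"Weighted interface score: {overall:.2f} / 5")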

Kelly (2014) also discusses evaluation and assessment of digital libraries. She suggests that there is growing interest in justifying the creation of digital collections and in improving the user experience. With that comes a variety of methods for evaluating and assessing digital libraries. Methods of assessment mentioned in her article include the following: Usability and user studies, Web analytics, Altmetrics, Reuse, and Cost benefit (pp. 386-396). Each of these methods assesses a different aspect of digital libraries. Kelly (2014), however, advocates a “holistic approach to DL assessment” (p. 397), in which a combination of the previously mentioned methods is used to evaluate digital libraries. Kelly (2014) states:

“Holistic evaluations of DLs are the most complete method of DL assessment and should be viewed by DL stakeholders as a best practice. The possible combinations of DL assessment methods and tools are vast, so the literature in this area is important for institutions to determine which types of evaluations can be combined to achieve the greatest overall picture of a DL’s success” (p. 399).
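As a rough illustration of what such a holistic assessment might collect, the short Python sketch below keeps results from several of the methods Kelly names side by side rather than reducing them to a single score. All of the field names and figures are hypothetical.

# Illustrative sketch of a "holistic" assessment summary in the spirit of Kelly
# (2014): results from several assessment methods are kept side by side rather
# than collapsed into one number. All figures and field names are hypothetical.

holistic_report = {
    "usability_study": {"tasks_completed_pct": 78, "avg_task_time_s": 94},
    "web_analytics":   {"monthly_sessions": 12500, "bounce_rate_pct": 41},
    "altmetrics":      {"social_media_mentions": 37, "wikipedia_citations": 4},
    "reuse":           {"items_reused_in_publications": 12},
    "cost_benefit":    {"cost_per_digitized_item_usd": 6.40, "uses_per_item": 9.3},
}

for method, metrics in holistic_report.items():
    formatted = ", ".join(f"{name} = {value}" for name, value in metrics.items())
    print(f"{method}: {formatted}")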

This combination of methods is reminiscent of Hariri and Norouzi’s (2011) selection of criteria. Both advocate the use of multiple methods or criteria in digital library evaluation, but both leave it to the evaluator to decide which combination is applicable and appropriate for a particular digital library.

Saracevic (2000) likewise discusses the need for digital library evaluation. He begins by questioning the purpose of evaluation in order to understand which kind of approach would be best. He states, “We must fully recognize the appropriateness of different approaches for different evaluation goals and audiences” (Saracevic, 2000, p. 358). Some specific approaches he mentions include the ethnographic approach, the sociological approach, the systems approach, and the economic approach (p. 358). Saracevic (2000), though, is quick to clarify that “every approach has strengths and weaknesses, there is no one ‘best’ approach. It is naïve to argue for a predominance of any given approach” (p. 358). What is important is that an evaluation identifies what will be evaluated, whether effectiveness, efficiency, or some combination of the two. In other words, evaluation criteria for digital libraries can take many forms, but they must be established before evaluation can actually succeed.

Saracevic (2000) also suggests the following five elements as requirements for evaluation: Construct for evaluation; Context of evaluation; Criteria reflecting performance as related to selected objectives; Measures reflecting selected criteria to record performance; and Methodology for doing evaluation (p. 359). However, these elements remain general; for a set of evaluation criteria to be determined, the specifics of the particular digital library still need to be established.

Buchanan and Salako (2009), in discussing evaluation criteria for digital libraries, address the importance of including usefulness as part of the criteria. The authors argue that usability, a common aspect of criteria, is not the same as usefulness: a system may be usable, but that does not make it useful to users by default. Being useful is just as important as being usable. Buchanan and Salako (2009) claim, “The distinction is akin to one of form vs function” (p. 639). In other words, usability describes what the service is, while usefulness describes how the service is used. The authors proceed to break down each attribute into supplementary measures. Usability includes the following: Effectiveness, Efficiency, Aesthetic appearance, Navigation, Terminology, and Learnability (pp. 639-641). Usefulness, on the other hand, includes the following: Relevance, Reliability, and Currency (pp. 641-642). Buchanan and Salako (2009) insist that these attributes can overlap while remaining distinct.

Seeking User Feedback

One way to determine whether digital library services are successful or not is to seek feedback from users. Fry and Rich (2011) attempt to do this in their article, “Usability testing for e-resource discovery: How students find and choose e-resources using library web sites.” In it they discuss a usability study conducted at Bowling Green State University for their library website. During this study, the goal of library staff was:

“to design and conduct a usability study to discover if the library web site was doing an effective job at presenting and providing access to electronic resources. The goal was to learn how the library's users discover electronic indexes and databases and use its ERM pages. The study was also designed to reveal if users were aware of the library's course and subject guides (which offer alternate subject access to e-resources), and if they used the library's electronic resources to help them with citation” (Fry & Rich, 2011, p. 386).

Their study concluded that students frequently used the navigation bar; however, they were confused by the layout of links. They looked for keywords like “books” and “articles” to help them determine where particular links led. As Fry and Rich explain, links with “branded terms” (WorldCat, RefWorks, ILLiad, etc.) were often confusing to students. An exception was EBSCO, which is often introduced to students through class tutorials or library instruction. It also appeared that students did not pay much attention to material in the middle of webpages.

Fry and Rich (2011) also explain that the study found students had a general understanding of using the library website to find articles and books; however, they had more difficulty when looking for a particular article or an unfamiliar database. Students also did not always associate searching for databases with searching for books and journals; knowledge of databases may need to be a primary focus of digital library instruction in order to improve this. Students were also asked about the meaningfulness and descriptiveness of link names. The results suggested that the descriptiveness of link names varied and that a link’s location was more likely than a labeling change to prompt students to use it. Lastly, citation tools were used least frequently, either because students did not know how to use them or because they cited by other means.

Essentially, Fry and Rich (2011) concluded the following: Students stick with what they know; Students do what their professors tell them to do; Students generally understand the term “database”; and Subject lists are for librarians (pp. 397-399). In light of these results, BGSU Libraries did pursue some redesign of its library home page; however, Fry and Rich noted that “Well-designed pages are not enough for student access and use. Perhaps more important is these pages' promotion of certain types of information: namely, database brand names and value-added information about them like coverage dates and descriptions” (p. 399). In other words, Fry and Rich promote the combination of their first two tenets: users stick to what they know, and they do what they are told to do. Some form of instruction needs to be offered to familiarize users with different services; once users are familiar with them, they will be more likely to use them in the future.

Xie (2008) also seeks feedback from users as a form of digital library evaluation. Her method differs, though, in that she also derives the evaluation criteria from the user’s perspective, which ensures a user-focused digital library. Xie (2008) describes her method in the following way:

“In order to incorporate users’ perspectives into the development of digital libraries we need to integrate users’ perceived importance of DL evaluation criteria, their use of digital libraries, and their evaluation of digital libraries as well as their preferences, experience, and knowledge structures. Since users’ DL evaluation is co-determined by users’ perceived important DL evaluation criteria, their actual use of digital libraries, and their preferences, experience and knowledge structures, just focusing on one aspect cannot portray a complete picture of user evaluation of digital libraries” (p. 1371).

In other words, Xie places the user first and foremost when evaluating and creating digital libraries. It is an all-encompassing strategy that also ensures a multidimensional evaluation. If librarians want their digital library services to be user-focused, then they should evaluate them from that standpoint.

In comparing the evaluation criteria from users, researchers, and professionals, Xie (2008) finds that there are similarities as well as differences. Similar criteria include user satisfaction, interface usability, system performance, service quality, and collection quality (p. 1371). On the other hand, users tended not to care about preservation, cost, treatment, social impact, and similar aspects (p. 1371). Essentially, users seem more concerned with the usefulness of digital libraries from their personal perspective. However, Xie does point to a dilemma with this user-determined approach. As mentioned previously, there is no “one-size-fits-all” when it comes to digital libraries, and this remains true when various user perspectives are included in the evaluation criteria. Each type of user has different needs and preferences, and it is not realistic to expect a digital library to meet every single one of them. Although a user-centered approach to evaluation is desirable, it also seems foolish to completely ignore the perspectives of researchers and professionals. As Xie suggests, it would be prudent to integrate perspectives from each group in order to develop good digital libraries.
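A simple way to picture this comparison of perspectives is as overlapping sets of criteria, as in the short Python sketch below. The grouping shown is a simplified, hypothetical reading of Xie’s findings rather than a reproduction of her data.

# Illustrative sketch: comparing criteria valued by different stakeholder groups,
# loosely following Xie's (2008) comparison. The group membership shown here is a
# simplified, hypothetical reading of her findings, not her actual data.

user_criteria = {"user satisfaction", "interface usability", "system performance",
                 "service quality", "collection quality"}
expert_criteria = user_criteria | {"preservation", "cost", "treatment", "social impact"}

shared = user_criteria & expert_criteria          # criteria both groups emphasize
expert_only = expert_criteria - user_criteria     # criteria users tended to overlook

print("Shared criteria:", ", ".join(sorted(shared)))
print("Expert-only criteria:", ", ".join(sorted(expert_only)))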

Conclusion

As digital libraries become more and more popular, evaluation criteria become even more important. Is picking and choosing criteria acceptable, or should there be a rigid set that must be met to be considered a “good” digital library? There should be some flexibility regarding which materials or services each digital library offers. After all, digital libraries have a range of materials, purposes, and users, and forcing a “one-size-fits-all” mentality would not be beneficial. Libraries should also be valued for their uniqueness and for the different kinds of information they provide. However, it is possible to establish a set of qualities for “good” digital libraries with regard to meeting the needs of their patrons. After evaluating the above-mentioned literature, some qualities of a “good” digital library, regardless of its size, can be determined. A “good” digital library:

- Seeks feedback from its users. This includes adapting evaluation criteria from users’ perspectives.

- Designs library webpages, labels links, and so on with the user in mind. User-friendliness is key.

- Promotes the names of its services for added familiarity. This is important for providing a better user experience, since users are most likely to use what they are familiar with.

- Innovates to provide usable and useful services to its patrons. Usability does not guarantee usefulness.

Ultimately, a “good” digital library is not about having particular materials, specific software, and so on. A “good” digital library successfully provides the services its users ask of it, and it provides them in a way that puts usability first and foremost. Without happy and satisfied patrons we do not have “good,” successful digital libraries. In conclusion, the discussion of these issues and the evaluation criteria suggested here will help library professionals understand what it means to be a “good” digital library, as well as guide them in developing one.

References

Buchanan, S., & Salako, A. (2009). Evaluating the usability and usefulness of a digital library. Library Review, 58(9), 638-651.

Fry, A., & Rich, L. (2011). Usability testing for e-resource discovery: How students find and choose e-resources using library web sites. Journal of Academic Librarianship, 37(5), 386-401.

Gonçalves, M. A., Moreira, B. L., Fox, E. A., & Watson, L. T. (2007). “What is a good digital library?” – A quality model for digital libraries. Information Processing & Management, 43, 1416-1437. doi:10.1016/j.ipm.2006.11.010

Hariri, N., & Norouzi, Y. (2011). Determining evaluation criteria for digital libraries' user interface: A review. Electronic Library, 29(5), 698-722.

Kelly, E. J. (2014). Assessment of digitized library and archives materials: A literature review. Journal of Web Librarianship, 8(4), 384-403.

Saracevic, T. (2000). Digital library evaluation: Toward evolution of concepts. Library Trends, 49(2), 350-369. Special issue on Evaluation of Digital Libraries.

Xie, H. I. (2008). Users’ evaluation of digital libraries (DLs): Their uses, their criteria, and their assessment. Information Processing & Management, 44(3), 1346-1373.