
Erik Mitchell

Comprehensive Exam Question 1

Time started: 8:12

Time completed: 11:50

1. Information Literacy

It may be argued that, over the years, the growth and evolution of information literacy programs have helped advance information seeking and comprehension among students. Some in the LIS and education fields have championed and contributed to these advancements. Breivik, for example, conceived a more traditional view of information literacy. On the other hand, Kuhlthau’s constructivist approach encourages the creation of new information or meaning as a result of the process.

While these and other information literacy frameworks, such as the Big 6 and ACRL, vary to some degree, each includes an element related to evaluating or judging information as part of its scope. However, discussions of assessment or evaluation of the models themselves seem less prevalent in the literature.

In your response, please discuss methods for evaluating information literacy models in practice, particularly at the college/university level. Also, what barriers, if any, exist to implementing a thorough evaluation of IL programs? How might those issues be addressed or overcome?

  1. Overview

This question examines how information literacy (IL) models are used in practice in higher education. It asks what approaches can be used to evaluate IL programs in the academic environment, what challenges exist for this evaluation, and how those challenges can be addressed. To answer this question, the following response defines the difference between an IL model and an IL program, and examines elements that are consistent across multiple models/programs and elements that are not. It continues by proposing some evaluative methods, discussing issues associated with those methods, and suggesting alternative approaches or solutions.

  2. What is IL and how do you define an IL program?

IL is defined in the literature in a number of ways. In many models, it is discussed as a set of skills and concepts that enable a student to solve an information problem or engage in an information task (ACRL, Big6, UNESCO, Sundin). In other models, IL is discussed as a way of thinking about information interaction (Hughes & Shapiro, Bawden, Bruce). For example, Hughes & Shapiro discuss IL from multiple perspectives, including IT literacy, emerging technology literacy, research literacy, and critical literacy. Likewise, Sundin discussed IL from four perspectives: resource, process, behavioral, and communication. Finally, some models view IL as a way of describing the interaction between individuals, social groups, and information objects (Talja, Tuominen, and Savolainen).

An overarching discussion in these models/programs is a disagreement about what IL characteristics students in higher education already possess. A debate in the literature surrounds the question “Do these students already have sophisticated IL and technology skills?” For example, Lotherington commented on the Canadian EQAO study, suggesting that it ignored a set of technology skills that students had by virtue of growing up with technology. Likewise, Mabrito & Medley assert that the current generation of college students has dramatically different but equally valid information skills driven by their interaction with new types of documents on the web. Conversely, Rowlands et al. assert that possession of specific gaming or internet skills does not immediately translate into research or other technology-based skills. In fact, Rowlands et al. assert that the fascination with information technology (such as Twitter, Facebook, etc.) is not as widely held among the current generation of college students as is claimed.

Given the varied literature on IL models, defining IL or creating a consolidated model is difficult: there are core disagreements about what constitutes IL, the role of the student in IL, and the roles that different types of literacy (digital literacy, research literacy, critical literacy) play. A good example of this last point is the disagreement among the models about the role of IT. Bruce, for example, asserts that technology surrounds every facet of literacy, while Hughes & Shapiro compartmentalize technology as an important but discrete aspect of literacy.

Within this understanding of models, how do we define an IL program? Despite the issues listed above, many of the IL programs in place in higher education gravitate toward common themes. For example, many programs are defined not by a strong commitment to a model but by a specific operational goal, such as providing directed bibliographic instruction or filling a curriculum need. As such, the differences in IL models do not necessarily translate into a wide diversity in the content of IL programs. For example, the ACRL model, which emphasizes themes such as Know, Access, Evaluate, Use, and Ethical/Legal issues, is the most widely adopted model in higher education in the United States. ACRL has published on its website a long list of criteria and guidelines that libraries can use when creating IL programs to define goals, curricula, and course content. The UNESCO model, which is used internationally, holds many of the same themes.

Sundin (2007?) found both variety and homogeneity among IL programs in a review of Scandinavian online IL tutorials, from which he distilled four overarching approaches: Process, Behavioral, Communication, and Resource. In the Process approach, the tutorials emphasized a research process common to both ACRL and UNESCO. In the Behavioral approach, the tutorials discussed how information is used by students. In the Communication approach, the tutorials focused on the interaction between individuals, groups, and documents (as in the socio-technical model). Finally, in the Resource approach, the tutorials focused on specific resource types such as books, journals, and databases. In my own IL teaching and research (Mitchell, 2007), I have used a number of models, including the ACRL model, the Hughes & Shapiro model, and Talja et al.’s concept of socio-technical literacy, to help construct an IL curriculum that included both traditional information problem solving (IPS) skills and engaged students from different instructional perspectives (for example, building on common Internet information skills to teach traditional IL skills).

An IL program, as defined in this response, is a curriculum, set of classes, or set of services designed to address the information literacy needs of the university in which the program is implemented. It is important to define an IL program this broadly because IL programs do not always share consistent goals, support, size, or content. For example, the IL program at Wake Forest University is a credit-bearing, one-hour course taught to students by librarians. WFU offers approximately 10 sections of this course per semester, with both subject-specific (humanities, science, business) and general (information issues in the 21st century) sections. Despite the popularity of this class, it is not required by the university. In contrast, the program at Catawba College is offered as part of another course but is a required part of the curriculum. In stark contrast to both of these examples, many IL programs are merely the set of bibliographic instruction services that libraries offer to interested professors and their classes. A final example of an emerging use of IL instruction is the embedded librarian. Embedded librarians are incorporated into the classroom environment to serve an ongoing instructional role; they join courses for the semester to help students engage in research or conduct specific information-rich projects. To summarize, an IL program can be defined by: 1) its goal (whom the program educates and the scope of the program); 2) its level of support (whether the course is integrated into the university curriculum, optional, or offered only informally); 3) its size (whether the program is offered to all students or only to interested professors, and whether it is taught by a few librarians or many); and 4) its content (what the course focuses on: technology, research, information issues). These themes serve both as evaluative criteria and as complicating factors for program evaluation, as the sketch below illustrates.
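To make these four criteria concrete, the following minimal sketch (in Python; the field names and the example record are my own illustration, not a standard schema) shows how a program could be captured as a structured description for later comparison:

    # A minimal sketch (Python 3.9+). Field names are illustrative, not a
    # standard schema; the record restates the WFU example from the text above.
    from dataclasses import dataclass, field

    @dataclass
    class ILProgram:
        """One institution's IL program, described by the four criteria above."""
        institution: str
        goal: str                # whom the program educates and its scope
        support: str             # e.g. "required", "elective credit", "informal BI"
        sections_per_term: int   # rough measure of size
        content_focus: list[str] = field(default_factory=list)

    # Hypothetical encoding of the Wake Forest University course described above.
    wfu = ILProgram(
        institution="Wake Forest University",
        goal="general and subject-specific IL for undergraduates",
        support="elective, credit-bearing, one hour",
        sections_per_term=10,
        content_focus=["research", "information issues"],
    )

Recording programs in a consistent shape like this is what makes the cross-institutional comparisons discussed later in this response tractable.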

  3. What qualities do IL models/programs share?

Despite the differences in scope and content listed above, IL models and programs do share a number of qualities. First, many programs in higher education share a focus on the research process as defined by ACRL. Correspondingly, IL programs tend to agree on the definition of the research process, the role of information lifecycle concepts, and the value of resource-based instruction (e.g., how to use books, journals, and electronic databases). This means that much of IL instruction is focused on the information problem solving process. Second, there is general agreement that information issues are relevant to IL, even if they are not always included in program content. Even so, some literature criticizes programs for over-emphasizing skill learning while neglecting needed conceptual learning. Third, there is typically agreement on the role of technology in IL, although the survey of models and programs in the literature review on this topic indicates that basing IL instruction on a student-driven skill perspective (e.g., Mabrito & Medley, Talja et al.), building on common skills that students possess through everyday interaction, is not particularly common.

IL programs in higher education tend to be centered in the library. The role of IL in libraries has grown as academic libraries in particular have become less focused on serving as warehouses of information and more focused on serving as service and education locations. It has been stated in the literature that public libraries underwent the change from repository of knowledge to service center many years ago and that academic libraries are just now catching up (?). Some examples of this trend include the integration of the library with campus IT and instructional services, expansion of library support for technology and IT services, and re-alignment of libraries’ administrative roles to strengthen their academic and student-facing roles.

  4. What qualities do IL programs not share?

There are two main areas where IL programs differ: program scale/purpose and program curriculum. First, as mentioned above, IL programs do not always share the same goals or university support. IL programs are often driven by university needs or perspectives, and despite the goals of the librarians, these constraints can shape program content. In some cases, IL programs consist of single-session bibliographic instruction classes, while in others the IL program is a required and central element of the university curriculum. Libraries often have to work with their university to build an IL program and are inevitably constrained by university goals and resources.

Second, within IL programs of similar size and scope, the content of the curriculum is not always consistent. Some programs employ core ACRL-based concepts while others employ less common models (Hughes & Shapiro, Talja et al.). Likewise, not all IL programs focus squarely on IPS issues. Further, very few programs overtly discuss the role of metacognitive tasks (such as learning to learn, managing the research process, and self-monitoring during search). As a personal observation from my own experience teaching IL alongside ten other IL instructors, each of whom follows their own curriculum, I have found a wide range of ideas about what constitutes an appropriate IL curriculum, even when the program as a whole is based on the ACRL model.

Related to this lack of similarity in curriculum are core issues such as differing views of students’ information skills. The literature (Lotherington, Mabrito & Medley, and Rowlands et al.) debates the value of common internet and technology skills in relation to IL. Related to the question of what skills students possess prior to the class is a disagreement about the nature of literacy itself. While many agree that IL is a continuum (one is never wholly illiterate or wholly literate), Talja et al. assert that IL cannot even be thought of as a unified concept. They assert instead that literacy is a series of skills and concepts that should be considered along a specialized-to-generalized continuum, and that students come to an information interaction with a mix of levels for specific skills.

  5. How can you evaluate these models in practice, in particular at the college/university level?

With this background understanding of the role of IL models in IL curricula, and of some of the factors that influence the implementation of IL programs at the college/university level, this section of the response focuses on how models can be evaluated. The question asks how you can evaluate IL programs; the first follow-up question of this response is “what is the goal of the evaluation?” Is the purpose of the evaluation to define a new IL model grounded in how instruction actually occurs, or to identify ‘best practices’ in IL instruction (a common theme in the literature)? Is the purpose to engage the university in a discussion about the role of the IL program, or to justify staffing expenditures? The simple answer is that how you evaluate IL comes down to the purpose of the evaluation. If the goal is an academic investigation, the questions asked need to be suitably generalizable and applicable across multiple institutions. If the goal is to evaluate the success or implementation of a program within a given university, the questions should be focused and designed to provide specific feedback, particularly about the impact of the program. In fact, the literature reviewed offered little on evaluating programs; many of the articles about IL programs were case studies or best-practice reports. This should not be too surprising, as librarians typically have not acquired suitable research skills for this purpose.

Putting these differences aside, I would suggest three main evaluative techniques: 1) descriptive analysis of the program goals, elements, theoretical foundation, and size; 2) analysis of the program content and curricula compared to prevailing models; and 3) evaluative comparison of student outcomes in relation to a specific IL program. Depending on the ultimate goal of the evaluation, the content of each technique will differ. In the following paragraphs, each evaluative approach is discussed in turn.

Descriptive program analysis would be effective in a cross-institutional comparison. Descriptive elements could include the goal of the IL program, the elements of the curriculum (specific skills and concepts taught), the overarching IL model that the program uses, and statistics on the size and scale of the program. This evaluation would be interesting, particularly if the data were kept consistent enough to allow a cross-institutional comparison; I suspect it would demonstrate a lack of a consistent approach to designing and maintaining IL programs across universities. This type of evaluation would also be useful for internal data analysis if the IL program coordinator is looking to quantify the program over time. If this is the goal, statistics on the number of sections, students taught, and types of training/coursework offered could be gathered. A study of this sort would work well as a quantitative, survey-based study.
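As a minimal sketch of how such descriptive survey data could be summarized (the survey rows and field names below are invented for illustration):

    # Sketch only: hypothetical survey rows, one per responding institution.
    from collections import Counter
    from statistics import mean

    survey = [
        {"institution": "A", "model": "ACRL", "sections": 10, "required": False},
        {"institution": "B", "model": "ACRL", "sections": 2, "required": True},
        {"institution": "C", "model": "Hughes & Shapiro", "sections": 4, "required": False},
    ]

    model_counts = Counter(row["model"] for row in survey)   # which models dominate
    avg_sections = mean(row["sections"] for row in survey)   # scale across campuses
    pct_required = 100 * sum(row["required"] for row in survey) / len(survey)

    print(model_counts)
    print(f"{avg_sections:.1f} sections on average; {pct_required:.0f}% required")

The tooling here is trivial; the real work would be defining the survey fields consistently enough that the cross-institutional comparison is meaningful.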

An analytical comparison of program content from multiple institutions would be much more ambitious than the gathering of descriptive data. It would, in essence, drill into the goals and curricula of the programs and seek to identify themes and gaps. This sort of analysis would necessarily be more qualitative in nature and would need to be grounded in a comprehensive understanding of supporting IL models. While a grounded theory approach would prove interesting, it would also run the risk of missing important models that have already been investigated. Interesting questions would include the underlying IL model (if the program uses one), an analysis of course goals and objectives against a selected set of IL models, and an evaluation of course content against those models. Taking on an analysis of curricula could prove analytically and politically difficult, particularly given the wide range of topics that count as IL in higher education.
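A small sketch of the final coding step, assuming curricula have already been qualitatively coded into topic sets (the program names and topic assignments are invented; the five themes are the ACRL themes named earlier in this response):

    # Sketch only: compare coded curricula against the five ACRL themes.
    ACRL_THEMES = {"know", "access", "evaluate", "use", "ethical/legal"}

    coded_curricula = {
        "Program A": {"access", "evaluate", "use"},
        "Program B": {"know", "access", "use", "ethical/legal"},
    }

    for name, topics in coded_curricula.items():
        gaps = ACRL_THEMES - topics  # themes the curriculum never touches
        print(f"{name}: covers {len(topics)}/5 themes; gaps: {sorted(gaps) or 'none'}")

The qualitative coding that produces those topic sets is, of course, the hard and contestable part of the analysis.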

The third type of analysis suggested is an evaluative analysis of student outcomes. Much IL research focuses on instructor-provided feedback as opposed to student-provided feedback. I would suggest two main reasons for this: the difficulty of identifying methods for asking students to think about their experience in an academic or evaluative way, and the difficulty of finding a consistent metric to gauge their responses against. One main benefit of this form of evaluation is that it allows the program to learn more about the student perspective on IL and what students think about the contents of IL programs. Two sources of data for this type of evaluation are student course feedback forms and student learning assessments. If the IL program is structured as a regular university course, the instructors may already have a large body of student feedback via course evaluations. While these evaluations tend to center on student satisfaction with the course and instructor, it is appropriate to have students rank course contents and discuss how the curriculum aligned with their goals. In my own research using this approach, I used pre-course surveys to have students rate their familiarity with IL and IT concepts and post-course surveys to rank their view of the utility of course contents at the end of the semester. One main issue with this approach, of course, is that it is tied heavily to the curriculum and would be difficult to generalize.
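As an illustration of the pre-/post-course comparison described above (the Likert ratings and student ids are hypothetical):

    # Sketch only: hypothetical 1-5 Likert ratings for one IL concept,
    # paired by anonymized student id across pre- and post-course surveys.
    from statistics import mean

    pre = {"s01": 2, "s02": 3, "s03": 1, "s04": 4}
    post = {"s01": 4, "s02": 4, "s03": 3, "s04": 4}

    deltas = [post[s] - pre[s] for s in pre if s in post]  # paired gains only
    print(f"mean gain: {mean(deltas):+.2f} points (n={len(deltas)})")

For a larger sample, a paired significance test (for example, scipy.stats.ttest_rel) would be the natural next step beyond the raw mean gain.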

An alternative to student feedback on curricula would be a form of student learning assessment centered on a general IL model. In my research proposal, I suggest that a self-efficacy-based metric is appropriate because it allows students to combine a cognitive estimate of their ability (“can I do this?”) and an affective estimate of their ability (“do I feel confident about it?”) in a single metric. Self-efficacy has been demonstrated to be a reliable estimate of actual ability (Bandura), and self-efficacy instruments have been developed with the ACRL information literacy model in mind. Using this approach would allow for a generalized analysis of the student perspective across programs that are based on the ACRL model. A related approach, common in education, is to use Bloom’s revised taxonomy (Remember, Understand, Apply, Analyze, Evaluate, Create) to evaluate student learning. By mapping specific IL skills and concepts onto Bloom’s revised taxonomy (Krathwohl et al.), an evaluative instrument could be designed to assess student learning in a given area.
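A sketch of how such an instrument could be organized, assuming each item is written to test one IL skill at one level of Bloom’s revised taxonomy (the items and skills below are invented):

    # Sketch only (Python 3.9+): each item is tagged with the Bloom level
    # it is written to test; items and skills are invented for illustration.
    BLOOM_LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

    instrument = {
        "q1": ("name a citation style", "remember"),
        "q2": ("choose a database for a topic", "apply"),
        "q3": ("judge a source's credibility", "evaluate"),
    }

    def score_by_level(answers: dict[str, bool]) -> dict[str, list[bool]]:
        """Group a student's correct/incorrect answers by Bloom level."""
        by_level: dict[str, list[bool]] = {level: [] for level in BLOOM_LEVELS}
        for item, correct in answers.items():
            _, level = instrument[item]
            by_level[level].append(correct)
        return by_level

    print(score_by_level({"q1": True, "q2": True, "q3": False}))

Scoring by level rather than by total would show whether a program moves students beyond the lower (Remember/Apply) levels toward evaluation and creation.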