OKI Application Development:

The Assignment and Assessment Manager, an OKI, Open Source, Learning Support Tool

A Proposal to the

Andrew W. Mellon Foundation

Stanford University Libraries

8 August 2002

OKI Application Development – May 2002

OKI APPLICATION DEVELOPMENT

A PROPOSAL TO THE

ANDREW W. MELLON FOUNDATION

Stanford University Libraries

8 August 2002

PROJECT TITLE: THE ASSIGNMENT AND ASSESSMENT MANAGER,

AN OPEN SOURCE, OKI, LEARNING SUPPORT TOOL

PRINCIPAL INVESTIGATOR: Michael Keller, Stanford University Librarian and Director of Academic Information Resources

CONTACT INFORMATION: Stanford University Libraries, 557 Escondido Mall, Room 101, Stanford, CA 94305-6004

Email Address:

Phone (office): 650.723.5553

Project Liaison: Lois Brooks, Director of Academic Computing

Technical Liaison: Scott Stocker, Senior Courseware Architect


OKI APPLICATION DEVELOPMENT:

THE ASSIGNMENT AND ASSESSMENT MANAGER, AN OKI,

OPEN SOURCE, LEARNING SUPPORT TOOL

A PROPOSAL TO THE ANDREW W. MELLON FOUNDATION

ABSTRACT

Stanford University Libraries’ Academic Computing department proposes the development of an Assignment and Assessment Manager (AAM), an Open Source learning support tool that is OKI-compliant, extensible, and modular. AAM is based on the basic assessment module of Stanford’s CourseWork course management system, developed by Academic Computing and deployed campus-wide at Stanford during the past year.

AAM is being designed to support extensions for online assessments that are discipline-specific and teaching method-specific in order to realize methods requested by faculty and found effective in research. AAM extensions for large-lecture science courses and language instruction will be delivered with the AAM tool. In addition, AAM supports “chunked” quizzes and questions that can be embedded into online content or activities in order to better support formative assessment.

The system architecture of AAM will be modular and allow the tool to operate as a standalone service or alongside other OKI services. It will implement existing OKI Common Service APIs and utilize the IMS Question and Test Interoperability metadata specification. This version will also greatly inform the development of the OKI Educational Service APIs, especially as they relate to assessment.
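To make the interoperability goal concrete, a minimal multiple-choice item expressed in the IMS QTI 1.2 XML binding might look like the following sketch; the identifier, title, question text, and choices are invented for illustration and are not taken from AAM or CourseWork:

```xml
<questestinterop>
  <item ident="AAM-demo-001" title="Illustrative multiple-choice item">
    <presentation>
      <material>
        <mattext>Which force keeps planets in orbit around the sun?</mattext>
      </material>
      <response_lid ident="MC1" rcardinality="Single">
        <render_choice>
          <response_label ident="A">
            <material><mattext>Gravity</mattext></material>
          </response_label>
          <response_label ident="B">
            <material><mattext>Magnetism</mattext></material>
          </response_label>
        </render_choice>
      </response_lid>
    </presentation>
  </item>
</questestinterop>
```

An AAM that reads and writes items in this interchange format could exchange question pools with other QTI-aware tools regardless of their internal storage formats.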

RATIONALE

AAM is an important teaching and learning tool that provides a framework for extending and managing online course assessments, including tests, quizzes, problem sets, papers, and short informal surveys. It will provide functionality (much of it already tested in a Stanford-developed prototype course management system (CMS) module) that is not available in current CMSs but has been requested by faculty and instructional designers.

AAM and the Assessment Tool Marketplace

A survey of commercial and academic assessment products reveals scores of stand-alone[1] and CMS-based tools[2] (Looms). Many of these assessment tools were developed by early adopters who made their systems available commercially or as shareware. Few of the stand-alone tools can be easily integrated into full CMSs and campus IT infrastructure. While many of these tools have idiosyncratic authoring interfaces that would be problematic for typical instructors,[3] many are adequate for traditional testing. These testing tools share a similar design focus: they simply allow instructors to give a quiz or test online, replacing a paper-based assessment as closely as possible. Their goal is to translate face-to-face testing activities to a new medium in a manner that carries out traditional instructional practices. There is little effort to take advantage of the medium’s affordances to enhance formative assessment (i.e., assessment used immediately to improve a student’s learning or an instructor’s teaching) or summative assessment. Commercial online assessment tools are typically agnostic about learning methodology in order to attract the largest market. Few have special functionality to support formative assessment, or interfaces that allow researchers and developers to extend the tool. They do not take advantage of the capabilities of networked computers to provide effectively organized, near-real-time feedback on misconceptions and deep understanding. Feedback is not efficiently linked (in terms of faculty workflow) to communication tools. The AAM project will change how assessments can be given. This change results from rethinking possibilities for assessment and developing a framework that can: (1) be extended to support different disciplines and their methods of teaching and learning; and (2) provide for ongoing, formative assessments.

1. We propose to develop an OKI and IMS-compliant Assignment and Assessment Manager Tool. AAM’s interaction design will be based on the assessment module developed for the CourseWork CMS[4] at Stanford University’s Academic Computing (see Appendix IV for views of the assessment module). This module implemented basic testing, quizzing, and assignment management functionality. AAM’s functional requirements will be based on the original CourseWork module, but extended to implement many functions listed in “Desired Features of Assessment Tools,” a needs specification that was prepared at the OKI Basic Tool Definition Workshop held in fall 2001 (see Appendix V). Rather than attempting to replicate the full set of features identified in the desired features list, AAM will provide a platform for developers to create extensions that provide these features. We will finalize the specifications for AAM functionality during an initial phase of the project (November 2002 - January 2003) after conferring with other developers and academic institutions, including the Educational Testing Service. We do, however, plan for AAM to be a working tool that can replace the current CourseWork assessment module and will include the two extensions as specified in the proposal.

AAM will be a working online assessment tool that supports many current online assessment practices, helping faculty as they construct, distribute, and evaluate web-based formative and summative assessments. It will be designed with wizard interfaces that structure the tasks of creating assignments, submitting documents, and managing quizzes, so that the tool supports both novice users who have little web experience and expert users who want full flexibility in developing assignments. It will be a test-bed for developing the OKI assessment Educational Service APIs and will support extensions for different types of formative and summative assessments.

2. The AAM project will develop exemplary, discipline-specific and instructional-method-specific assessment extensions to the core AAM tool. Faculty from different disciplines have different needs for assessment tools that are not addressed in currently available systems. Discipline specificity will lead to more rapid faculty adoption because there is more perceived value in tools that address faculty’s specific instructional and learning problems.

We propose to produce an OKI and IMS-compliant assessment tool, the AAM, and to complete two AAM extensions, satisfying the instructional needs for each as listed below:

  1. Assessment Tool Extension for Large-Lecture Science Courses. This type of instruction requires formative assessment tools that help instructors know their students’ understanding of the content and help them communicate efficiently with a large number of students about misconceptions and progress in learning. The extension must:
  • provide ongoing, formative feedback to both the student and the instructor on student misconceptions, deep understanding, and factual knowledge;
  • conserve faculty and teaching staff time;
  • link simple, quick, online assessments to lectures and online content to obtain information rapidly about problem areas for students;
  • provide for peer-to-peer interactions;
  • support course continuity when there are frequent changes of instructor, using techniques such as archived question pools;
  • include embedded documentation that demonstrates best practices; and
  • implement near-synchronous versions of Classroom Assessment Techniques (e.g., one-minute papers, the RSQC2 technique) that would request student input during or immediately after lectures or section meetings.

The CourseWork module supported interactions between faculty and course assistants as they worked with sections of students. AAM will support the instructional practices that result from senior instructors monitoring course assistants who are responsible for several sections of students. Large lecture courses have collaborative methods among instructional staff for creating, reviewing, evaluating, and giving feedback on assessments. Most CMSs and assessment tools do not accommodate the assessment logistics of this type of course.

AAM will support special formative functionality that is critical for large courses, in which timely, individualized feedback is often problematic (see the section on formative assessment below).

  2. Assessment Tool Extensions for Language Learning. Language instruction requires continuous formative assessment of students’ oral and written proficiency. AAM language extensions must:
  • support recording of student oral input as responses to assessment items;
  • provide efficient workflow support for instructors, allowing them to quickly record spoken prompts, questions, and texts directly through AAM authoring wizards;
  • include a larger variety of assessment types including flexible cloze exercises;
  • support linguistic metadata tagging of questions and search of question pools;
  • allow immediate correction after feedback; and
  • include embedded documentation that demonstrates best practices.

Stanford has developed an extension for the CourseWork assessment module that allows students to record their responses as audio files that are uploaded to a server and then accessed by instructors for scoring. This basic capability has demonstrated the utility and desirability of this type of extension. It is now in use in seven language programs at Stanford, and many other institutions have requested it after conference presentations. The AAM project will expand this basic capability and support the functionality requested by faculty, as listed above.

The Stanford Language Center, which initiated use of this functionality, has considerable experience in conducting large-scale online assessment. Since 1997, Stanford has conducted the majority of its foreign language placement testing online using testing instruments developed by the Stanford Language Center. Research has shown that these tests are reliable and valid and create efficiency in instruction (Rivera, Kamil, & Bernhardt). The Stanford Language Center has leveraged the utility and reliability of CourseWork’s integrated audio capture software in administering online versions of the Center for Applied Linguistics’ Simulated Oral Proficiency Interview (SOPI) to students in their final quarter of language instruction (Dozer and Bernhardt). The AAM extensions will allow for even more comprehensive assessment, particularly of difficult-to-test productive language performances, both written and spoken.

3. AAM will significantly enhance online, formative assessment methods.

Recent Research on Formative Assessment

Much recent study of assessment has focused on its integration into the learning process. No longer are teaching and assessment viewed as separate practices. Assessment is considered a process to promote and diagnose learning, with an emphasis on generating better questions and learning from errors (Hub and Freed). Assessment has traditionally been carried out through summative testing after the learning of a topic was completed. Recent research, however, shows the importance of formative assessment to provide “feedback that can guide modification and refinement in thinking” during the learning process (Bransford, Brown, & Cocking). It is well known that in the classroom setting regular, ongoing feedback is helpful for learning (Chickering & Gamson; Cross; Kulik & Kulik). AAHE’s Assessment Forum’s Principles of Good Practice assert that assessment “works best when it is ongoing, not episodic” (1992). A mid-term and a final are not considered sufficient for a learning-centered course. Quality undergraduate education includes assessments that involve students in active learning, provide prompt feedback, support collaboration, and promote adequate time on task (Education Commission of the States).

Assessments should “make students’ thinking visible to themselves, their peers and their teacher” (Bransford, Brown, & Cocking). Quality instruction requires “articulation of student knowledge, problem-solving processes, or reasoning in a domain” (Collins, Brown, and Newman). One method to achieve this goal is to have students author narrative explanations as answers, even in domains such as mathematics that normally require numerical computations (Mitchell). Edgington argues that explanation is the very purpose of science and that tasks requiring explanation are commonly used to assess students' understanding (1997). Other methods that reveal conceptual understanding have students generate graphic representations or answer specially designed multiple-choice questions (Kashy, Morrissey, Tsai, & Wolfe; Schaeffer et al.).

Some formative assessments are focused on providing instructors a timely, global picture of student understanding during the course, rather than measures of performance for specific individuals. Cross and Angelo (1988) have compiled examples of classroom assessment techniques (CATs) that inform instructors “what students are learning in the classroom and how well they are learning it.” These include easily adoptable techniques such as one-minute papers, RSQC2 (recall, summarize, question, comment, and connect), and background knowledge probes. Another method, Peer Instruction, intersperses short conceptual questions into lectures in order to reveal common misunderstandings (Fagen, Crouch, and Mazur). Mazur describes the activity as including a short period for individuals to answer a question, followed by a few minutes when students try to convince their neighbors of their individual answers. While this method has been carried out with no technology, Mazur explains the advantages of using computers for documenting results (Mazur).

Assessment Tools and Formative Assessment

Several online assessment tool development projects have focused on formative assessment. The prototype CourseWork assessment module was developed and evaluated at Stanford University during the past five years. Its functionality was based on research that developed special formative assessment methods which encouraged conceptual thinking and collaboration. The assessment module administered weekly problem set assignments that presented a new question type that coupled every multiple-choice question with a student’s free-text rationale explaining his/her response. The module automatically scored the multiple-choice questions and organized corresponding rationales based on the students’ multiple-choice answers. Students were encouraged to discuss the problems and help each other, but they were required to author their own rationales. Students easily understood this protocol for peer interaction and no cheating was noted. This technique allowed instructors to quickly review rationales for the most commonly missed questions and identify widespread misconceptions. The assessment module also provided integrated email forms that simplified communications to students by grouping those who had similar misconceptions, similar scores, or other noted problems to receive common replies (Schaeffer et al.; see Appendix VI). This functionality will be included in the extension proposed for AAM.
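The coupled choice-plus-rationale technique can be sketched in a few lines. The code below is an illustrative model (not the CourseWork source, and in Python only for brevity): each response pairs a multiple-choice selection with the student's free-text rationale; the choice is scored automatically, and rationales are grouped by the answer selected so that an instructor reviewing a commonly missed question sees all the explanations behind each wrong choice together.

```python
from collections import defaultdict

def group_rationales(responses, correct_choice):
    """responses: list of (student, choice, rationale) tuples.
    Returns rationales grouped by chosen answer, plus per-student scores."""
    by_choice = defaultdict(list)
    for student, choice, rationale in responses:
        by_choice[choice].append((student, rationale))
    # Auto-score only the multiple-choice part; rationales stay for review.
    scores = {student: int(choice == correct_choice)
              for student, choice, _ in responses}
    return by_choice, scores

responses = [
    ("amy", "B", "Heavier objects fall faster."),
    ("ben", "A", "Acceleration is independent of mass."),
    ("cho", "B", "Mass adds to the pull of gravity."),
]
groups, scores = group_rationales(responses, correct_choice="A")
# groups["B"] now holds both rationales behind the common wrong answer,
# exposing a shared misconception at a glance.
```

Grouping by chosen answer, rather than listing responses per student, is what makes the instructor's review scale to large courses.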

Another assessment tool development project, the Computer-Assisted Personal Assignment project (CAPA), has an “emphasis on learning vs. grading and ranking” (Thoennessen & Harrison). This project developed techniques to help learners “clarify common misconceptions and have them share their understanding.” Formative assessment techniques include: (1) a question type that has multiple, varied correct answer choices; (2) question templates that generate unique numerical values and labels for personalized problems; and (3) linkage to an asynchronous online discussion system. The positive effect of the system was not attributed to any one of these elements but to an instructional system that combined online and face-to-face elements. Much like the CourseWork assessment module, which used explanations to have students answer at a conceptual, rather than a surface feature level, CAPA was “well suited for conceptual questions because of the tools and templates available which facilitate coding of problems” (Kashy, Thoennessen, Tsai, Davis & Wolfe). CAPA’s method of personalizing each question with unique values, labels, and sets of answers created an environment in which students could not simply copy answers but had to engage in discussion dealing with conceptual issues. In a later version of the system, the CAPA assessment system was loosely coupled with an asynchronous discussion tool to facilitate student discussion.
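The template-personalization idea can be illustrated with a short sketch. This is not CAPA's actual implementation; the function name, the physics template, and the per-student seeding are invented for illustration. The point is that each student receives unique numerical values (deterministically derived from their identity), so answers cannot be copied while the underlying concept remains shared and discussable.

```python
import random

def personalize(template, seed):
    """Generate a unique numerical variant of a templated question.
    The same seed (e.g., a student ID) always yields the same variant."""
    rng = random.Random(seed)      # deterministic per student
    mass = rng.randint(2, 9)       # kg
    accel = rng.randint(2, 9)      # m/s^2
    question = template.format(m=mass, a=accel)
    answer = mass * accel          # F = m * a, computed from the instance
    return question, answer

template = "A {m} kg cart accelerates at {a} m/s^2. What net force acts on it?"
q1, a1 = personalize(template, seed="student-17")
q2, a2 = personalize(template, seed="student-42")
# q1 and q2 pose the same concept with different numbers and answers.
```

Seeding from the student identity means the system need not store each generated variant: it can regenerate the question and its key on demand when scoring.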

Similarly, a project at the Rice School of Engineering, the Engineering Design Tutor (EDT), generates multiple versions of a design problem, manages problem distribution and user responses, and evaluates responses (Cavallaro, Elam, Miller, Terk, Zwaenepoel). The goal of EDT is to provide a wide range of problems to each user, rather than to support collaboration among individuals on similar problems, as in the CAPA and CourseWork projects. EDT demonstrates the fuzzy line between online tutorials and formative assessment activities. Once an assessment tool has an interface that allows other tools to display questions, as is planned for the AAM tool, more complex aggregations of static content, dynamic environments such as simulations, and assessment questions become possible.

AAM’s Focus on Formative Assessment APIs

The focus of the AAM project will be the development of a functional assessment tool with APIs that support many types of extensions. The AAM project will enable researchers and developers to add new types of functionality to the AAM tool. For example, AAM will have APIs to support different response scoring modules. In this manner, experts in techniques such as latent semantic analysis can easily distribute software components based on their research without developing a full assessment tool that requires functionality for authentication, student access control, etc.
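A pluggable scoring interface of this kind might be sketched as follows. The class and function names are hypothetical, chosen for illustration rather than taken from the OKI APIs: AAM would define a small scoring contract, and a research group could register its own scorer (say, one based on latent semantic analysis) without building authentication, access control, or any of the rest of an assessment tool.

```python
class ResponseScorer:
    """Hypothetical contract a scoring extension must satisfy."""
    def score(self, question, response):
        raise NotImplementedError

class ExactMatchScorer(ResponseScorer):
    """Trivial built-in scorer; a research group could swap in its own."""
    def score(self, question, response):
        return 1.0 if response.strip().lower() == question["key"].lower() else 0.0

SCORERS = {}   # registry the host tool consults at scoring time

def register_scorer(name, scorer):
    SCORERS[name] = scorer

register_scorer("exact", ExactMatchScorer())
q = {"text": "Chemical symbol for iron?", "key": "Fe"}
result = SCORERS["exact"].score(q, " fe ")
```

Because the host tool only depends on the `score` contract, a semantic-analysis scorer and an exact-match scorer are interchangeable from AAM's point of view.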

To support formative assessment, the AAM tool will have some unique functionality that allows it to interoperate with different course management and learning tools. To accomplish this, AAM will expose APIs to other tools for the display of questions and for answer submission. The AAM tool will not isolate assessments in standalone tests, the traditional summative model for assessment, separating assessments from learning activities. AAM will allow faculty to “chunk” assessments in segments, as small as a single question, and embed these segments within online content or activities that are managed by other tools. This basic formative assessment functionality can be overlaid on many activities including reading or viewing online content, exploring simulations, engaging in online discussion, reflecting on lab experiments, even checking one’s calendar or announcements. Student responses, however, will be accessed and reviewed by instructors through a central analysis interface. No current CMSs support ongoing, embedded, modular assessments to provide this type of feedback. AAM answers the needs of faculty and instructional designers to know what their students know, and to do so efficiently.
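The chunked-embedding idea can be sketched as a pair of calls. Everything here is illustrative (the function names, the markup, and the in-memory stores are invented, not the AAM design): a host tool fetches a single question to display inside its own page, then posts the answer back so that responses from every embedding context land in one central store for the instructor's analysis view.

```python
QUESTIONS = {"q7": {"text": "What does the simulation predict at t=0?"}}
SUBMISSIONS = []   # stands in for AAM's central response store

def render_question(question_id):
    """Return markup a host tool (reader, simulation, forum) can embed."""
    q = QUESTIONS[question_id]
    return "<div class='aam-item'>{}</div>".format(q["text"])

def submit_answer(question_id, student, answer):
    """Host tool posts the answer back; AAM stores it centrally so the
    instructor reviews all embedded responses in one analysis view."""
    SUBMISSIONS.append({"q": question_id, "student": student, "answer": answer})
    return len(SUBMISSIONS)

# A simulation page embeds question q7 and relays one student's answer.
html = render_question("q7")
n = submit_answer("q7", "dana", "steady state")
```

The key design point is the split of responsibilities: the host tool owns presentation context, while AAM owns question content and the central collection of responses.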