Common Industry Format for Usability Test Reports

  • Draft, 8/9/99 12:30 PM

Preface

Purpose and Objectives

Report Format Description

Title Page

Introduction

Method

Results

References

Preface

Purpose and Objectives

The overall purpose of the Common Industry Format (CIF) for Usability Test Reports is to promote incorporation of usability as part of procurement decision-making for interactive products. It provides a common format for human factors engineers and usability professionals in supplier companies to report the methods and results of usability tests to consumer organizations.

Audience

The CIF is meant to be used by usability professionals within supplier organizations to generate reports that can be used by consumer organizations. The CIF is also meant to be used by consumer organizations to verify that a particular report is CIF-compliant. The Usability Test Report itself is intended for two types of readers:

1) Human factors or other usability professionals in consumer organizations who are evaluating both the technical merit of usability tests and the usability of the products.

2) Other technical professionals and managers who are using the test results to make procurement decisions.

The Method and Results sections are aimed at the first audience. These sections describe the test methodology and results in technical detail suitable for replication, and they also support applying the test data to questions about the product’s expected costs and benefits. Understanding and interpreting these sections fully requires a technical background in human factors or usability engineering. The second audience is directed to the Introduction, which provides summary information for non-usability professionals and managers making procurement decisions. The Introduction may also be of general interest to other computing professionals.

Scope

Trial use of the report format will occur during a Pilot study. During the Pilot, reports will be provided according to agreement between the consumer organization, the supplier of the product, and any third-party test organization where applicable. The report format assumes sound practice (e.g., refs. 9 & 10) has been followed in the design and execution of the test. Summative usability testing is recommended; the format is, however, intended to support clear and thorough reporting of both the methods and the results of any empirical test. Consequently it does not specify criterion values for any testing parameters or measures. The common format covers the minimum information that should be reported; suppliers may choose to include more. Although the format could be extended for wider use with products such as hardware with user interfaces, such products are not included at this time. These issues will likely be addressed as we gain more experience in the Pilot study.

Relationship to existing standards

This document is not formally related to standards-making efforts but has been informed by existing standards such as Annex C of ISO 13407, ISO 9241-11, and ISO 14598-5. It is consistent with major portions of these documents but is more limited in scope.

Using the template

The format should be used as a generalized template. All the sections are reported according to agreement between the consumer organization, the product supplier, and any third-party test organization where applicable. Annex C of this template contains an example that illustrates how the report format can be used.

Elements of the CIF are either ‘Required’ or ‘Recommended’. The latter are noted in brackets; all other elements should be interpreted as ‘Required’.

A glossary is provided in Annex D to define terminology used in the report format description.

Report Format Description

Title Page

This section contains lines for

identifying the report as a Common Industry Format (CIF) document

naming the product and version that was tested

who led the test

when the test was conducted

the date the report was prepared

who prepared the report

contact information (telephone, email and street address) for an individual or individuals who can answer questions about the test, to support validation and replication.

Introduction

Executive Summary

This section provides a high level overview of the test. The intent of this section is to provide information for procurement decision-makers in consumer organizations. These people generally will not read the technical body of this document but are interested in:

the identity and a description of the product

[Recommended] the reason for and nature of the test

a summary of the method(s) of the test including the number of and type of participants and their tasks.

results expressed as mean scores.

If differences between values or products are claimed, the probability that the difference did not occur by chance should be stated.
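
For illustration only, the sketch below (in Python, which the CIF does not prescribe) shows one way such a probability might be computed, here with an independent-samples t-test from the SciPy library; any statistical test appropriate to the data may be used instead. The task times and products are hypothetical.

    # Illustrative sketch, not part of the CIF. Task times (in seconds) for two
    # hypothetical products; SciPy's independent-samples t-test is one possible
    # choice of statistical test.
    from scipy import stats

    product_a_times = [212, 245, 198, 267, 230, 251, 224, 240]
    product_b_times = [301, 278, 334, 290, 310, 322, 285, 298]

    t_statistic, p_value = stats.ttest_ind(product_a_times, product_b_times)
    print(f"t = {t_statistic:.2f}, p = {p_value:.4f}")
    # A small p-value (e.g., below 0.05) supports the claim that the difference
    # between the two products' mean task times did not occur by chance.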

Full Product Description

This section identifies the formal product name and release or version. It describes what parts of the product were evaluated. This section must also specify:

[Recommended] a brief description of the environment in which it should be used

the user population for which the product is intended

[Recommended] any groups with special needs

[Recommended] the type of user work that is supported by the product

Test Objectives

This section describes all of the objectives for the test and any areas of specific interest. Possible objectives include testing user performance of work tasks and subjective satisfaction in using the product.

  • [Recommended] If the product component or functionality that was tested is a subset of the total product, the reason for focusing on the subset must be explained here.
  • Describe the functions and components of the product with which the user directly and indirectly interacted in this test.

Method

This is the first key technical section. It must provide sufficient information to allow an independent tester to replicate the procedure used in testing.

Participants

This section describes the users who participated in the test in terms of demographics, professional experience, computing experience and special needs. This description must be sufficiently informative to replicate the study with a similar sample of participants. If there are any known differences between the participant sample and the user population, they should be noted here, e.g., if the participants were paid or compensated. Participants should not be from the same organization as the testing or supplier organization. Great care should be exercised when reporting differences between demographic groups on usability metrics.

A general description should include important facts such as:

The total number of participants tested. A minimum of 8 is recommended.

Segmentation of user groups tested (if more than one user group was tested). Example: novice and expert programmers.

The key characteristics and capabilities expected of the user groups being evaluated.

How participants were selected and whether they had the essential characteristics and capabilities.

[Recommended] Whether the participant sample included representatives of groups with special needs such as: the young, the elderly or those with physical or mental disabilities.

A table specifying the characteristics of the participants tested should include a row in the table for each participant, and a column for each attribute. Characteristics should be chosen to be relevant to the product’s usability; they should allow a consumer to determine how similar the participants were to the consumers’ user population; and they must be complete enough so that an essentially similar group of participants can be recruited. The table below is an example; the attributes that are shown are typical but may not necessarily cover every type of testing situation.

   | Gender | Age | Education | Occupation/role | Professional Experience | Computer Experience | Product Experience
P1 |        |     |           |                 |                         |                     |
P2 |        |     |           |                 |                         |                     |
Pn |        |     |           |                 |                         |                     |

For ‘Gender’, indicate male or female.

For ‘Age’, state the chronological age of the participant, or indicate membership in an age range (e.g. 25-45) or age category (e.g. under 18, over 65) if the exact age is not known.

For ‘Education’, state the number of years of completed formal education.

For ‘Occupation/role’, describe what the user’s role is, in the context of use of the product. Use the Role title if known.

For ‘Professional experience’, give the amount of time the user has been performing in the role.

For ‘Computer experience’, describe relevant background such as how much experience the user has with the platform or operating system, and/or the product domain. This may be more extensive than one column.

For ‘Product experience’ indicate the type and duration of any prior experience with the product or with similar products.

Context of Product Use in the Test

This section describes the tasks, scenarios and conditions under which the tests were performed, the tasks that were part of the evaluation, the platform on which the application was run, and the specific configuration operated by test participants. Any known differences between the evaluated context and the expected context of use should be noted in the corresponding subsection.

Tasks

A thorough description of the tasks that were performed by the participants is critical to the face validity of the test. Describe the task scenarios for testing. Explain why these tasks were selected (e.g. the most frequent tasks, the most troublesome tasks). Describe the source of these tasks (e.g. observation of customers using similar products, product marketing specifications). Also, include any task data given to the participants, and any completion or performance criteria established for each task.

Test Facility

This is a physical description of the test facility.

[Recommended] Describe the setting, and type of space in which the evaluation was conducted (e.g., usability lab, cubicle office, meeting room, home office, home family room, manufacturing floor).

[Recommended] Detail any relevant features which could affect the quality of the results, such as video and audio recording equipment, one-way mirrors, or automatic data collection equipment.

Participant’s Computing Environment

This section should include all the detail required to replicate and validate the test. It should include appropriate configuration detail on the participant’s computer, including hardware model, operating system versions, and any required libraries or settings. If the product uses a web browser, then the browser should be identified.

Display Devices

If the product has a screen-based visual interface, the screen size and monitor resolution must be detailed. If the product has a print-based visual interface, the media size and print resolution must be detailed. If visual interface elements can vary in size, specify the size(s) used in the test. This factor is particularly relevant for fonts.

Audio Devices

[Recommended] If the product has an audio interface, specify relevant settings or values for the audio bits, volume, etc.

Manual Input Devices

[Recommended] If the product requires a manual input device (e.g., keyboard, mouse, joystick), specify the make and model of the devices used in the test.

Test Administrator Tools

[Recommended] Describe any hardware or software used to control the test or to record data, including standardized questionnaires.

Design of the Test

Experimental Design

Describe the logical design of the test. Define independent variables and control variables. Briefly describe the measures for which data were recorded for each set of conditions.

Procedure

This section details the test protocol.

[Recommended] Include the sequence of events from greeting the participants to dismissing them.

[Recommended] Include details concerning non-disclosure agreements, form completion, warm-ups, pre-task training, and debriefing.

[Recommended] Verify that the participants knew and understood their rights as human subjects [1].

Give operational definitions of measures and any presented independent variables or control variables. Describe any time limits on tasks, and any policies and procedures for training, coaching, assistance, interventions or responding to questions.

[Recommended] Specify the steps that the evaluation team followed to execute the test sessions and record data.

[Recommended] Specify how many people interacted with the participants during the test sessions and briefly describe their roles.

Participant General Instructions

This includes all instructions given to the participants (except the actual task instructions, which are given in the Participant Task Instructions section).

[Recommended] Include instructions on how participants are to interact with any other persons present, including how they are to ask for assistance and interact with other participants, if applicable.

Participant Task Instructions

This section should summarize the task instructions. Put the exact task instructions in an appendix.

Usability Metrics

Three categories of usability metrics are relevant to the procurement decision: effectiveness, efficiency and satisfaction. Conceptual descriptions and examples of these metrics are given below.

Effectiveness Metrics

Effectiveness relates the goals of using the product to the accuracy and completeness with which these goals can be achieved. It does not take account of how the goals were achieved, only the extent to which they were achieved; efficiency, in contrast, relates the level of effectiveness achieved to the quantity of resources expended. Common measures of effectiveness include percent task completion, frequency of errors, frequency of assists to the participant from the testers, and frequency of accesses to help or documentation by the participants during the tasks.

Completion Rate

The results must include the percentage of participants who completely and correctly achieve each task goal. If goals can be partially achieved (e.g., by incomplete or sub-optimum results) then it may also be useful to report the average goal achievement, scored on a scale of 0 to 100% based on specified criteria related to the value of a partial result.

Note: The unassisted completion rate (i.e. the rate achieved without intervention from the testers) should be reported as well as the assisted rate (i.e. the rate achieved with tester intervention) where these two metrics differ.
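
For illustration only, the following Python sketch (not prescribed by the CIF) shows how a completion rate and an average goal achievement score might be computed for one task. The participant results and scoring criteria are hypothetical.

    # Illustrative sketch, not part of the CIF. 'completed' records full and
    # correct achievement of the task goal; 'score' is partial goal achievement
    # (0-100%) judged against pre-specified criteria. Values are hypothetical.
    task_results = [
        {"participant": "P1", "completed": True, "score": 100},
        {"participant": "P2", "completed": False, "score": 50},
        {"participant": "P3", "completed": True, "score": 100},
        {"participant": "P4", "completed": False, "score": 30},
    ]

    completion_rate = sum(1 for r in task_results if r["completed"]) / len(task_results)
    mean_goal_achievement = sum(r["score"] for r in task_results) / len(task_results)

    print(f"Completion rate: {completion_rate:.0%}")                   # 50%
    print(f"Average goal achievement: {mean_goal_achievement:.0f}%")   # 70%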

Errors

Errors are instances where test participants did not complete the task successfully, or had to attempt portions of the task more than once. It is recommended that scoring of data include classifying errors according to some taxonomy, such as in [2].
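
For illustration only, the short Python sketch below shows one way observed errors might be tallied by taxonomy category; the category names are hypothetical and are not drawn from [2].

    # Illustrative sketch, not part of the CIF. Error category names are
    # hypothetical examples of a classification scheme.
    from collections import Counter

    observed_errors = [
        ("P1", "wrong menu selection"),
        ("P2", "data entry error"),
        ("P2", "wrong menu selection"),
        ("P3", "missed required field"),
    ]

    error_counts = Counter(category for _participant, category in observed_errors)
    for category, count in error_counts.most_common():
        print(f"{category}: {count}")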

Assists

When participants cannot proceed on a task, they are sometimes given direct procedural help by the test administrator in order to allow the test to proceed. This type of tester intervention is called an assist for the purposes of this report. If it is necessary to provide participants with assists, efficiency and effectiveness metrics must be determined for both unassisted and assisted conditions. For example, if a participant received an assist on Task A, that participant should not be included among those successfully completing the task when calculating the unassisted completion rate for that task. However, if the participant went on to successfully complete the task following the assist, he could be included in the assisted Task A completion rate. When assists are allowed or provided, the number and type of assists must be included as part of the test results.

In some usability tests, participants are instructed to use support tools such as online help or documentation, which are part of the product, when they cannot complete tasks on their own. Accesses to product features which provide information and help are not considered assists for the purposes of this report. It may, however, be desirable to report the frequency of accesses to different product support features, especially if they factor into participants’ ability to use products independently.
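
For illustration only, the Python sketch below computes unassisted and assisted completion rates for a single task in the manner described above for Task A. The participant data are hypothetical.

    # Illustrative sketch, not part of the CIF. 'completed' means the participant
    # ultimately achieved the task goal; 'assists' is the number of assists the
    # test administrator gave that participant. Values are hypothetical.
    task_a_results = [
        {"participant": "P1", "completed": True, "assists": 0},
        {"participant": "P2", "completed": True, "assists": 1},   # counts toward the assisted rate only
        {"participant": "P3", "completed": False, "assists": 2},
        {"participant": "P4", "completed": True, "assists": 0},
    ]

    n = len(task_a_results)
    unassisted_rate = sum(1 for r in task_a_results
                          if r["completed"] and r["assists"] == 0) / n
    assisted_rate = sum(1 for r in task_a_results if r["completed"]) / n
    total_assists = sum(r["assists"] for r in task_a_results)

    print(f"Unassisted completion rate: {unassisted_rate:.0%}")   # 50%
    print(f"Assisted completion rate: {assisted_rate:.0%}")       # 75%
    print(f"Total assists given: {total_assists}")                # 3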

Efficiency Metrics

Efficiency relates the level of effectiveness achieved to the quantity of resources expended. It is generally assessed by the mean time taken to achieve the task (time on task), although efficiency may also relate to other resources (e.g. total cost of usage).

Task time

The results must include the mean time taken to complete each task, together with the range and standard deviation of times across participants. Sometimes a more detailed breakdown is appropriate; for instance, the time that users spent looking for or obtaining help (e.g., including documentation, help system or calls to the help desk). This time should also be included in the total time on task.
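
For illustration only, the Python sketch below summarizes time on task for one task using the mean, sample standard deviation and range; the task times are hypothetical.

    # Illustrative sketch, not part of the CIF. Task times (in seconds) are
    # hypothetical; the standard statistics module computes the summary values.
    import statistics

    task_times = [212, 245, 198, 267, 230, 251, 224, 240]

    mean_time = statistics.mean(task_times)
    std_dev = statistics.stdev(task_times)        # sample standard deviation
    low, high = min(task_times), max(task_times)

    print(f"Mean time on task: {mean_time:.1f} s")
    print(f"Standard deviation: {std_dev:.1f} s")
    print(f"Range: {low}-{high} s")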

Completion Rate / Mean Time-On-Task [Recommended]

The measure Completion Rate / Mean Time-On-Task is the core measure of efficiency. It specifies the percentage of users who were successful per unit of time. As time on task increases, one would expect more users to be successful; a very efficient product yields a high percentage of successful users in a small amount of time. This allows consumers to compare fast but error-prone interfaces (e.g., command lines with wildcards, “delete taxfile.txt”) to slow but easy interfaces (e.g., using a mouse and keyboard to drag each file that contains text to the trash).
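
For illustration only, the Python sketch below applies the Completion Rate / Mean Time-On-Task measure to two hypothetical products, expressing efficiency as the proportion of successful users per minute.

    # Illustrative sketch, not part of the CIF. Completion rates and mean times
    # are hypothetical; efficiency is expressed as successful users per minute.
    def efficiency(completion_rate, mean_time_minutes):
        # Completion Rate / Mean Time-On-Task
        return completion_rate / mean_time_minutes

    fast_but_error_prone = efficiency(completion_rate=0.60, mean_time_minutes=2.0)
    slow_but_easy = efficiency(completion_rate=0.90, mean_time_minutes=6.0)

    print(f"Fast, error-prone interface: {fast_but_error_prone:.2f} successful users per minute")  # 0.30
    print(f"Slow, easy interface: {slow_but_easy:.2f} successful users per minute")                # 0.15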