Examination Number 48
Question: A – Planning for the Future…
For your choice of setting (digital media lab at the UL MRC) and function (user support), develop a plan, with the components listed below, to collect and analyze data that will inform you and other members of your organization about its strengths and weaknesses and opportunities for growth or improvement. Such a plan will typically include:
- Objectives
- The specific program or programs that merit evaluation attention
- Measures of performance addressing success or breakdown
- Data collection methods
- Analytical approaches
Include examples of measures, data collection strategies, and analysis approaches. Include justification for your selection among options.
--
The House Undergraduate Library contains a nascent digital media lab within its Media Resource Center (MRC). The lab houses a number of computers and specialized hardware and software suited to video production, digital darkroom work, and web site design. The lab is still maturing, and so are the support resources available to assist end-users with their tasks in the lab. Usability testing at this stage in the lab's development can produce an end design that closely matches user needs and can limit the potential for a costly redesign at a later date.
I plan to collect and analyze data that will inform me and my team at the MRC about the strengths and weaknesses of the current user support resources in the digital media lab, and about opportunities to grow and improve those support tools. Such knowledge will help us gauge not only where the system stands today but also where it needs to grow and how it should be developed in the future.
My testing philosophy is to gauge not only current system metrics but also how future development should progress. Since the MRC digital media lab is a developing system, such an iterative process will aid its development and should limit the need for costly post-production redesigns. It will also lay the groundwork for future studies of a similar nature to build on the data collected here.
There are two parts to the MRC digital media lab training system: the tutorial materials themselves, which exist as HTML pages, videos, and PDFs; and the digital media lab web site, which gathers these training materials in one place for end-users. Since the system of support tools is accessed through the MRC's web site[1], I think it important to evaluate not only the training resources themselves but also the web site in which they reside.
To make such a large problem tractable, I will break the project into smaller pieces: first evaluating the individual training materials on their own to establish their usability, and then testing the web site itself, specifically how well it helps people find the training materials they need. In each area of the project, the same methodology can be applied to gather user data for system improvement.
To improve the whole of the system, I want to:
- Establish what tools already exist for end-users.
- Learn where the need for end-user support is greatest, in order to prioritize which areas of the media lab resources to improve first.
To then improve each part of the system in turn, I want to:
- Evaluate the current resource to identify possible strengths and problem areas and to develop benchmarks to compare future system improvements against.
- Determine what level of improvement (if any) a particular resource needs, based on the initially collected data, and focus work on the areas of that tool where it is needed.
- If necessary, design new elements for user aid based on previously collected data. These elements could be integrated into existing tools or could be wholly new tools that address a previously unaddressed problem.
- Test new and revised resources for usability and, where applicable, for improvement in performance metrics.
After each specific area of the system has been tested and rolled out, attention can turn to how the system works as a whole within its web-based environment. The web site can then be evaluated and revised according to the same criteria as the individual resource tools.
To collect and analyze data, I plan to use Jeffrey Rubin's six basic elements of usability testing, from his 1994 Handbook of Usability Testing, as a road map for my own usability tests. These elements are:
(1) The development of a clear problem statement or test objective.
(2) Use of a representative sample of end-users, who may or may not be chosen at random.
(3) A representation of the actual work environment.
(4) Observation of end-users who either use or review a representation of the product.
(5) Collection of qualitative and quantitative performance and preference measures.
(6) Recommendation of improvements to the design of the product.
Within these six elements, I plan to follow a testing philosophy known in usability as General Theory. General Theory is one of five perspectives on usability design and is seen as the traditional scientific approach to usability.[2] The aim of this approach is to "accumulate pieces of knowledge about human interaction with computers." While it is a tested and proven philosophy of usability testing, Löwgren writes that the General Theory perspective has proved to be of limited value to HCI practitioners on its own, which is why I plan to use it in conjunction with Rubin's more practice-oriented framework.
General Theory has been widely applied as cognitive engineering, the application of cognitive psychology to interface design.[3] It draws on knowledge and techniques from cognitive psychology to provide the basis for principle-driven design.[4] In practice, General Theory experiments tend to be short and to take place in laboratory settings, and the independent variables are of three kinds: user, task, and system.[5] The dependent variable is always the user's reaction, which can be measured in a number of ways, including time to completion, error rate, and satisfaction.
Since the media lab resource system and its various elements must be highly usable, I think this theory is a good match for testing and improving the system, as testing under it can be done both quickly and thoroughly. The metrics I plan to focus on are those stated above: user time to task completion, user error rate per task, and user satisfaction with the system after use. These metrics will be applied to both the current system and the revised system, establishing the degree of improvement between versions and gauging the effectiveness of the redesign.
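As a minimal sketch of how these three metrics might be tabulated and compared across the two versions of the system, the Python snippet below assumes each test session has been reduced to a simple record of completion time, error count, and a satisfaction rating; the field names and sample values are hypothetical assumptions for illustration, not actual lab data:

    from statistics import mean

    # Hypothetical per-session records; field names and values are
    # illustrative assumptions, not the lab's actual data.
    sessions = [
        {"version": "current", "seconds": 412, "errors": 5, "satisfaction": 2},
        {"version": "current", "seconds": 388, "errors": 3, "satisfaction": 3},
        {"version": "revised", "seconds": 251, "errors": 1, "satisfaction": 4},
        {"version": "revised", "seconds": 274, "errors": 2, "satisfaction": 4},
    ]

    def summarize(version):
        # Average the three dependent measures for one system version.
        group = [s for s in sessions if s["version"] == version]
        return {key: mean(s[key] for s in group)
                for key in ("seconds", "errors", "satisfaction")}

    for version in ("current", "revised"):
        print(version, summarize(version))

Comparing the two summaries side by side gives the degree of improvement (or regression) on each metric between the current and revised versions.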
Since the computer hardware and software setup in the digital media lab is specialized, and all the equipment is already in a laboratory, testing will be conducted in the digital media lab itself. This allows me to observe actual end-users in their native operating environment (Rubin's third element) without having to recreate a similar environment in an isolated lab elsewhere, which would be both time-consuming and logistically difficult.
Data collection throughout the entire process will be handled uniformly. I will collect two primary kinds of data: user questionnaires that gauge attitudes toward the system, and audio/video recordings of end-users interacting with the system while attempting assigned tasks.
The nature of the tests allows for a small test population. According to Jakob Nielsen[6], effective usability studies of this nature can be completed successfully with as few as five participants. Test participants will be individuals who have used the lab within the past three months for any reason. Since user information (name, email, PID) is already collected while the user is in the lab, contact can be made via email, which is inexpensive, fast, and easy.
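As a sketch of how that recruitment pool might be drawn, assuming the sign-in data can be exported as a CSV with name, email, PID, and visit-date columns (the file name and column names here are hypothetical):

    import csv
    import random
    from datetime import date, timedelta

    CUTOFF = date.today() - timedelta(days=90)  # "used the lab within three months"

    # Hypothetical export of the lab's sign-in log; the file and column
    # names are assumptions for illustration only.
    with open("lab_signins.csv", newline="") as f:
        rows = list(csv.DictReader(f))  # expects name, email, pid, visit_date

    # Keep one entry per email address among users seen since the cutoff.
    recent = {r["email"]: r for r in rows
              if date.fromisoformat(r["visit_date"]) >= CUTOFF}

    # Per Nielsen, about five participants suffice for a test of this kind.
    invitees = random.sample(list(recent.values()), k=min(5, len(recent)))
    for person in invitees:
        print(person["name"], person["email"])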
Two questionnaires will be used at two points in the evaluation of any part of the system: a pre-test and a post-test questionnaire, administered before and after the end-user is tested on both the old and the revised system piece. The pre-test questionnaire will gather demographic information and measure end-user aptitude and attitude toward the system. The post-test questionnaire will measure attitude toward the system again and allow for some qualitative responses. Together, these data will let me define how interacting with the system affects an end-user's attitude and locate areas of the system with consistent weaknesses or strengths.
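To illustrate how the attitude portions of the two questionnaires could be compared, the sketch below scores hypothetical five-point Likert items and reports each participant's pre-to-post shift; the item names and responses are invented for illustration, not drawn from an actual instrument:

    from statistics import mean

    # Hypothetical five-point Likert responses (1 = strongly negative,
    # 5 = strongly positive); item names and values are illustrative only.
    pre = {
        "p1": {"confidence": 2, "ease_of_use": 3, "usefulness": 3},
        "p2": {"confidence": 3, "ease_of_use": 2, "usefulness": 4},
    }
    post = {
        "p1": {"confidence": 4, "ease_of_use": 4, "usefulness": 4},
        "p2": {"confidence": 4, "ease_of_use": 3, "usefulness": 5},
    }

    def attitude(answers):
        # Collapse one participant's Likert items into a mean attitude score.
        return mean(answers.values())

    for pid in pre:
        shift = attitude(post[pid]) - attitude(pre[pid])
        print(f"{pid}: attitude shift of {shift:+.2f} after using the system")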
To augment the questionnaire data, audio/video recording of the testing sessions will be used in conjunction with think-aloud methods, in which participants verbally express their thoughts and actions. This captures the linear thought process of the person being tested and can be used to locate where problems arise and lead to system failures. Video will augment the audio by tracking a participant's movement around the equipment in the lab.
The recordings will be shown to each participant after he or she has completed the tasks and the post-test questionnaire, so that participants can review their past actions and further explain thoughts and feelings they had at various points during testing. I will then transcribe the recordings and the participant's commentary into summary transcripts that document the process in detail.
Together, the questionnaire data and the transcripts will be used to identify system strengths and weaknesses and to locate precisely where breakdowns occur. Each user's data can then be compared across the group to find common trends and to establish which areas of the system are common weaknesses, so that improvement efforts can focus there.
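As a sketch of that group-level comparison, the snippet below tallies where breakdowns were coded in each participant's transcript so that the most common weak spots surface first; the participant IDs and step labels are hypothetical coding labels, not an existing scheme:

    from collections import Counter

    # Hypothetical breakdown codes drawn from each participant's transcript;
    # the step labels are invented for illustration.
    breakdowns = {
        "p1": ["find_tutorial", "export_video"],
        "p2": ["export_video"],
        "p3": ["find_tutorial", "export_video", "scan_photo"],
        "p4": [],
        "p5": ["find_tutorial"],
    }

    # Count how many participants hit each breakdown at least once.
    counts = Counter(step for steps in breakdowns.values() for step in set(steps))

    for step, n in counts.most_common():
        print(f"{step}: {n} of {len(breakdowns)} participants ({n / len(breakdowns):.0%})")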
This entire process can then be repeated, step for step, with the redesigned elements of the digital media lab web site and its tutorials, and the data collected can be used to establish net gains and improvements in various areas of the system.
Knowing how users performed with both the old and the new systems will show whether the redesign succeeded and whether future redesigns should follow the same patterns.
The end result of all this analysis will be a greater understanding of the demands currently placed on the system and, one step further, a revised system that better meets those demands. If such a process were repeated over time as the system matures, the cumulative data could reveal long-term trends in use and begin to provide a solid foundation of knowledge for planning the system's future.
Once data collection has built up a use history, it becomes possible to predict future situations and plan for them. It is in this sense that an organization, in this case the House Undergraduate Library's Media Resource Center, can make real use of information systems to support its own more effective operation and its future growth.
[1]
[2] Löwgren, J. (1995). IDA Technical Report.
[3] Lansdale, M., & Ormerod, T. (1994). Understanding Interfaces, A Handbook of Human-Computer Dialogue. London, Academic Press.
[4] Woods, D., & Roth, E. (1988). Cognitive Engineering: Human Problem Solving with Tools. Human Factors, 30(4), 415-430.
[5] Löwgren, p. 5.
[6] Nielsen, Jakob. (March 19, 2000). Why You Only Need to Test With 5 Users. Retrieved October 16, 2003 from the World Wide Web: