Reading: Evaluate a multimedia product

Evaluate a multimedia product

Inside this reading:

Holistic evaluation

Four levels of evaluation

User evaluation checklists

Technical evaluation

Issues of technical completeness

User trials

Informal trials

Formal trials

Post evaluation—marketing

Summary

Holistic evaluation

Many methods can be used to evaluate software products. One common oversight is to have the technical team evaluate the IT aspects meticulously while paying less attention to the user or client viewpoint and to qualitative aspects.

The following evaluation method is an example of a process that covers the basics and focuses on both function and utility. This approach is difficult to complete in one session, yet it gives the developer and the client a much clearer idea of the overall success of the project.

Four levels of evaluation

An efficient and yet comprehensive way to evaluate multimedia is to:

1. Assess the reaction of the user (generally their feelings: good, bad, etc).

2. Assess learning or communication (did the user retain or apply the information?).

3. Assess behaviour change (did the activity have an impact on their day-to-day processes, for instance?).

4. Assess results and return on investment (did the product achieve the broader goals of the project?).

The model is from the field of education, yet the principles apply equally to evaluating the results of multimedia products for other purposes.

Evaluation should not focus solely on collecting user responses or value judgments (comments such as ‘Yes, I like it’ or ‘it’s fairly good’, etc). While important, such responses are only a first step. Effective evaluation should also check what the user obtained from the session. A test needs to be developed and applied to check that the user has retained the key content.

You also need to assess what changes the new system has had in the workplace. Does the new system make it easier? (‘it’ being work, study, shopping, banking, etc). Can the users now get more done with the new system in place?

Finally, the evaluation should include business perspectives. Has the company increased profits or saved money? Can they do more with less staff? Has this changed the way that users learn, perform tasks or undertake processes?
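
As a rough illustration of how results across the four levels might be recorded and collated, here is a minimal sketch in Python. The record fields, sample figures and the decision to note level-four outcomes per project rather than per user are all assumptions made for the example, not part of the evaluation method itself.

    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class EvaluationRecord:
        """One user's results across the four evaluation levels (field names are hypothetical)."""
        reaction: int           # level 1: satisfaction rating, e.g. 1 (poor) to 5 (excellent)
        test_score: float       # level 2: percentage correct on a retention test
        behaviour_change: bool  # level 3: did day-to-day work practice change?
        roi_note: str = ""      # level 4: business results are usually judged per project, noted here if known

    def summarise(records: list[EvaluationRecord]) -> dict:
        """Collate individual results so each level can be reported on separately."""
        return {
            "average_reaction": mean(r.reaction for r in records),
            "average_test_score": mean(r.test_score for r in records),
            "behaviour_change_rate": sum(r.behaviour_change for r in records) / len(records),
        }

    # Hypothetical data from two evaluation sessions
    sample = [
        EvaluationRecord(reaction=4, test_score=80.0, behaviour_change=True),
        EvaluationRecord(reaction=3, test_score=65.0, behaviour_change=False),
    ]
    print(summarise(sample))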

User evaluation checklists

User checklists should elicit general feedback about the prototype. The aim is to get an idea of the user’s experience (again, IT programmers can sometimes miss this point, as they focus too heavily on technical issues).

As an example, a checklist may ask the questions below; a brief tallying sketch appears after the list.

Does the product:

  • Generally meet the user’s needs
  • Provide a professional ‘look and feel’ to gain the user’s attention
  • Provide readable text with correct grammar and spelling, at a suitable level
  • Use media elements (sound, animation, graphics, video) to clarify, explain, and support textual information
  • Employ a colour scheme that makes reading of text easy
  • Use appropriate fonts
  • Provide navigational directions/guidelines to the user
  • Provide a logical progression through the information
  • Provide interactive elements
  • Have a set beginning, middle, and end point so that users can tell how much information the program addresses and where they are within it
  • Provide sufficient branching opportunities for exploring related information in more detail
  • Provide an adequate title screen and/or headings
  • Provide feedback to the user that is immediate
  • Use links or menus to dissect information into manageable chunks
  • Provide a main menu of major program sections
  • Provide useful help files (such as readme.txt)
  • Use non-stereotypical or biased examples and scenarios
  • Motivate the user.
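
Responses to such a checklist are easier to act on if each item is recorded as a simple yes/no answer per reviewer and the answers are tallied. The sketch below shows one possible way to do that in Python; the item names (abbreviated from the list above) and the reviewer responses are hypothetical.

    from collections import Counter

    CHECKLIST_ITEMS = [
        "meets user needs",
        "professional look and feel",
        "readable, correct text",
        "media elements support the text",
        "clear navigation guidance",
    ]

    def item_pass_rates(responses):
        """Proportion of reviewers who answered 'yes' to each checklist item."""
        counts = Counter()
        for response in responses:
            for item in CHECKLIST_ITEMS:
                counts[item] += bool(response.get(item, False))
        return {item: counts[item] / len(responses) for item in CHECKLIST_ITEMS}

    # Hypothetical responses from two reviewers
    reviews = [
        {item: True for item in CHECKLIST_ITEMS},
        {**{item: True for item in CHECKLIST_ITEMS}, "clear navigation guidance": False},
    ]
    print(item_pass_rates(reviews))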

Technical evaluation

Technical and user evaluation overlap, and the primacy of technical aspects cannot be overlooked; there is no utility without function. Much of the work in checking that the prototype works has been done in the assembly and testing phases. The broader technical evaluation, from the developer’s point of view (a simple tracking sketch follows this list), includes checking that the product:

  • functions on all required platforms (Mac, PC)
  • uses multimedia professionally (such as having good sound quality)
  • uses a consistent and intuitive navigational interface
  • is completely functional with no components that do not work
  • is appropriate for the experience level and computer literacy of the intended audience.
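
One common way to keep track of these checks is a platform-by-feature matrix that makes any remaining gaps obvious at a glance. The sketch below is illustrative only; the platform names, feature names and results are hypothetical.

    # Whether each required feature currently works on each target platform (hypothetical data).
    results = {
        "PC":  {"video playback": True,  "audio quality": True, "navigation": True},
        "Mac": {"video playback": False, "audio quality": True, "navigation": True},
    }

    def outstanding_issues(matrix):
        """List every platform/feature combination that still fails."""
        return [
            f"{feature} fails on {platform}"
            for platform, features in matrix.items()
            for feature, works in features.items()
            if not works
        ]

    for issue in outstanding_issues(results):
        print(issue)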

Issues of technical completeness

The aim of prototype evaluation is to find all the errors (technical and non-technical) before the product goes to delivery.

Though it may sound easy, producing software on ever-changing operating systems and platforms can be like completing a jigsaw puzzle with a few pieces missing.

Some developers aim for only 99% success. They accept that the product may not fully function on every home computer. In doing so, they make a rational business decision not to invest the sort of time and money (which might otherwise fund a second product) needed to achieve that last 1%.

This is not to say that you should not be aiming for a perfect result, but to highlight the fact that degrees of completeness can be economically acceptable. This is especially valid as some products only have a shelf life of three or four years (or less).

Reflect: The main issues of evaluation

What do you think are the five most important issues to focus on when evaluating multimedia software?

User trials

User trials determine whether the program meets its aims and objectives with the target audience and in the target environment. User feedback is important at all stages, but until the product reaches the prototype stage the results of trials should be treated with scepticism.

Note that user trials at the end of the production phase and just before final mastering and distribution are considered part of beta testing.

User trials need to be as objective as possible. If those designing the materials are involved in trials, there is a tendency to hang on to preconceived notions or viewpoints. Testing an unfinished product is fraught with complications that can undermine the reliability of any evaluation results. Though it’s not always possible (for reasons of expense or confidentiality), user trials should ideally be designed and supervised by people from outside the project who don’t have a sense of ownership of the ideas and concepts in the program.

Informal trials

Demonstrations of the new software (such as in PowerPoint shows) are generally considered a poor form of informal trial, since the audience does not experience the software interactively and their reactions are influenced by their relationship with the demonstrator. Most demonstrations highlight the positives and gloss over the negatives. Nevertheless, a controlled demonstration of a robust program can prove valuable if attention is paid to user statements and expressions; this human element is often seen as the advantage of informal trials.

Formal trials

In formal trials, a large and representative sample of users is invited to use the program. Formal trials may take place in a range of locations; their purpose is less the close observation of individual users than the collection of structured feedback from a large sample.

The selection of the sample group should take into account such things as their age, business positions, organisational roles and computing skills. Trials should include environment testing, being conducted in conditions as near to the planned end-user environment as possible. Users may be invited, singly or in groups, to explore the package and then comment on it. Because of the size of such an evaluation, most comments are likely to be written responses on forms or collected electronically. Online surveys (simple HTML and JavaScript tools with a backend Access database) are an excellent way to quickly collect and collate such information.
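
Collating electronic responses can be largely automated regardless of the tool used to collect them. The following is a minimal sketch that assumes only that the survey tool can export responses to a CSV file; the file name and the 'rating' and 'comment' column names are assumptions made for the example, not tied to any particular survey backend.

    import csv
    from statistics import mean

    def collate_survey(path):
        """Summarise exported trial responses (expects 'rating' and 'comment' columns)."""
        ratings, comments = [], []
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                ratings.append(int(row["rating"]))
                if row.get("comment"):
                    comments.append(row["comment"])
        return {
            "responses": len(ratings),
            "average_rating": mean(ratings) if ratings else None,
            "comments": comments,
        }

    # Hypothetical usage: print(collate_survey("trial_responses.csv"))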

For trials to be most effective, they should be treated as a self-contained project where possible. They must be well planned, carefully documented and have the backing of the management team supporting the product development. Remember, if the client does not fully back the evaluation process (for example, in the workplace), it can be a battle to motivate the participating groups. This issue needs to be discussed and resolved before any trial begins.

Reflect: Eliminating bias

Think about ways that you can ensure that a professional evaluation strategy is not biased.

Post evaluation—marketing

The internal marketing around a product release is surprisingly important to its success. Often new multimedia systems involve change management practices. People are being asked to do things differently (such as to work in a new way, to learn or follow a process using a different tool, etc). People often don’t like change.

Many a product evaluation session has been wrecked by a small set of users complaining about the system and highlighting its flaws, even making errors on purpose to reinforce their view. The negative comments may actually have nothing to do with the system; these users could be using the evaluation as a means to:

  • Increase wages
  • Progress an industrial dispute
  • Avoid extra work
  • Avoid the system tracking them
  • Denigrate a manager they don’t like.

Yes, these things may seem petty, but they do happen! The solution is marketing, and the best marketing is a system that works perfectly when delivered (post beta testing). Ensure that the evaluation is publicised well in advance. The system then has to help solve a problem: the benefits have to be highlighted, and this must be reinforced. Such marketing works best when ‘pushed’ from within, for instance when you can demonstrate that the modifications users asked for in the initial beta test have been made.

Summary

As you can see, evaluation is an integral part of any software development. Like testing, it must be well planned and completed in a thorough and professional style. Shortcuts should be avoided at all costs!

© State of New South Wales, Department of Education and Training 2006