Low-Fidelity Prototype and Usability Testing


Group Ten

Rishi Chopra, cs160-ar

Amit Bakshi, cs160-aq

Cynthia Prentice, cs160-au

Ben Hartshorne, cs160-bn


Mission Statement

The goal of the Simpsons Portal is to demonstrate the value of integrating two forms of media, television and the internet, for a user community that currently uses both. Both media provide users with valuable content, but in different forms. For example, the options given to fans of The Simpsons via television are limited. Users are forced to watch whichever episode of the show happens to be playing at the time, with little opportunity to acquire additional, supplemental information about the show or to interact with others (besides those who happen to be watching the show in the same room). The internet, on the other hand, offers users a vast array of services. Fans of The Simpsons can locate and download episodes of the show, find general information, and interact with others. However, being inherently decentralized and disorganized, the internet forces the user to spend a considerable amount of time finding the information he is looking for. The Simpsons Portal aims to combine the advantages of “push” (television) and “pull” (internet) to provide its user community with the ultimate Simpsons experience, all in one place.

Prototype Description

Our low-fidelity prototype is essentially paper taped around the edges of a laptop screen to simulate frames, plus stacks of sticky notes to simulate the information that pops up in the frames as a result of the user's actions. Along the bottom of the screen is a tool bar that features a combination of buttons that the user can press and draggable objects that the user can drag and drop onto the various frames to obtain desired information. An episode is actually played during the test, through a window cut out of our paper prototype, allowing the user to watch the show while using the site. This allowed for a nice combination of paper prototype and simulation of the end-user experience.

Our paper prototype implements two of the three intended view modes. The user starts at a website, where he logs into the system. This login will eventually be optional for most of the site, but necessary to access areas of the site where the user must be identified (such as a chat). At this stage, for simplicity, we simply require the user to log in at the beginning. Once he logs in, the user is presented with our default 3-frame view mode. In the 3-frame mode, there is a sidebar down the left side of the screen, a main viewing window on the top right, and a tool bar across the bottom. The 4-frame view features the main window on the left, two frames stacked vertically on the right (viewing window on top), and the tool bar across the bottom. Finally, the 1-frame mode offers full-screen video. We have not implemented the 1-frame mode at this time.

There are four main buttons towards the left side of the tool bar, two draggable objects to the right of these buttons, and two more, smaller buttons on the right side of the bar. When clicked, each of the four main buttons brings up content in the sidebar on the left in the 3-frame mode, and in the main screen in the 4-frame mode. The two draggable objects, representing chat and trivia, can be dragged and dropped into any of the frames to activate their feature in the desired location. The final two buttons allow the user to change between the different view modes and to get system help. For a better understanding of our prototype, see the appendices at:
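To make the toolbar behavior more concrete, the TypeScript sketch below shows one way the eventual computer-based version might wire up the buttons and draggable objects. It is only an illustration under assumed element ids and frame names; none of these identifiers come from the prototype itself, and the fourth main button is omitted because it is not named in this report.

    // A minimal sketch, assuming hypothetical element ids for the frames
    // ("sidebar", "main-view", "lower-right") and toolbar items. The named
    // content buttons always send their content to a fixed frame, while the
    // chat and trivia objects can be dropped onto any frame.

    const FIXED_TARGET = "sidebar"; // the left frame in the 3-frame mode

    function showInFrame(frameId: string, content: string): void {
      const frame = document.getElementById(frameId);
      if (frame) frame.innerHTML = content;
    }

    // Buttons route their content to the predetermined frame.
    for (const id of ["find-episode", "episode-guide", "general-info"]) {
      document.getElementById(id)?.addEventListener("click", () => {
        showInFrame(FIXED_TARGET, `<p>${id} content goes here</p>`);
      });
    }

    // The chat and trivia objects are draggable.
    for (const id of ["chat", "trivia"]) {
      const obj = document.getElementById(id);
      if (obj) {
        obj.draggable = true;
        obj.addEventListener("dragstart", (e: DragEvent) => {
          e.dataTransfer?.setData("text/plain", id);
        });
      }
    }

    // Any frame accepts a dropped object and activates that feature in place.
    for (const frameId of ["sidebar", "main-view", "lower-right"]) {
      const frame = document.getElementById(frameId);
      frame?.addEventListener("dragover", (e) => e.preventDefault()); // allow the drop
      frame?.addEventListener("drop", (e: DragEvent) => {
        e.preventDefault();
        const feature = e.dataTransfer?.getData("text/plain");
        if (feature) showInFrame(frameId, `<p>${feature} activated here</p>`);
      });
    }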

Participants

We interviewed three men in their 20s, all of whom watch The Simpsons and have used the internet as a resource to obtain information on the show. Subject #1 claims to have seen almost all episodes of the show and is extremely computer-savvy. He has occasionally used the internet to get additional information. Of the three, Subject #2, although a huge Simpsons fan, was the least computer-savvy. He doesn't usually use the internet to get information about The Simpsons. Finally, Subject #3 fits our user community the best: he not only uses online Simpsons guides but also collects episodes (both on his computer and on video). It is interesting to note that our three subjects often watch TV together, with a laptop connected to the internet in the same room. As they watch the news, sports, or most other types of programming, they frequently use their laptops to get any additional information that satisfies their curiosity.

Environment

We conducted our testing in Subject #1's bedroom, setting up our laptop and paper prototype on his bed. After being greeted, each user sat down on a chair facing the prototype, which rested on the bed. The facilitator remained to the side of the user, looking over his shoulder and aiding him if necessary. The two observers sat on either side of the prototype so they could see both the user and the prototype at all times. The computer sat on the bed, behind the prototype. Cindy played the role of greeter, Amit was the facilitator, Ben was the computer, and Rishi (and Cindy) took notes. We took care to emphasize to the user that it was the prototype, NOT the user, that was being tested. We placed emphasis on making the user feel at ease and comfortable in exploring the user interface.

Tasks/Procedure

We had three tasks; they are attached as Appendix B. The first was of moderate difficulty, the second was easy, and the third was hard. The difference between the first and the second is that, during the first task, the user had not yet explored the interface.

The goal of the first task was to get the user to use the buttons at the bottom to bring up the search screen and to discover the basic operations of the site. We expected the user to click on "Find an Episode to Watch" at the bottom, which brings up the search screen. Entering "monorail" into the keywords box and hitting submit returns the episode, and clicking on the arrow to the right of the episode plays it. When the user comes across the search screen, he may also notice the two buttons at the top, labeled "search" and "browse", which would help him in the second task.

The second task was to use the other main navigation method, browsing, to find the same episode. Our goal was to get the user to navigate through the data on the site using the browse interface. Given that they were searching for the same episode, the users did not really discover anything new except another part of the interface. The task was fairly easy for all our users, since they had already explored a very similar part of the interface.

The third task was much more difficult than either of the first two. We wanted the user to switch views to the 4-frame mode, so as to have both the chat and the episode information section (either general info or the episode guide) on the screen at the same time, while keeping the episode playing in the corner. Nobody did this correctly at first. We did not explicitly describe the functions of the various buttons, as we wanted the user to explore the interface; this also gave us the opportunity to see what was intuitive to the user.

Test Measures/Results

We tested five different aspects of our interface: the search screens, the ways to find episodes and episode information, the "Change View" button, the button design (clickable buttons versus draggable objects), and the +/- context-menu metaphor.

Different Search Screens

We tested two different interfaces for the search screen. As seen in the appendix, one interface offers all of the possible search options when the screen is first pulled up. The other interface embeds the options in a "Search Options" menu, so only the fields you choose to search by are visible. We alternated which of the two screens each user worked with during the first task. At the end of the testing, we brought up the search screen the user had originally worked with before showing him the other interface. We wanted to make sure the users were not biased toward choosing the interface they saw first.

All participants said that they preferred the same search screen, regardless of which one they used during the first task: the interface that immediately presents all the search boxes. Both interfaces eventually give access to the same data, but none of the users wanted to hide the more refined search options under an "expert search" or "search options" button. They preferred to have all the options out in front of them from the beginning.
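As a rough illustration of the difference between the two screens, the TypeScript sketch below renders either the fully exposed layout or the layout with options tucked away. The refined field names are invented for the example (only the keywords box is mentioned in this report), so this is a sketch of the design choice rather than of the actual screens.

    // Version A exposes every search field immediately; version B hides the
    // refined fields behind a "Search Options" section. The specific refined
    // fields below are assumptions made up for the example.

    const refinedFields = ["season", "character", "writer"];

    function renderSearchScreen(exposeAllOptions: boolean): string {
      const keywords = `<input name="keywords" placeholder="keywords">`;
      const refined = refinedFields
        .map((f) => `<input name="${f}" placeholder="${f}">`)
        .join("");
      return exposeAllOptions
        ? keywords + refined // version A: everything visible up front (preferred by our testers)
        : keywords + `<details><summary>Search Options</summary>${refined}</details>`; // version B
    }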

Menu Buttons

Our prototype has a bottom menu bar where the users find all of the functions we provide. Several buttons bring the user to very similar data. We tested which buttons the users chose to find episodes and episode information. We wanted to see if the buttons were labeled accurately enough to describe their functions and whether they divided the functions into intuitive categories. "Find an episode to watch" brings the user to the search page, which links to the browse page. "Episode Guide" brings the user to the browse page, which links to the search page. "General Info" brings the user to a page with information about The Simpsons that is not divided by episodes. We expected that during the first task (searching for an episode), people would click on the "Find an episode to watch" button, and during the second task (browsing for an episode), they would click on the "Episode Guide" button.

During the first task, all three users paused for a while when they were first confronted with the button menu. This was not unexpected, because it was the first time they had seen the screen and had to read the button labels. They said aloud that they were looking over the choices. When they did choose a button for the search, it was an almost arbitrary choice. They said that the difference between "Episode Guide" and "Find an episode to watch" was not clear. They believed they could find what they wanted by pushing either button, which seems to be an accurate assumption. The subtle difference between the two buttons did not come across very well; one of the users did understand that one was for searching and the other for browsing. In general, the users were able to find the episode no matter which route they took, but the button labels were ambiguous to them.

Changing Views

We wanted to test whether the "Change View" button accurately described its purpose, which is to change the screen layout so that more features can be viewed at the same time; it switches from the 3-frame mode to the 4-frame mode and back. During the third task we asked the users to do three things at once, expecting that they would change the view to accommodate the extra information.

This task was very difficult for all the users. They did not know that there were different modes with different numbers of frames, and they all said that they did not know what to expect from the "Change View" button. One user suggested that the button should also allow him to switch into full-screen mode. Once the idea of different frames became clear, one participant struggled to figure out how to "activate" a particular frame. Frames are not active or inactive in our model, so his search was fruitless. For him, the idea of an active frame went hand in hand with multiple frames, perhaps from his experience with multiple windows or with frames in a web browser. We will have to find a different way to deal with the different modes, or perhaps a more appropriate label for the button.
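As a rough illustration of what the button could do in the eventual computer version, the TypeScript sketch below toggles between the two layouts and relabels the button with the mode it switches to, which might address the uncertainty our testers reported. The element ids and class names are assumptions invented for the example, not part of the prototype.

    // A minimal sketch, assuming the hi-fi site uses one CSS class per layout
    // and a (hypothetical) element id of "change-view" for the button.

    type ViewMode = "three-frame" | "four-frame";
    let currentMode: ViewMode = "three-frame"; // the default mode after login

    function changeView(): void {
      currentMode = currentMode === "three-frame" ? "four-frame" : "three-frame";

      // Swap a layout class on the page; the stylesheet for each class lays
      // out the frames (sidebar on the left vs. two stacked right frames).
      const page = document.getElementById("portal");
      if (page) {
        page.classList.remove("three-frame", "four-frame");
        page.classList.add(currentMode);
      }

      // Relabeling the button with the mode it switches *to* is one possible
      // response to the confusion our testers reported.
      const btn = document.getElementById("change-view");
      if (btn) {
        btn.textContent = currentMode === "three-frame"
          ? "Switch to 4-frame view"
          : "Switch to 3-frame view";
      }
    }

    document.getElementById("change-view")?.addEventListener("click", changeView);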

Buttons and Draggable Objects

Our bottom menu bar has both buttons and draggable objects. Buttons bring their interfaces up only in the left frame of the screen, while draggable objects can be placed in any frame. We tested whether these two options made sense to the users. As part of the third task, the user had to use a button to send information to its predetermined frame and a draggable object to pull information into the remaining frame.

The buttons at the bottom of the screen were easy to understand and are a common method for navigation and content control. Our participants had no trouble using them and used them as we expected. The draggable objects, however, were a miserable failure. Nobody even had the idea to pick them up and drag them into a frame, despite several hints to that effect. When a user went to click on one of the objects, the person playing the computer told him that as the mouse moved over the object, the cursor turned into a hand, and that when he clicked, the hand grabbed. Also, each blank frame reads "Drag and drop chat or trivia here." We had hoped that these two clues would encourage people to pick up the objects, but it did not work. However, we believe that a large part of this problem is due to the paper prototype, and (hopefully) the interaction will become clear when the interface is moved to the computer.

+/- Metaphor

In order to allow the user to search for more information while watching an episode, we needed to compact a lot of information into a small space. In the left frame, we used scroll bars and +/- signs next to our expandable headers. We wanted to test whether this presentation of information allowed the user to easily find what he was looking for and whether the use of +/- signs was self-evident.

The method of hierarchical data organization we presented in the side frame worked very well. Two of our three subjects did not think twice before using the pluses and minuses to expand and collapse data in the way we intended. The third was a little confused and clicked on the name instead of the plus. As soon as he realized what the plus was for, though, he recognized the interface as one he had seen before. He said that he had not seen it at first only because of the paper nature of the prototype. Perhaps making the name clickable as well (allowing it to expand the context menu) would make the user interface easier to use.
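A rough TypeScript sketch of that suggestion appears below: clicking either the +/- sign or the header name toggles the section. The element structure and class name are assumptions made up for the illustration, not part of the prototype.

    // A minimal sketch, assuming each expandable header in the side frame has
    // the (hypothetical) class "episode-header" and is immediately followed by
    // the element holding its detail content.

    function makeExpandable(header: HTMLElement, body: HTMLElement): void {
      const sign = document.createElement("span");
      sign.textContent = "+ ";
      header.prepend(sign);
      body.style.display = "none"; // collapsed by default to save space

      // The whole header is clickable, not just the +/- sign.
      header.addEventListener("click", () => {
        const collapsed = body.style.display === "none";
        body.style.display = collapsed ? "block" : "none";
        sign.textContent = collapsed ? "- " : "+ ";
      });
    }

    document.querySelectorAll<HTMLElement>(".episode-header").forEach((header) => {
      const body = header.nextElementSibling as HTMLElement | null;
      if (body) makeExpandable(header, body);
    });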

Discussion

Our user testing sessions were very effective. We identified some aspects of the interface that worked well, some that worked poorly, and some tests that were inconclusive.

We have a clear winner in the choice between the search screens: every tester preferred the more exposed search. The different buttons at the bottom, although similar, serve different methods of locating data, and different users preferred different methods. We learned that our button labels are not entirely clear and could be refined. Our tests indicated that we should offer several alternative routes to the same data, so that people may use the method they prefer (arriving at episode information through both search and browse). Our initial reasoning was to have one button for finding an episode to watch and another for looking up information about a given episode. Our testing showed that users are constantly doing both at the same time, and it would be better either to split the buttons into a search and a browse or to make both functions available under one button.

The idea of different modes for varying the number of frames needs more work before it becomes intuitive. The method of switching between modes, and how to interact with them, is unclear; this may be because it differs from existing applications, or it may be a limitation of the paper prototype. We also must differentiate between novice and expert users and expect our interface to satisfy both groups. We could add concise instructions to a few parts of the site to describe novel conventions to new users, if a design proves desirable once a user becomes familiar with it. Another solution might be to rely more on pre-existing conventions in our site. The draggable objects test was not conclusive; the paper aspect of the prototype introduced confusion that would not be present in a hi-fi prototype. Finally, the +/- metaphor worked quite well.