Brandon Burr

Nov 17, 2005

CS376 Milestone 2

Hypothesis

Using one type of data stream to index another should improve the efficiency of data review. I will design an interaction that allows a user to leverage sensor data from a particular event in order to analyze large amounts of video data of the same event.

My claim is that it is possible to design this interaction such that it enables researchers to complete common video analysis tasks efficiently.

Evaluation Plan

To evaluate my hypothesis, I will run a user study that probes the strengths and weaknesses of my design. I am framing the project in the context of analyzing video data of computer programmers in order to narrow the focus, with the hope of generalizing to a broader set of applications in the future.

I will begin by compiling a list of common tasks that a behavioral researcher faces in analyzing video data of programmers. I will obtain these tasks by interviewing researchers in this field. These tasks will inform the design of the interaction between the user, the sensor data, and the video streams. A prototype of this interaction will then be developed.

At this point I will run a quantitative user study of the design's effectiveness in accomplishing the set of tasks. Users will be measured on task completion time, and the results will be analyzed against a reasonable measure of efficiency, as defined by the behavioral researchers. From this data I will draw conclusions about the strengths and weaknesses of my design, and I will report on its possible benefit to video analysis in behavioral research.

Current Prototype

In the current prototype (see Figure 1), users designate one video stream as the focus stream. This focus video is displayed as large as possible, with the other videos shown as thumbnails off to the side. Clicking a thumbnail makes that video the focus. The size of the thumbnails can be adjusted if close attention to two or more videos is needed simultaneously.
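
As an illustration, the focus-switching behavior amounts to little more than tracking an index into the list of streams. The Java sketch below captures that model; the class and method names are hypothetical, not taken from the actual VACA source.

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative model of the focus/thumbnail layout; names are
    // hypothetical, not from the actual VACA implementation.
    public class StreamLayout {
        private final List<String> streams = new ArrayList<String>();
        private int focusIndex = 0;

        public void addStream(String name) { streams.add(name); }

        // Clicking a thumbnail promotes that stream to the focus position.
        public void setFocus(int index) {
            if (index >= 0 && index < streams.size()) focusIndex = index;
        }

        public String focusStream() { return streams.get(focusIndex); }

        // Every non-focus stream is rendered as a thumbnail.
        public List<String> thumbnails() {
            List<String> rest = new ArrayList<String>(streams);
            rest.remove(focusIndex);
            return rest;
        }

        public static void main(String[] args) {
            StreamLayout layout = new StreamLayout();
            layout.addStream("screen capture");
            layout.addStream("face camera");
            layout.addStream("room camera");
            layout.setFocus(1);  // user clicks the face-camera thumbnail
            System.out.println("Focus: " + layout.focusStream());
            System.out.println("Thumbnails: " + layout.thumbnails());
        }
    }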

A right-hand pane shows a list of the codes that have been created for the video streams. From this pane the user can instantiate new events for particular codes and annotate those events freely. The current plan is to integrate external sensor data as its own set of codes against the video data.
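
One plausible data model for this pane, sketched purely for illustration (the Code and Event names are mine, not necessarily the implementation's): a Code groups Events, and each Event marks an annotated time span in the session.

    import java.awt.Color;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical data model: a Code groups Events, and each Event marks
    // an annotated time span in the session. Names are illustrative.
    class Event {
        final double startSec, endSec;
        String annotation;  // free-form note attached by the researcher

        Event(double startSec, double endSec, String annotation) {
            this.startSec = startSec;
            this.endSec = endSec;
            this.annotation = annotation;
        }
    }

    class Code {
        final String name;
        final Color color;  // the timeline paints this code's events in this color
        final List<Event> events = new ArrayList<Event>();

        Code(String name, Color color) {
            this.name = name;
            this.color = color;
        }

        // Instantiating a new event for this code, as done from the pane.
        Event addEvent(double startSec, double endSec, String annotation) {
            Event e = new Event(startSec, endSec, annotation);
            events.add(e);
            return e;
        }
    }

Under a model like this, an imported sensor stream would simply arrive as another Code whose Events were generated mechanically rather than by hand, so the existing annotation and timeline machinery would apply to it unchanged.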

The bottom pane shows the timeline. The timeline reflects the current temporal location of video playback. All events are displayed on the timeline in a color corresponding to their code category. Events can be shown or hidden on the timeline, facilitating correlation between various codes and giving the user a big-picture view of her coding.
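
Building on the Code and Event classes sketched above, the timeline's core logic is just a proportional mapping from session time to horizontal pixels, with hidden codes skipped during painting. Again, this is a sketch rather than the actual implementation.

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    // Illustrative timeline logic: session time maps proportionally onto
    // horizontal pixels, and hidden codes are skipped when painting.
    class Timeline {
        final double sessionLengthSec;
        final int widthPx;
        final Set<Code> hidden = new HashSet<Code>();

        Timeline(double sessionLengthSec, int widthPx) {
            this.sessionLengthSec = sessionLengthSec;
            this.widthPx = widthPx;
        }

        // Proportional mapping from a moment in the session to an x position.
        int xFor(double timeSec) {
            return (int) Math.round(timeSec / sessionLengthSec * widthPx);
        }

        // Report the pixel span each visible event would occupy.
        void paint(List<Code> codes) {
            for (Code c : codes) {
                if (hidden.contains(c)) continue;
                for (Event e : c.events) {
                    System.out.printf("%s: paint [%d, %d] px%n",
                            c.name, xFor(e.startSec), xFor(e.endSec));
                }
            }
        }
    }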

Further Development

By the end of the quarter the system will be able to import other types of data. As an example, I will instrument the Eclipse IDE to log programming behavior (e.g., compile events, editing, debugging). This log will export to the VACA format.
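
The logging half of this could be quite simple: one timestamped line per event, appended as the plugin's listeners fire. The sketch below is a stand-in, since the actual VACA import format is still being defined; the tab-separated layout, file name, and event types are all assumptions.

    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.PrintWriter;

    // Sketch of the logging side of the Eclipse instrumentation. The
    // plugin's listeners (builds, edits, debug launches, etc.) would call
    // log(); this tab-separated format is a placeholder, not the final
    // VACA import format.
    public class BehaviorLogger {
        private final PrintWriter out;

        public BehaviorLogger(String path) throws IOException {
            out = new PrintWriter(new FileWriter(path, true), true);  // append, auto-flush
        }

        // One line per event: wall-clock timestamp (ms), event type, detail.
        public void log(String eventType, String detail) {
            out.printf("%d\t%s\t%s%n", System.currentTimeMillis(), eventType, detail);
        }

        public static void main(String[] args) throws IOException {
            BehaviorLogger logger = new BehaviorLogger("session.log");
            logger.log("compile", "ProjectX: 0 errors");
            logger.log("edit", "Main.java");
            logger.log("debug", "launched ProjectX");
        }
    }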

In the VACA system, data from external sources will be importable and synchronizable with the video streams. This data will provide an index into the video, and users will be able to seek through the video streams using it. I will design a presentation of this data that facilitates correlation of events or behaviors of interest to the user, and I will evaluate that design against common analysis tasks in a user study (see Evaluation Plan, above).
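
A minimal sketch of the seek computation, assuming a single known offset between the external log's wall clock and the start of video recording (the names, and the single-offset model, are assumptions; clock drift between devices is ignored here):

    // Minimal synchronization sketch under a single-offset assumption.
    public class SyncIndex {
        private final long videoStartMillis;  // wall-clock time of video frame 0

        public SyncIndex(long videoStartMillis) {
            this.videoStartMillis = videoStartMillis;
        }

        // Convert an external event timestamp into seconds of video playback.
        public double toVideoSeconds(long eventMillis) {
            return (eventMillis - videoStartMillis) / 1000.0;
        }

        public static void main(String[] args) {
            SyncIndex index = new SyncIndex(1100000000000L);
            long compileEvent = 1100000042500L;  // logged 42.5 s after video start
            System.out.printf("Seek to %.1f s%n", index.toVideoSeconds(compileEvent));
        }
    }

Each imported event then becomes a seek target: selecting, say, a compile event in the index would jump playback in all video streams to that moment.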

Related Work

The timeline interface in VACA drew inspiration from the MSR Video Skimmer [3] and the Silver video editor [1]; however, those systems were not designed for analysis, annotation, or multiple video streams. The Anvil analysis tool [2] also has a timeline and supports annotation, but not multiple streams. Observer [4] and DIVER [5] are video analysis tools that support annotation, but not a timeline visualization of that annotation.

References

  1. Casares, J., et al. Simplifying Video Editing Using Metadata, in Proc. DIS 2002, pp. 159-166.
  2. Kipp, M. Anvil video annotation system. Web page, downloaded June 27, 2005.
  3. Li, F.C., et al. Browsing Digital Video, in Proc. ACM CHI 2000, pp. 169-176.
  4. Noldus, L.P., et al. The Observer Video-Pro: new software for the collection, management, and presentation of time-structured data from videotapes and digital media files, in Behav Res Methods Instrum Comput, 32, 2000, pp. 197-206.
  5. Pea, R., et al. The DIVER Project: Interactive Digital Video Repurposing. IEEE Multimedia, 11(1), 2004, pp. 54-61.