Investigation of Warping Mechanisms in GUIs

Kevin Juang

Many advances and new experimental ideas have been added to the traditional Windows, Icons, Menus, Pointers (WIMP) interface since its adoption. One area that has received attention is target selection via a cursor. Researchers are finding ways to effectively reduce the distance from the current cursor position to the target location, which Fitts’ Law suggests is highly beneficial. Approaches range from “gravity wells” that lightly pull the cursor into the correct position, and may not even be noticed by the user, to more radical warping mechanisms that make the cursor jump to the correct position and skip the in-between areas altogether. This paper investigates some of the research conducted into warping mechanisms.

The traditional way to implement this is to use “gravity wells” or “sticky targets” (Cockburn) in lieu of increasing the actual size of the target. By attracting the cursor to certain targets or paths, the target’s effective size becomes larger. Ahlström uses gravity fields to make cascading menu selection easier: the cursor is pulled horizontally toward the cascaded menu, and since mouse movement can remain vertical, the movement is governed by Fitts’ Law rather than the Accot-Zhai Steering Law. The trick, then, appears to be finding situations where it can be predicted where the user will want to move the cursor, and then partially automating that movement.
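A gravity well of this kind can be sketched in a few lines. The following is a minimal illustration, not any cited author’s implementation; the function name, radius, and strength values are hypothetical, and it assumes the cursor position is adjusted once per input frame:

```python
def apply_gravity_well(cursor, target, radius=60.0, strength=0.3):
    """Pull the cursor toward a target when it is inside the well's radius.

    cursor, target: (x, y) positions in pixels.
    Returns the adjusted cursor position.
    """
    dx, dy = target[0] - cursor[0], target[1] - cursor[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist == 0 or dist > radius:
        return cursor  # outside the well (or already on target): no effect
    # The pull grows as the cursor nears the target (linear falloff here).
    pull = strength * (1.0 - dist / radius)
    return (cursor[0] + dx * pull, cursor[1] + dy * pull)
```

Because the displacement is small and proportional to proximity, the effect enlarges the target’s effective size without the user necessarily noticing.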

A nontraditional domain in which warping interactions have been leveraged is eye tracking. There are obviously key differences between traditional mouse-based input and eye tracker gaze input. A seminal paper that examined this issue was Shumin Zhai et al.’s “Manual and Gaze Input Cascaded (MAGIC) Pointing” (Zhai).

In this 1999 paper, Zhai unveils his MAGIC system, which does not use gaze input to control the fine details of pointing. In fact, Zhai argues against eye tracking as input in the general case. He states that eye gaze is a visual perceptual channel and consequently not well suited for motor control tasks. Work by others well versed in perception and the human visual system, such as Edward Tufte, seems to back this claim (Tufte). In addition, eye gaze input goes against users’ mental model of hand-eye coordination, in which the eye searches for and receives information while the hand manipulates external objects. The chief advantage of eye input is that it is extremely fast, quicker than mouse, keyboard, haptic, or even speech input.

Zhai’s goal with the MAGIC system is refinement. Gaze is used to warp the cursor to its initial location, leveraging the eye’s natural strength in search tasks. Once the initial warping has occurred, traditional mouse input is used to pinpoint the exact desired location (Zhai). The studies he conducted suggested that the MAGIC pointing approach worked as planned: participants performed faster and perceived themselves as faster.
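The two-stage structure of MAGIC can be sketched as follows. This is only an illustrative outline, not Zhai’s code: the function name is hypothetical, and a real system would integrate with an eye tracker and pointer API rather than receive coordinates directly.

```python
def magic_point(gaze_fixation, mouse_deltas):
    """Warp the cursor to the gaze fixation, then refine with the mouse.

    gaze_fixation: (x, y) position estimated by the eye tracker.
    mouse_deltas: sequence of (dx, dy) manual refinement movements.
    Returns the final cursor position.
    """
    # Stage 1: coarse positioning -- warp directly to the gaze point,
    # exploiting the eye's speed at visual search.
    x, y = gaze_fixation
    # Stage 2: fine positioning -- ordinary mouse control takes over
    # for the last few pixels, which gaze cannot reliably resolve.
    for dx, dy in mouse_deltas:
        x, y = x + dx, y + dy
    return (x, y)
```

The division of labor mirrors the mental model Zhai describes: the eye finds, the hand manipulates.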

Zhai’s work with MAGIC was expanded on by Michael Ashmore et al. Ashmore’s focus in “Efficient Eye Pointing with a Fisheye Lens” was to use a fisheye lens (a locally magnified but distorted view) to enable finer selections in a GUI (Ashmore). Zhai’s MAGIC system was used in two of the test conditions to initially place the fisheye lens. The two conditions with MAGIC (the lens appearing only after fixation onset, and the lens fixed in place once visible after fixation onset) were significantly faster than those without MAGIC, and no less accurate.

In traditional WIMP interaction systems, another angle that receives some interest is the modeling of haptic feedback. True haptic feedback occurs when a physical sensation is given to the user as feedback, as with a force feedback joystick or a haptic pen. This requires special hardware and can be disruptive: a motor is often audible, and the input device may physically shake, making it harder to control. Thus, research is being done into warping or gravity effects that simulate haptic feedback purely through software (Lécuyer). This may not initially seem like it would work, but results show that it has surprising validity.

To simulate haptics through software, van Mensvoort uses not only gravity wells (which he calls “holes”) but also “hills,” their exact opposite, which make selection harder. The uses for hills seem much less evident, since usually the user wants to select targets, but perhaps in gaming this could add challenge, or specific actions (like deleting a file) could be made harder to do accidentally. This is similar to the idea of constraints in psychology.

Of course, for haptic simulation, these hills are part of the effect. Another idea is sunken ridges, which are useful for a haptic pen. Moving a pen along a surface with ridges keeps the pen in those ridges, making it easier to stay along specified paths (van Mensvoort). If those paths are directional, the system can even slowly move the cursor for the user along them.

One classic reason why warping mechanisms in GUIs seem promising is Fitts’ Law (Ahlström). Fitts’ Law is a model of human movement stating that the time required to move rapidly from a starting position to a final target area is a function of the distance to the target and the size of the target. Its applications are widespread, ranging from graphical user interfaces to eye gaze input to general ergonomics and body motion.
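In its common Shannon formulation, the law predicts movement time as MT = a + b · log2(D/W + 1), where D is the distance to the target, W is the target width, and a and b are empirically fitted constants. A small illustration follows; the constants below are arbitrary placeholders for demonstration, not fitted values from any cited study:

```python
import math

def fitts_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (seconds) under the Shannon form of Fitts' Law.

    a, b: empirically fitted constants (placeholder values here).
    """
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# Halving the distance to a 40-pixel target lowers the predicted time:
far = fitts_time(800, 40)   # ID = log2(21), about 4.39 bits
near = fitts_time(400, 40)  # ID = log2(11), about 3.46 bits
```

The example makes the appeal of warping concrete: shrinking D shrinks the index of difficulty, and warping drives D toward zero.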

There are some qualifications to Fitts’ Law, however, that have an impact on interface design. First, it describes simple motor response (such as a hand movement) and fails to account for the gain or software acceleration applied to the mouse cursor. Second, it describes untrained movements, while movements executed over months and years may become ingrained in the user. (Some argue, however, that Fitts’ Law operates at so low a level that this does not matter.) Finally, it applies only to movement in a single dimension, not to movement in two or more dimensions. However, Fitts’ Law has been successfully extended to two dimensions in the Accot-Zhai Steering Law.

The benefit of warping is that one component of Fitts’ Law, the distance to the target, is significantly reduced or possibly even eliminated (Ahlström). The standard “gravity well” approach, where the cursor is attracted to locations of interest, reduces the distance to the target while remaining a possibly very subtle difference to the user. Even if the effect is noticeable, there is rarely any confusion about what is going on.

If a pure warping mechanism could move the cursor more or less exactly where it needs to be, that would be the ideal to strive for. The size of the target would no longer matter, because the cursor would already be in the correct place. Selection would be nearly instantaneous, bounded only by the decision making of the user.

Of course, the obvious disadvantage is that the user might not know exactly what is going on, or may not like the results. It is currently unknown whether training would adequately prepare users for warping in GUIs, or whether such training should even be necessary or desired. The primary problem is locating the cursor after it has warped. There are ways to handle this, such as quickly animating the cursor’s movement from the original location to the new one, but this comes at a slight cost in speed.

Another difficult concern is that the system must figure out when to apply warping and where to move the cursor. Some current implementations tackle this issue. For example, Microsoft’s IntelliPoint software has an option called SnapTo that automatically warps the cursor to the default choice every time a popup message appears. The rationale is that the user will likely want to address a popup message when it occurs. Even if the user wants to select an option other than the default (say, “Cancel” instead of “OK”), the distance to the target is still likely reduced compared to where the cursor was originally. Users may nonetheless be surprised, because it is not always obvious when a popup box will appear. Anecdotally, I do not know of anyone who has enabled this feature, although it is off by default and rather well hidden in the preferences. I could not find any research into the SnapTo feature.

Another option, which is what I am pursuing, is to avoid this feedback problem by letting the user entirely control the activation of the warping. A slight modification of the current gravity well paradigm (turned up to extreme degrees) might work in certain situations. If there is only one sensible target in a given direction, then perhaps even a slight flick in that direction would be enough to trigger a direct warp to that target.
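One way such a flick-to-warp trigger might be detected is sketched below. This is a hypothetical design sketch, not an existing implementation: the speed threshold, cone angle, and function name are all assumptions, and the key property is that the warp fires only when the flick direction picks out exactly one target.

```python
import math

def flick_target(velocity, targets, min_speed=1500.0, cone_deg=30.0):
    """Return the single target lying within a cone of the flick direction.

    velocity: (vx, vy) cursor velocity in pixels/second.
    targets: list of (x, y) target positions relative to the cursor.
    Returns the target to warp to, or None if the flick is too slow
    or the choice of target is ambiguous.
    """
    speed = math.hypot(velocity[0], velocity[1])
    if speed < min_speed:
        return None  # too slow to count as a deliberate flick
    flick_angle = math.atan2(velocity[1], velocity[0])
    in_cone = []
    for t in targets:
        angle = math.atan2(t[1], t[0])
        # Smallest absolute angular difference, wrapped to [-pi, pi].
        diff = abs((angle - flick_angle + math.pi) % (2 * math.pi) - math.pi)
        if math.degrees(diff) <= cone_deg:
            in_cone.append(t)
    # Warp only when the choice is unambiguous.
    return in_cone[0] if len(in_cone) == 1 else None
```

Refusing to warp on slow or ambiguous input keeps the mechanism fully under the user’s control, which is the point of this approach.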

Even more control could be granted to users by allowing them to control the entire process. This would be done by letting the user set custom warp points that can be warped to at the click of a mouse button or a keyboard command. Many current GUI tasks involve toggling between two locations.

For example, viewing the contents of one full-size window and acting on them in another involves two separate screen areas in Microsoft Windows: the cursor must be at the bottom of the screen near the taskbar to switch windows, and it must also be in the window area in the middle of the screen to, say, click buttons in a form. The current WIMP GUI requires constant maneuvering back and forth between the two areas. This could be eliminated by allowing, say, a middle click to toggle between the two areas, which could be predetermined or selected by the user.
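The toggle itself is simple state-keeping. The sketch below assumes hypothetical names and coordinates; a real implementation would hook the platform’s pointer API to detect the middle click and move the cursor:

```python
class WarpToggle:
    """Toggle the cursor between two user-chosen screen locations."""

    def __init__(self, point_a, point_b):
        # e.g. a taskbar location and a form button area
        self.points = [point_a, point_b]
        self.current = 0  # index of the point the cursor is at now

    def on_middle_click(self):
        """Warp to the other point and return the new cursor position."""
        self.current = 1 - self.current
        return self.points[self.current]
```

Each middle click replaces an entire hand movement across the screen with a single warp.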

Another possibility for granting the user direct control over warping is the idea of an excursion. This is similar to the “back” command in a web browser, the pushd and popd commands in Unix, and an excursion in Emacs Lisp. If users know they will want to return to the current point later, it takes just one click to set the current position as a warp point, and another click later to get back to it. This way, control of the warping is as precise or as relaxed as the user desires.
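Following the pushd/popd analogy, an excursion can be modeled as a stack of saved cursor positions. The class and method names below are hypothetical, intended only to make the interaction concrete:

```python
class CursorExcursion:
    """Save and restore cursor positions, like pushd/popd for the pointer."""

    def __init__(self):
        self._stack = []

    def push(self, position):
        """Mark a position (x, y) as a warp point to return to later."""
        self._stack.append(position)

    def pop(self):
        """Warp back to the most recently saved point, or None if empty."""
        return self._stack.pop() if self._stack else None
```

Because saved points nest, the user can make an excursion within an excursion and still unwind back to the starting position, just as with pushd and popd.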

It is even possible that this location is a home base of sorts (perhaps near the middle of the screen) that the user may wish to return to many times. This may be useful where it is desirable to start with the cursor near the middle of the screen so that no screen location is ever too far to reach. This could have applications in gaming, where split-second differences like this are considered highly important.

All in all, I find the possibility of warping mechanisms in GUIs highly interesting. It is considered a low-level interaction problem, but its possible complications and uses can be rather complex and high level. As this field does not seem to have received a great deal of research, it will be interesting to continue this investigation to determine the feasibility of useful applications for warping mechanisms.

Works Cited

Ahlström, D. Modeling and Improving Selection in Cascading Pull-Down Menus Using Fitts’ Law, the Steering Law and Force Fields. In Proc. CHI 2005, ACM Press (2005), 61-70.

Ashmore, M. and Duchowski, A. Efficient Eye Pointing with a Fisheye Lens. In Proc. GI ’05, ACM Press (2005).

Cockburn, A. and Firth, A. Improving the Acquisition of Small Targets. In Proc. HCI 2003, 181-196.

Lécuyer, A., Burkhardt, J-M, and Etienne, L. Feeling Bumps and Holes without a Haptic Interface: the Perception of Pseudo-Haptic Textures. In Proc. CHI 2004, ACM Press (2004), 239-246.

Tufte, Edward. Envisioning Information. Cheshire, Connecticut: Graphics Press, 1990.

van Mensvoort, K. What You See Is What You Feel: Exploiting the Dominance of the Visual over the Haptic Domain to Simulate Force Feedback with Cursor Displacements. In Proc. DIS 2002, ACM Press (2002), 345-348.

Zhai, S., Morimoto, C., and Ihde, S. Manual and Gaze Input Cascaded (MAGIC) Pointing. In Proc. CHI 99, ACM Press (1999), 246-253.
