Avatars and Emotions 18

Running head: AVATARS AND EMOTIONS

Avatars and Emotions
Elan Portnoy
School of Informatics at Indiana University at Bloomington

Avatars and Emotions

As humans become increasingly dependent on digital communication systems, the effects of computer mediation on social interaction are of growing interest. We must look beyond what social science has taught us about how humans respond to one another and consider what electronic media add to the equation of interpersonal socialization. Communication via e-mail, instant messaging, electronic forums, and other online systems must be examined as multiple human-computer interactions rather than simply as human-human interactions. For example, when two individuals communicate using an instant messenger system, their exchange is the product of two human-computer interactions (one on each side of the conversation), and psychological inquiry into such exchanges must take this into account.

An increasingly common practice in online communication is the use of computer-generated graphic images (avatars) to represent one's online identity. Individuals may fabricate their avatars to reflect what they perceive as accurate representations of themselves, or as fantastic distortions. In either case, viewers of these avatars react cognitively and emotionally to the images, especially in the absence of photographs of the originator's actual face. Because the human brain's perceptual and cognitive systems are evolutionarily old, they interpret images as real and react accordingly (Reeves & Nass, 1998); thus, the human brain responds emotionally to avatars as if they were real. Furthermore, it is anticipated that specific psychological principles that traditionally apply to human-human interactions will be observable in human-avatar interactions. More specifically, the Freudian principle of projection, a defense mechanism whereby an individual attributes to another person characteristics that exist unconsciously within himself (Freud, 1976), is expected to be present in human-avatar interactions.

The focus of the present study is to assess ratings of perceived affect in computer-generated (avatar) faces. The avatar faces represent both genders and range in affect from "sad" to "happy," including stimuli neutral in valence. These ratings were compared both to a written self-report instrument (adapted from Zung, 1965) designed to measure participants' mood states at the time of the experimental trials, and to a single text-based mood rating presented after the avatar ratings to assess participants' affect. It is hypothesized that mood, as measured by the written self-report instrument and by the single mood rating, will correlate with ratings of perceived affect in the avatar facial stimuli.

Experimental trials for the present study were conducted through the Internet. Although a significant amount of effort is required to construct an experiment for networked implementation, this method offers numerous benefits. Participant recruitment and participation are easier, since travel to a testing site or laboratory is unnecessary (Kraut et al., 2004). This is especially helpful for college student populations during busier times of the semester and when the weather is inclement. Web-based data collection is now a viable alternative to traditional research methods (Riva et al., 2003), as it facilitates stimulus presentation and data collection: it requires less time, yields greater data accuracy, costs less (Kraut et al., 2004), and offers experimental options not previously available.

Method

Participants

Eighteen individuals (8 women and 10 men; mean age = 33.3 years, SD = 19 years) volunteered to participate and received no compensation.

Apparatus

Experimental trials were administered through the Internet on a personal computer of the participant's choice. The only restriction was that the Microsoft Internet Explorer browser had to be used as the software delivery system, because the experiment required that environment to function without error. Stimuli included a 20-question written instrument (see Figure 1) typically used to measure depression (Zung, 1965), which was modified to suit the sample more appropriately. The original Zung measure records responses on a four-point Likert scale offering the choices "a little of the time," "some of the time," "good part of the time," and "most of the time." Because participants in the current study were not necessarily suffering from depression, a fifth choice, "rarely/never," was added to give them this option. In the analysis phase, data were adjusted to account for the difference by assigning the same score to a response of either "a little of the time" or "rarely/never." The Zung (1965) measure was chosen over the Beck Depression Inventory (Beck et al., 1961) and the HAM-D (Hamilton, 1960) because the latter two focus more narrowly on facets of depressive states, whereas the Zung scale is better suited as a general measure of mood. In addition, a 19-point, single-item mood rating scale (see Figure 2) was presented after the second group of image stimuli.
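The scoring adjustment described above can be sketched as a simple response-to-score mapping. This is an illustration only: the paper does not publish its scoring code, and the numeric values assume Zung's conventional 1-4 item scoring.

```python
# Map the five response options of the modified instrument onto the
# original four-point Zung scoring (assumed here to be 1-4).
# The added option "rarely/never" shares a score with "a little of the time",
# as described in the Method section.
RESPONSE_SCORES = {
    "rarely/never": 1,
    "a little of the time": 1,
    "some of the time": 2,
    "good part of the time": 3,
    "most of the time": 4,
}

def score_item(response: str) -> int:
    """Return the adjusted score for a single item response."""
    return RESPONSE_SCORES[response.strip().lower()]

def score_instrument(responses) -> int:
    """Total score across all 20 items of the modified instrument."""
    return sum(score_item(r) for r in responses)
```

With this mapping, a participant who answers "rarely/never" is scored identically to one who answers "a little of the time," which is exactly the adjustment the analysis phase applied.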

The second group of stimuli comprised avatars constructed within the Yahoo! (2006) Messenger avatar editing environment (see Figure 3). Yahoo avatars are optional features for Yahoo Instant Messenger users and are commonly displayed within the message windows of an online "chat," or two-way communication (both text-based and audio-visual). Six of the stimuli were female and six were male. Within each gender group, there were three avatars with blond hair and blue eyes. Of these three, one appeared sad, one appeared happy, and the remaining image appeared emotionally neutral. The same three valences (sad, happy, and neutral) appeared with dark hair and dark eyes. This pattern was repeated in both gender groups, yielding a total of 12 avatar stimuli. Within each gender group, features other than hair color, eye color, and apparent mood were held constant. Although no scientific data were available to validate the emotional valence of the avatars, they differed enough in appearance to suit the purpose of the current study.
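The factorial structure of the stimulus set (2 genders x 2 hair/eye colorings x 3 valences = 12 avatars) can be enumerated directly. The labels below are placeholders standing in for the avatar images themselves:

```python
from itertools import product

# Enumerate the 12 avatar stimuli from the three crossed factors
# described in the Method section. Labels are illustrative only.
GENDERS = ["female", "male"]
COLORINGS = ["blond hair, blue eyes", "dark hair, dark eyes"]
VALENCES = ["sad", "neutral", "happy"]

STIMULI = [
    {"gender": g, "coloring": c, "valence": v}
    for g, c, v in product(GENDERS, COLORINGS, VALENCES)
]
```

Crossing the factors this way guarantees every valence appears once per coloring within each gender group, matching the balanced design described above.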

The online testing system was built using PHP and HTML and was hosted on a commercial web server. Data were stored on the server in a secure location where they could be accessed for later analysis. The data files were formatted for easy import into Microsoft Access, Microsoft Excel, or SPSS. Additionally, an e-mail notification was sent upon completion of each experimental trial.

Design and Procedure

Participants were invited via publicly posted flyers, and a link (URL) printed on the flyer led to the experiment's location. After accessing the link, participants were presented with an introduction informing them that the procedure should take approximately 10 minutes and that their responses and identity would remain anonymous. The introduction also requested that participants use the Microsoft Internet Explorer browser to avoid potential system malfunction. Participants were then given two short pages containing an introduction and an instruction sheet requesting that they not use the "back" button to change answers and that they not skip any questions. They were also instructed to provide their ratings on the five-point scale and to click "rate this item" after selecting each answer in order to move on to the next item. Additionally, they were informed that they could discontinue participation at any time, and the promise of anonymity was reiterated. A further request to use only the Internet Explorer browser appeared on this instruction page, along with contact information for the primary investigator. At the bottom of the page, following an informed consent agreement, was a link to begin the experiment.

The experimental window had a light blue background, with white text at the upper center displaying a numerical counter of completed items (see Figure 4). In the middle of the screen was a white square (approximately one third of the total screen size) where the stimuli were presented. For all participants, the 20-question self-report stimuli were presented first. The presentation order of the stimulus types was not varied, in order to avoid any effect of the avatar images on the results of the self-report. However, the items within the self-report measure were randomized by the software before presentation to each participant. Below the white stimulus presentation box, five clickable radio buttons offered the choices "rarely/never," "a little of the time," "some of the time," "a good part of the time," and "most of the time." Participants were asked to choose one of these options in response to the text question presented in the window. Following this selection, the "rate this item" button was to be clicked in order to record the response on the server and move on to the next item. If a participant attempted to submit a blank response, the system issued a warning and requested that they return to the question and provide an answer.
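The per-participant item randomization described above amounts to shuffling a copy of the item list for each session. The actual system did this in PHP; a minimal Python sketch of the same logic:

```python
import random

def presentation_order(items, seed=None):
    """Return a shuffled copy of the item list for one participant,
    leaving the master item list unmodified. A seed may be supplied
    for reproducibility during testing."""
    order = list(items)
    random.Random(seed).shuffle(order)
    return order
```

Shuffling a copy (rather than the master list) matters when one server process serves many concurrent participants from the same item pool.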

Following the presentation of the 20-question text-based instrument, a second page of instructions appeared describing the next segment of the test. Participants were told that this portion contained computer-generated facial images and that they were to rate the apparent emotion expressed in each image on a seven-point scale ranging from "sad" to "happy." The 12 avatar stimuli were then presented in random order, and participants rated the images by clicking radio buttons and a submission button in the same manner as in the first portion of the experiment (see Figure 5). Following the second stimulus group, a page appeared in the browser window with the question, "How do you feel right now?" and, below it, a 19-point Likert scale ranging from "Happiest I've ever felt" to "Saddest I've ever felt." Below this were four form fields in which the participant was asked to enter his or her age numerically; select gender ("male" or "female"); choose eye color ("brown," "blue," "green," or "other"); and choose hair color ("brown," "black," "blonde," or "other"). A final click on the "save results" button submitted all trial data to the server for processing, displayed a "thank you" message, and prompted participants to close their browser window for security.

Data files collected from the server were downloaded into a Microsoft Access database where they were sorted and subsequently exported to the SPSS statistical analysis package for processing. Additionally, the Microsoft Access database served as a backup for data in the event of a problem with the server or its files.

Results

Mood Measures

Means and standard deviations for the modified Zung scores and for the 19-point mood rating scale may be found in Table 1. The modified Zung scores were comparable to those originally reported by Zung (1965) for normal subjects. Examination of individual scores on the modified Zung measure revealed that all subjects in the present study were within the normal range as described in Zung (1965).

Similarly, examination of scores for the 19-point mood rating scale revealed all individual scores were below 10 (the center point of the scale), suggesting subjects reported their current mood as leaning toward the “happy” end of the scale rather than toward the “sad” end. These data are consistent with the modified Zung scores.

Comparisons between males and females on the above measures (see Table 2) revealed no significant differences between the two groups for either the Zung scores or the 19-point mood rating scores.

Pearson correlation coefficients were computed between the modified Zung scores and the mood ratings for the total subject group and separately for males and females. A significant correlation, r(16) = .59, p = .005 one-tail test, was found for the total group. Similar correlations, computed separately for males and females, revealed a significant result for females, r(6) = .72, p = .022 one-tail test, and a non-significant result for males, r(8) = .45, p = .098 one-tail test.
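The correlations reported here are standard Pearson product-moment coefficients. The analyses were run in SPSS; purely as an illustration of the statistic itself, r can be computed as follows:

```python
import math

def pearson_r(x, y):
    """Textbook Pearson product-moment correlation coefficient:
    covariance of x and y divided by the product of their
    standard deviations (illustration only; analyses used SPSS)."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)
```

The degrees of freedom reported with each r are n - 2, which is why the total group of 18 participants yields r(16), the 8 women yield r(6), and the 10 men yield r(8).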

Effects of Mood on Avatar Ratings

To assess the effects of mood on the perception of affect in the avatar images, Pearson correlations were computed between the avatar ratings and both the single-item mood score and the Zung score, for each image valence group separately and for the composite of all happy, sad, and neutral faces (see Table 3). The correlation between the composite score and the single mood rating was significant, r(16) = .417, p < .043 one-tail test, in the direction that the sadder the participant's mood, the happier the ratings of the avatars. A similar correlation between the Zung score and the composite score did not reach significance, nor did the correlations between the two mood scores and the three separate groups of avatar images.

Discussion

The results support the hypothesis that participants' mood states influence their perception of avatar mood. Participants who scored higher on the mood scale (indicating that they were "sadder") tended to rate the avatars as happier. This finding is interesting because the trend runs in the direction opposite to what was initially anticipated. The correlation between perceived avatar affect and mood ratings may be explained by the hypothesis of Westen et al. (1997) that individuals fight negative mood states and therefore actively seek information with a positive valence in an effort to regulate emotion. Memory recall also appears to be motivated by the need to regulate emotion; hence, positive associations to external stimuli are recalled with greater ease when one is in a sad mood (Westen et al., 1997).