Relevant Literature: IJHCS paper, with links to bibliography file "greta"

This is based on abstracts from Web of Science. They omit quite a few of the usual suspects, and are based on literature reviews that centre on side issues such as personality vs. emotion vs. cognition, and emotion in HCI. Many of the obvious candidates, such as Nass and colleagues, are largely missing, since they're already reviewed in Greta's thesis. I've also not included literature on Emotional Speech Synthesis, which I intend to pinch largely from a recent PhD thesis by Marc Schroeder. Bold: read actual paper.

General literature on internet surveys

(Birnbaum 2004) gives an overview of methodological problems with web-based studies, including recruitment and drop-out rates. (Reips 2002; Reips 2002) are two complementary papers that summarise early, influential guidance on web-based experimentation; they cover technical issues as well as problems with dropout and achieving adequate response rates. Six common fears are addressed by (Gosling, Vazire et al. 2004) in a review of a very large sample of 510 studies and found to be largely unjustified. (Dominelli 2003) focuses on security and privacy issues, which are very important in health-related surveys. (Litaker 2003) provides step-by-step instructions for designing WWW-based surveys. (DeRouvray and Couper 2002) discuss several strategies for reducing "no opinion" answers, which need to be compared to our four-item scale. (Hewson, Laurent et al. 1996) is another relevant early paper that gives advice on reducing attrition and preventing hacking.

An early discussion of data validity and ethical issues can be found in (Smith and Leigh 1997). (Pittenger 2003) discusses ethical issues around web-based psychology research in great detail. (Varnhagen, Gushta et al. 2005) present guidelines for improving informed consent for web-based studies. Comparing web-based to paper-and-pencil informed consent, they conclude that the two are roughly equivalent.

(Skitka and Sargis 2006) is a potentially very relevant study that analyses APA publications from 2003 and 2004 which report web experiments. The meta-analysis raises methodological issues, to be discussed further in the paper. (Buchanan and Smith 1999) argue that any web-based instrument needs to be stringently validated. This includes confirmatory factor analysis to check whether the underlying factor structures of the web sample and the paper-and-pencil sample are the same (Buchanan, Ali et al. 2005). (Whitehead 2007) discusses issues of sampling bias and data validity based on a proper medical meta-analysis of the literature, while (Duffy 2002) discusses methodological issues that are relevant to web surveys in nursing.
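As an aside on the factor-structure checks above: a quick, simpler complement to full confirmatory factor analysis is Tucker's congruence coefficient, which quantifies how similar one factor's loadings are across two samples (e.g. web vs. paper-and-pencil). A minimal NumPy sketch; the loading values are invented for illustration:

```python
import numpy as np

def tucker_phi(loadings_a, loadings_b):
    """Tucker's congruence coefficient between two factor-loading vectors.

    Values near 1.0 indicate the factor is essentially the same in the
    two samples; values well below ~0.9 suggest the structures differ.
    """
    a = np.asarray(loadings_a, dtype=float)
    b = np.asarray(loadings_b, dtype=float)
    return float(np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2)))

# Invented loadings for one factor under two administration modes:
web   = [0.71, 0.65, 0.80, 0.55, 0.62]
paper = [0.68, 0.70, 0.77, 0.58, 0.60]
print(round(tucker_phi(web, paper), 3))  # close to 1.0 -> similar structure
```

This only compares loadings factor by factor; the cited papers use full CFA, which additionally tests model fit.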

Response bias

(Cook, Heath et al. 2000) is an often-cited paper that relates response rates to representativeness and discusses sampling bias issues. (Ekman, Dickman et al. 2006; Ekman and Litton 2007) look at using internet-based questionnaires for epidemiological research. Web-based questionnaires are more likely to be returned by previous non-responders than paper-based ones, but the same biases apply to responders in both conditions: responders are more likely to be better educated and less likely to smoke. (Reimers 2007) presents one such very large internet study.

(O'Neil and Penrod 2001) discuss the effect that participating in a lottery and having to give one's email address has on both drop-out rates and outcomes of an experiment (covering jury decisions). (Huang 2006) also considered varying degrees of anonymity, but doesn't summarise results in the abstract. (Cronk and West 2002) found that take-home web questionnaires had significantly lower completion rates than paper-based or in-class web-based versions. (Sax, Gilmartin et al. 2003) also find that response rates are affected by mode of administration, but don't say how internet-based surveys scored. Personalised invitations and high sender power increase response rates (Joinson and Reips 2007).
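The completion-rate comparisons above (e.g. take-home vs. in-class administration in Cronk and West 2002) amount to testing whether the proportion of completed vs. abandoned questionnaires differs across modes. A minimal sketch of such a check with SciPy's chi-square test of independence; all counts are invented:

```python
from scipy.stats import chi2_contingency

# Invented completed/abandoned counts for three administration modes:
#            completed  abandoned
counts = [
    [180, 20],   # in-class, paper-based
    [175, 25],   # in-class, web-based
    [120, 80],   # take-home, web-based
]
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.3g}")
# A small p suggests completion rates depend on administration mode.
```

With counts this lopsided the test is decisive; in practice one would also report effect size and per-mode rates, not just the p-value.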

Users who have decided to complete a survey will generally also provide simple demographic data (Basi 1999). (O'Neil, Penrod et al. 2003) found that badly designed tables, early requests for personal information, and imposing additional informed consent procedures led to early drop-out. Early requests for personal information and additional consent procedures also affected the demographics of the sample, which could compromise external validity. (Meyerson and Tryon 2003) present a detailed demographic analysis of the equivalence of web-based and paper-and-pencil versions of the same questionnaire. They also discuss demographic biases.

The only between-subject variables we designed for were based on demographic characteristics: all participants used the same ordinal scale to rate the same stimuli. This prevents distortions like those discussed by (Birnbaum 1999). Biases due to administration method may only reveal themselves when subsamples are considered, e.g. when splitting the sample by gender (Epstein, Klinkenberg et al. 2001).

Unrelated to the internet, but highly relevant in our context, (Zelenski, Rusting et al. 2003) found that the time and date of students’ participation in experiments was linked to personality, thus potentially introducing bias.

Personality testing over the internet

The WWW can be very helpful in developing new personality questionnaires, such as the self-trust questionnaire (Pasveer and Ellard 1998). (Buchanan and Smith 1999) use the web to validate a self-monitoring questionnaire. Web users are also more likely to disclose negative, depressive feelings (Davis 1999; Joinson 1999). In order to maintain these advantages, non-disclosure of any sort of identifying information is useful (Joinson, Woodley et al. 2007). (Salgado and Moscoso 2003) found that web-based versions of the Big Five questionnaire were equivalent to paper-and-pencil versions; subjects preferred the web-based version. Fortunately, our IPIP version of the Big Five was developed by the highly rigorous (see above) Buchanan and colleagues (Buchanan, Johnson et al. 2005). The IPIP also generalises easily to other types of personality inventory (Ashton, Lee et al. 2007). A highly relevant sales pitch for the IPIP is (Goldberg, Johnson et al. 2006). (Ferrando and Lorenzo-Seva 2005) discuss the factor-analytic validation procedure for personality questionnaires, but don't say what they found in the abstract. (Chuah, Drasgow et al. 2006) stringently compared three methods for administering personality questionnaires and found no differences. (Pettit 2002) examined the equivalence of paper-and-pencil and web versions of four standard personality questionnaires. The data are equivalent, but paper and pencil leads to more unscorable responses.
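For reference, scoring an IPIP-style Likert scale is mechanically simple: sum the item responses, flipping reverse-keyed items. A minimal sketch; the item numbering and keying below are invented for illustration, not the actual IPIP keys:

```python
def score_scale(responses, reverse_keyed, n_points=5):
    """Sum Likert responses (1..n_points), flipping reverse-keyed items.

    responses: dict mapping item id -> raw response (1..n_points)
    reverse_keyed: set of item ids scored in the opposite direction
    """
    total = 0
    for item, raw in responses.items():
        total += (n_points + 1 - raw) if item in reverse_keyed else raw
    return total

# Invented four-item mini-scale; items 2 and 4 are reverse-keyed:
answers = {1: 4, 2: 2, 3: 5, 4: 1}
print(score_scale(answers, reverse_keyed={2, 4}))  # 4 + 4 + 5 + 5 = 18
```

One practical upside of web administration is exactly this: scoring happens automatically, which explains the "more unscorable responses" finding for paper and pencil (Pettit 2002).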

(Allik and McCrae 2004) summarise cross-cultural differences in personality. This could be a useful paper for lumping Europeans and Americans together …

Emotion

Describing Emotions. (Cowie and Cornelius 2003) critically discuss descriptive categories for emotions. The review is aimed at the speech community. (Sabini and Silver 2005) describe the pitfalls of linguistic classifications using the example of envy, embarrassment, and regret. Are these applicable to happiness and sadness, too? Electrophysiological information is very useful in obtaining "gold standard" data on users' mood while interacting with a piece of software (Bamidis, Papadelis et al. 2004), such as smiling/frowning (Partala and Surakka 2004) or skin conductance and heart rate (Prendinger, Dohi et al. 2004). (Peter and Herbon 2006) propose a general taxonomy for HCI research.

Inducing mood and emotion in users. (Gerrardshesse, Spies et al. 1994) compare five procedures for the induction of elation and depression; (Westermann, Spies et al. 1996) look at 11 procedures. They find that the film/story procedure works best, that negative states are easier to induce than positive ones, and that explicit instructions help. It is possible to induce negative mood via the WWW; (Goritz and Moser 2006) discuss several successful approaches. Positive mood is more difficult, but (Goritz 2007) discusses some approaches that might be useful for lifting mood. Take-home message: it can be done.

Mood and personality: (Rusting 1998) is extremely relevant: this paper discusses how mood and personality affect the cognitive processing of emotion-congruent information. One of the three approaches discussed is the mediation approach, which looks at the way in which personality traits mediate the influence of mood! (Rusting and Larsen 1998) show that extroverts and neurotics are differently susceptible to positive and negative information, as predicted by Jeffrey Gray's model of personality (see also (Gomez and Gomez 2002)). However, contra Gray and Newman, and pro Eysenck, there is no interaction between a person's position on the extraversion scale and their position on the neuroticism scale (Rusting and Larsen 1997; Gomez, Gomez et al. 2002).

The traits of extraversion and neuroticism include stable cognitive structures that can bias judgment in affect-congruent (positive vs. negative) directions independent of current mood (Zelenski and Larsen 2002).

Current mood mediates the effect of personality on how people regulate their emotions. The key trait that affects emotion regulation is Neuroticism, according to (Kokkonen and Pulkkinen 2001). (Tamir and Robinson 2004) show that highly neurotic people are faster to make evaluations when in a bad mood than in a neutral mood. This is reversed for low neurotics. High neurotics benefit (i.e. are more likely to be in a good mood) when they are able to identify threats well, but low neurotics do not (Tamir, Robinson et al. 2006). (Lischetzke and Eid 2006) suggest that extraverts are more likely to be in a persistent pleasant mood because of self-regulation inherent to the trait.

A slight problem for our study is that most of the studies connecting personality traits, emotion, and information processing seem to have focused on extraversion and neuroticism, whereas we chose agreeableness and neuroticism. Extraversion is linked to positive affect, neuroticism to negative affect. At least some of this may be the effect of trait-congruent behaviour, though (McNiel and Fleeson 2006). A possible source of these differences could be cognitive evaluation (Uziel 2006).

However, we can say that we are looking at a potentially neglected, relevant trait here. Agreeableness has been linked to emotion regulation (Tobin, Graziano et al. 2000), as have extraversion and neuroticism, as seen above. (Haas, Omura et al. 2007) needs to be checked out. Agreeableness is also a factor in models that predict people's ability to cope with stressful situations (such as interface malfunctions!) (Obrien and DeLongis 1996). In persons who are highly agreeable, blame accessibility (as measured by a word-rating experiment) is dissociated from overall anger, but not in people who tend to be disagreeable (Meier and Robinson 2004). (Ashton and Lee 2001) discuss how agreeableness and emotional stability (note: six-factor solution!) relate to pro- vs. antisocial behaviour. (Robinson, Meier et al. 2005) found that "threat categorisation tendencies psychologically protect or burden the individual, depending on the levels of agreeableness". Will need to read the paper to find out what that means.

Hedonic preference: The study of (Gong 2007) reflects our finding that happy is better than sad at all times. In this study, talking-head agents with happy vs. sad expressions recommended novels. Happy agents made users significantly more likely to rate reviews positively and to rate their user experience more highly.

Emotion in HCI: It is important that embodied agents show empathic emotion, not self-oriented emotion (Brave, Nass et al. 2005). What does that mean for speech? (Gratch and Marsella 2005) give a critical overview of results from emotional psychology that are relevant for the design of such agents.

Evidence why it’s beneficial to adapt to users’ mood:

  • Agents that help users manage their frustration with computers help users stick with a piece of software even though it's difficult to use (Klein, Moon et al. 2002; Oatley 2004). This can be as simple as providing appropriate messages through a synthesiser (Partala and Surakka 2004). However, embodied agents may be more effective than non-embodied ones, and female agents may be more effective than male ones (Hone 2006). (Prendinger, Becker et al. 2006) use neurophysiological measures to assess the effectiveness of frustration-reducing agents. The system for converting neurophysiological measures to emotional states is discussed in (Prendinger, Mori et al. 2005).
  • A more complex relevant emotion is the feeling of being socially supported and cared for (Lee, Nass et al. 2007).


Evidence against:

(Lindgaard 2004) promises to be a nice wee rant.

No abstract, but potentially interesting:

(Cassell and Bickmore 2000; Cockton 2004)

Other interesting stuff

Mood influences noise annoyance, which is also linked to personality (refs will be in the paper; (Vastfjall 2002)). Sound quality judgements are linked to mood (Vastfjall 2004). Personality affects the speech rate that participants prefer (BAS/BIS scale; (Kallinen and Ravaja 2004)). (Kallinen and Ravaja 2005) report significant differences between younger and older participants in rating speech rates. They look at both subjective ratings and electrophysiological measurements. They also looked at the effect of speaker vs. headphone presentation on arousal (Kallinen and Ravaja 2007) and found clear differences.

Allik, J. and R. R. McCrae (2004). "Toward a geography of personality traits - Patterns of profiles across 36 cultures." Journal of Cross-Cultural Psychology 35(1): 13-28.

Ashton, M. C. and K. Lee (2001). "A theoretical basis for the major dimensions of personality." European Journal of Personality 15(5): 327-353.

Ashton, M. C., K. Lee, et al. (2007). "The IPIP-HEXACO scales: An alternative, public-domain measure of the personality constructs in the HEXACO model." Personality and Individual Differences 42(8): 1515-1526.

Bamidis, P. D., C. Papadelis, et al. (2004). "Affective computing in the era of contemporary neurophysiology and health informatics." Interacting with Computers 16(4): 715-721.

Basi, R. K. (1999). "WWW response rates to socio-demographic items." Journal of the Market Research Society 41(4): 397-401.

Birnbaum, M. H. (1999). "How to show that 9 > 221: Collect judgments in a between-subjects design." Psychological Methods 4(3): 243-249.

Birnbaum, M. H. (2004). "Human research and data collection via the Internet." Annual Review of Psychology 55: 803-832.

Brave, S., C. Nass, et al. (2005). "Computers that care: investigating the effects of orientation of emotion exhibited by an embodied computer agent." International Journal of Human-Computer Studies 62(2): 161-178.

Buchanan, T., T. Ali, et al. (2005). "Nonequivalence of on-line and paper-and-pencil psychological tests: The case of the prospective memory questionnaire." Behavior Research Methods 37(1): 148-154.

Buchanan, T., J. A. Johnson, et al. (2005). "Implementing a five-factor personality inventory for use on the Internet." European Journal of Psychological Assessment 21(2): 115-127.

Buchanan, T. and J. L. Smith (1999). "Research on the Internet: Validation of a World-Wide Web mediated personality scale." Behavior Research Methods Instruments & Computers 31(4): 565-571.

Buchanan, T. and J. L. Smith (1999). "Using the Internet for psychological research: Personality testing on the World Wide Web." British Journal of Psychology 90: 125-144.

Cassell, J. and T. Bickmore (2000). "External manifestations of trustworthiness in the interface." Communications of the ACM 43(12): 50-56.

Chuah, S. C., F. Drasgow, et al. (2006). "Personality assessment: Does the medium matter? No." Journal of Research in Personality 40(4): 359-376.

Cockton, G. (2004). "Doing to be: Multiple routes to affective interaction." Interacting with Computers 16(4): 683-691.

Cook, C., F. Heath, et al. (2000). "A meta-analysis of response rates in Web- or internet-based surveys." Educational and Psychological Measurement 60(6): 821-836.

Cowie, R. and R. R. Cornelius (2003). "Describing the emotional states that are expressed in speech." Speech Communication 40(1-2): 5-32.

Cronk, B. C. and J. L. West (2002). "Personality research on the Internet: A comparison of Web-based and traditional instruments in take-home and in-class settings." Behavior Research Methods Instruments & Computers 34(2): 177-180.

Davis, R. N. (1999). "Web-based administration of a personality questionnaire: Comparison with traditional methods." Behavior Research Methods Instruments & Computers 31(4): 572-577.

DeRouvray, C. and M. P. Couper (2002). "Designing a strategy for reducing "no opinion" responses in Web-based surveys." Social Science Computer Review 20(1): 3-9.

Dominelli, A. (2003). "Web surveys - Benefits and considerations." Clinical Research and Regulatory Affairs 20(4): 409-416.

Duffy, M. E. (2002). "Methodological issues in Web-based research." Journal of Nursing Scholarship 34(1): 83-88.

Ekman, A., P. W. Dickman, et al. (2006). "Feasibility of using web-based questionnaires in large population-based epidemiological studies." European Journal of Epidemiology 21(2): 103-111.

Ekman, A. and J. E. Litton (2007). "New times, new needs; e-epidemiology." European Journal of Epidemiology 22(5): 285-292.

Epstein, J., W. D. Klinkenberg, et al. (2001). "Insuring sample equivalence across internet and paper-and-pencil assessments." Computers in Human Behavior 17(3): 339-346.

Ferrando, P. J. and U. Lorenzo-Seva (2005). "IRT-related factor analytic procedures for testing the equivalence of paper-and-pencil and Internet-administered questionnaires." Psychological Methods 10(2): 193-205.

Gerrardshesse, A., K. Spies, et al. (1994). "Experimental Inductions of Emotional States and Their Effectiveness - a Review." British Journal of Psychology 85: 55-78.

Goldberg, L. R., J. A. Johnson, et al. (2006). "The international personality item pool and the future of public-domain personality measures." Journal of Research in Personality 40(1): 84-96.

Gomez, A. and R. Gomez (2002). "Personality traits of the behavioural approach and inhibition systems: associations with processing of emotional." Personality and Individual Differences 32(8): 1299-1316.

Gomez, R., A. Gomez, et al. (2002). "Neuroticism and extraversion as predictors of negative and positive emotional information processing: Comparing Eysenck's, Gray's, and Newman's theories." European Journal of Personality 16(5): 333-350.

Gong, L. (2007). "Is happy better than sad even if they are both non-adaptive? Effects of emotional expressions of talking-head interface agents." International Journal of Human-Computer Studies 65(3): 183-191.

Goritz, A. S. (2007). "The induction of mood via the WWW." Motivation and Emotion 31(1): 35-47.

Goritz, A. S. and K. Moser (2006). "Web-based mood induction." Cognition & Emotion 20(6): 887-896.

Gosling, S. D., S. Vazire, et al. (2004). "Should we trust web-based studies? A comparative analysis of six preconceptions about Internet questionnaires." American Psychologist 59(2): 93-104.

Gratch, J. and S. Marsella (2005). "Lessons from emotion psychology for the design of lifelike characters." Applied Artificial Intelligence 19(3-4): 215-233.

Haas, B. W., K. Omura, et al. (2007). "Is automatic emotion regulation associated with agreeableness? A perspective using a social neuroscience approach." Psychological Science 18(2): 130-132.

Hewson, C. M., D. Laurent, et al. (1996). "Proper methodologies for psychological and sociological studies conducted via the Internet." Behavior Research Methods Instruments & Computers 28(2): 186-191.

Hone, K. (2006). "Empathic agents to reduce user frustration: The effects of varying agent characteristics." Interacting with Computers 18(2): 227-245.

Huang, H. M. (2006). "Do print and Web surveys provide the same results?" Computers in Human Behavior 22(3): 334-350.