Susan Davis

E85.2042

February 19, 2008

Reading Summary on “Emotional Expression in Music Performance:

Between the Performer’s Intention and the Listener’s Experience”

by Alf Gabrielsson and Patrick N. Juslin

The perception of emotional expression in music has been a hotly debated topic in the field for years. People argue about whether performers actually experience emotions as they make music, or whether musical expression is merely symbolic of emotions. Gabrielsson and Juslin’s (1996) paper focuses on the communication transaction between the performer and the listener, which has received limited attention in the debate. The study specifically investigates the performer’s intention to convey particular emotions and the listener’s ability to perceive the intended emotion accurately (p. 68).

Susanne Langer’s theory provides the foundation for this study of expression in music performance. Having rejected two other commonly held theories of emotion in music, the self-expression theory and the semantic theory (Åhlberg, 1994, p. 69), Langer favored the idea that music is “formulation and representation of emotions, moods, mental tensions and resolutions – ‘a logical picture’ of sentient, responsive life, a source of insight, not a plea for sympathy” (Langer, as cited in Åhlberg, 1994, p. 71). Gabrielsson and Juslin acknowledge her idea of music “mirroring the structure of emotions” as the focus of this experiment (p. 68). Their aim appears to be to demonstrate a relationship between emotional expression in music and the listener’s response.

Gabrielsson and Juslin (1996) report two studies in this paper that explore the same objectives. Study I included three performers, a flutist, a violinist, and a singer, all male professional musicians between 40 and 50 years old. The performers were given three melodies (A, B, and C), between 8 and 16 measures long, to record for an audience: (1) the theme from Charpentier’s Te Deum, (2) a Swedish folk tune, and (3) a melody composed specifically for the study. The melodies were notated for pitch and rhythm but carried no dynamic, tempo, or other expressive markings. Study II included six male professional guitar players between the ages of 25 and 45. The guitarists were all given the same melody to perform and interpret, the spiritual Nobody Knows (Melody D). As in the first study, only pitch and rhythm were notated (p. 72).

All performers (from Studies I and II) were directed to play the given melody so as to exhibit the emotional expressions “happy,” “sad,” “angry,” “fearful,” “tender,” “solemn,” and “no expression.” They were given creative license to perform each melody, keeping the pitches intact but otherwise varying “tempo, timing, dynamics, articulation, phrasing, vibrato, attack, and timbre” (Gabrielsson & Juslin, 1996, p. 72) in order to express the required emotion. The goal was to play each version of the melody (i.e., happy, sad, etc.) twice, with the two renditions as close in interpretation as possible. Performers were allowed to practice to achieve the desired result, and they were encouraged to play the melodies from memory. All performances were tape-recorded, stored in computer memory, and then used for the listening experiments in Studies I and II.

Study I included three listening experiments: Melody A was judged by seven music psychology students between the ages of 24 and 45 (five female, two male), Melody B by 14 music psychology students between the ages of 23 and 40 (eight female, six male), and Melody C by 35 musicians between the ages of 23 and 69 (14 female, 21 male). The experiments were conducted with all group members present at the same time, and the order of the recordings was randomized for judging. Study II included two listening experiments. In the first, Melody D was judged as a group by 13 musicology students between the ages of 19 and 47. In the second, Melody D was judged by both musically trained and untrained listeners between the ages of 21 and 52; these participants met individually and gave performance ratings on a computer program specially designed for the study. Gender was equally distributed in both experiments. All participants in Studies I and II were instructed to judge the performances they heard “with regard to their ‘happiness,’ ‘sadness,’ ‘anger,’ ‘tenderness,’ ‘expressiveness,’ ‘fear’ (only in Study II), and ‘solemnity’ (only in Study I)” (Gabrielsson & Juslin, 1996, p. 74). Ratings ranged from 0 to 10, with 0 indicating minimum correspondence to the given emotion and 10 indicating maximum correspondence.

The listening experiments revealed several interesting outcomes. First, the authors set aside the data from the singer because the trend in his data was similar to that of the violinist and the flutist but significantly less expressive. Within the remaining data, the rating for the intended emotion was, in most cases, significantly higher than the ratings for the other potential emotions. For example, the violinist’s “happy” interpretation received a mean rating of 6.1, while the mean ratings for “sad,” “solemn,” “angry,” “tender,” and “no expression” were 0.9, 2.3, 1.4, 0.7, and 0.7, respectively (Gabrielsson & Juslin, 1996, p. 76). Clearly, most listeners were able to recognize the violinist’s “happy” intention in that example. The pairs that proved problematic for the violinist, however, were “sad” and “tender,” and “angry” and “solemn.” The flutist’s version of “happy,” in contrast to the violinist’s, was often mistaken for “angry,” while “sad” and “tender” also posed interpretive problems. The listeners themselves acknowledged that the qualities “sad” and “tender” were difficult to distinguish (p. 75). The authors had anticipated this result based on the work of Ekman and Plutchik (p. 71). Krumhansl (1997) confirmed it in a later study: “judgments are most consistent for basic emotions, such as sad, fear, happy, and angry, and may not reliably correspond with more fine-grained distinctions within these categories” (p. 338).

In addition to these decoding observations, the authors examined specific qualities that may have affected the results, including tempo, timing, articulation and dynamics, and the shaping of individual notes. Although there were variances among the performers’ renditions, the tempos for “angry” and “happy” were generally the fastest, while “sad” and “tender” were the slowest, with “no expression” and “solemn” in between. The data for “fearful” in Study II had to be discounted because the electric guitarists’ interpretations were so disparate (Gabrielsson & Juslin, 1996, p. 77). To assess timing, Gabrielsson and Juslin looked specifically at deviation from nominal values for “measures, dotted patterns, and the end of the melody” (pp. 78-79). The largest overall timing variations for Melodies A-C arose in the “tender” and “sad” renditions, while the smallest occurred in the “no expression” and “solemn” versions; in Melody D the largest timing variations occurred for “fearful.” Several significant differences in dotted rhythmic patterns emerged among the emotions rendered; “happy,” for example, was generally played with much sharper dotting than the other emotions. A ritardando at the end of a melody was clearly audible in Melodies A-C, except in the “no expression” and “angry” versions, but was rarely used by the guitarists in Melody D (p. 80).

Both the violinist and the flutist used articulation and dynamics to express emotion: their “happy” and “angry” renditions tended to be staccato or ‘airy,’ and their “solemn” and “angry” versions were generally the loudest. The guitarists conveyed “happy” and “fearful” with staccato articulation and used very soft, almost inaudible tones to represent “fearful” (p. 81). The authors also examined amplitude envelopes and frequency spectra to get a better sense of the way each instrumentalist attacked and shaped notes. The guitarists in particular used attributes of their instrument to convey emotion in Melody D, employing string bending or intense vibrato in the “angry,” “sad,” and “tender” renditions (pp. 82-85). From these data, the authors were able to provide a general musical description of each emotion investigated.

Gabrielsson and Juslin (1996) drew four overarching conclusions from this study: (1) the performer’s intended expression affected all variables measured in the experiment (timing, dynamics, etc.), regardless of instrument, performer, or melody; (2) the performers were for the most part successful in conveying the prescribed emotions to the listeners; (3) the female listeners were more accurate than the male listeners in interpreting emotions, although the difference was not statistically significant; and (4) basic emotions like happiness and sadness are easier to convey than more subtle emotions like solemnity and tenderness (p. 87). The authors also recorded several interesting observations. Sadness and tenderness tended to group together in many, though not all, listeners’ ratings; the authors suggested that the two emotions may be distinct in principle, but that accurate encoding by the performer is necessary for accurate identification by the listener. They also observed that instrumentation might have affected encoding because of the limitations of certain instruments: anger was difficult to distinguish from happiness on the flute, while fearfulness proved problematic for the electric guitar. They indicated that instrumentation should be taken into account so that expressive tools such as vibrato, intonation, and timbre, which can influence musical performances, are not overlooked. Finally, they recognized many differences among the individual performers’ interpretations: some varied expression while trying to adhere to the notation, whereas others interpreted each melody very freely (pp. 87-88).

The paper concludes with suggestions for future investigation. Gabrielsson and Juslin propose several approaches: integrating structural expression and emotional expression in music, the “application of a modified lens model,” phenomenological methods for describing music performances, and the developmental (child-to-adult) aspect of decoding musical interpretations (p. 89). Even today, much about emotional expression in music performance remains to be understood. Hopefully, continued research on the brain will shed more light on the musical transactions between performers and listeners.

References

Åhlberg, L. (1994). Susanne Langer on representation and emotion in music. British Journal of Aesthetics, 34(1), 69-80.

Gabrielsson, A., & Juslin, P. N. (1996). Emotional expression in music performance: Between the performer’s intention and the listener’s experience. Psychology of Music, 24, 68-91.

Krumhansl, C. (1997). An exploratory study of musical emotions and psychophysiology. Canadian Journal of Experimental Psychology, 51(4), 336-352.