Anatomy of Movement Project Proposal

Sheena Chandran, John Stoecker, Matt Wright

Feb 2, 2006

The vast majority of the music heard in our culture is considered some kind of art or expression, but a scientist can use musical elements to sonify data in a way that is analogous to using graphical elements to visualize data. These musical elements include pitch, volume, and multiple dimensions of timbre for each individual sound, as well as infinitely many ways to combine sounds to control rhythm, density, harmony, and many other percepts, all potentially varying as continuous functions of time. The choice of mapping from data parameters to sound parameters, which is essentially arbitrary, determines whether the result will have aesthetic merit, whether it will elegantly reveal structure in the original data, both, or neither. Sonification therefore combines qualities of art and of scientific rigor. Our project will explore this relationship by rendering aspects of motion data from a violin performance into a recognizable form through sonification.
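As a toy illustration of such a mapping (our own minimal sketch, not part of the proposal's pipeline), the code below maps an arbitrary one-dimensional data series onto the pitch of a sine tone and writes the result to a WAV file; the pitch range and normalization are arbitrary choices of exactly the kind discussed above:

```python
import numpy as np
from scipy.io import wavfile

def sonify(data, duration=5.0, sr=44100, f_lo=220.0, f_hi=880.0):
    """Map a 1-D data series to pitch: each value drives the instantaneous
    frequency of a sine tone. One simple, arbitrary mapping among many."""
    t = np.linspace(0.0, duration, int(sr * duration), endpoint=False)
    # Resample the data to one control value per audio sample.
    x = np.interp(t, np.linspace(0.0, duration, len(data)), np.asarray(data, float))
    # Normalize to [0, 1], then map linearly onto the pitch range.
    rng = x.max() - x.min()
    x = (x - x.min()) / (rng if rng > 0 else 1.0)
    freq = f_lo + x * (f_hi - f_lo)
    # Integrate instantaneous frequency to get a continuous phase (no clicks).
    phase = 2.0 * np.pi * np.cumsum(freq) / sr
    wavfile.write("sonification.wav", sr, (0.5 * np.sin(phase)).astype(np.float32))

# Toy data: a slow oscillation becomes an audible pitch glide.
sonify(np.sin(np.linspace(0, 8 * np.pi, 200)))
```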

Specifically, sonification will show how the timing of the motion of Barry Shiffman, a concert violinist, corresponds to the temporal evolution of aspects of the music while he plays an excerpt of J.S. Bach’s Chaconne (the fifth and final movement of Bach’s second Partita for solo violin, BWV 1004). We hope that a combination of sonification and visual display of motion and force-plate data from the Motion and Gait Analysis Lab will show how rhythmic expression manifests in specific parts of Mr. Shiffman’s body.

It is well known that an important element of musical expression is the manipulation of timing (see (Clarke 1999) for a review), both as continuous changes of tempo (i.e., speeding up and slowing down over time) and as expressive microtiming of individual notes (i.e., musical events falling before or after the appropriate “clock tick” of the current tempo) (Iyer, Bilmes et al. 1997). One thread of computer music research aims to analyze expressive timing directly from the audio signal (Schloss 1985; Bilmes 1993; Scheirer 1995); we will apply a combination of these methods and manual analysis of the recorded music to characterize how Mr. Shiffman’s performances use expressive timing.
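For example, once note onsets have been detected, per-note microtiming can be quantified as each onset’s deviation from the nearest tick of a tempo grid. A minimal sketch, with hypothetical onset times and a fixed (rather than tracked) tempo:

```python
import numpy as np

def timing_deviations(onsets, beat_period):
    """Deviation of each onset (seconds) from the nearest tick of a
    metronomic grid with the given beat period (seconds).
    Positive values mean the event is 'late', negative 'early'."""
    onsets = np.asarray(onsets, float)
    ticks = np.round(onsets / beat_period) * beat_period  # nearest grid point
    return onsets - ticks

# Hypothetical onsets around a 0.5 s beat (120 BPM): early, late, on-time, late.
print(timing_deviations([0.48, 1.03, 1.50, 2.06], beat_period=0.5))
# -> approximately [-0.02  0.03  0.    0.06]
```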

Many models of expressive musical timing, particularly of the continuous modulation of tempo in performances of Western classical music, are inspired by the motion of physical objects (e.g., the parabola traced by an object thrown into the air) (Repp 1992; Todd 1995; Widmer and Goebl 2004). These tempo curves typically follow the music’s phrase structure (Clarke 1999); we expect the phrase structure also to manifest in correlated motion of certain parts of Mr. Shiffman’s body.
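As a concrete illustration of this family of kinematic models (our own toy rendering, not any cited author’s exact formulation), tempo across a phrase can be shaped like a parabola, fastest mid-phrase and slowing toward the boundaries like a thrown object decelerating:

```python
import numpy as np

def parabolic_tempo(n_beats, t_mid=120.0, t_end=80.0):
    """Toy kinematic tempo curve: fastest mid-phrase, slowing toward
    both phrase boundaries, like a ball thrown into the air.
    Returns one tempo value (BPM) per beat position."""
    x = np.linspace(-1.0, 1.0, n_beats)    # normalized phrase position
    return t_mid - (t_mid - t_end) * x**2  # parabola peaking mid-phrase

print(parabolic_tempo(9))  # 80 -> 120 -> 80 BPM across a 9-beat phrase
```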

Much previous work on the analysis of motion capture data from musical performance focuses on quantifying the kinematics of expert musicianship. Examples include the effect of increased tempo on the timing of pianists’ upward preparatory motion before each note (Palmer 2005), the finding that movement amplitude contributes more than movement anticipation to pianists’ tendency to play louder when they play faster (Palmer 2005), and a comparison of novice and expert ‘cello performance seeking support for a model of musical skill as dynamic constraint satisfaction (Ueno, Furukawa et al. 2000). We can perform a similar analysis of the kinematics of the violinist’s right hand across examples of various bowing styles, as sketched below.
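A minimal sketch of what such a kinematic analysis might look like, assuming 3-D marker positions for the bowing hand sampled at a known frame rate (the data layout and values here are hypothetical):

```python
import numpy as np

def hand_kinematics(positions, fs):
    """Speed and acceleration magnitude of one motion-capture marker.
    positions: (n_frames, 3) array of x, y, z in meters; fs: frames/s."""
    vel = np.gradient(positions, 1.0 / fs, axis=0)  # m/s, per axis
    speed = np.linalg.norm(vel, axis=1)
    acc = np.gradient(vel, 1.0 / fs, axis=0)        # m/s^2, per axis
    return speed, np.linalg.norm(acc, axis=1)

# Hypothetical: 2 s of a marker oscillating along x, as in detache bowing.
fs = 120
t = np.arange(2 * fs) / fs
pos = np.column_stack([0.3 * np.sin(2 * np.pi * 2 * t),
                       np.zeros_like(t), np.zeros_like(t)])
speed, acc_mag = hand_kinematics(pos, fs)
print(speed.max())  # peak bow-hand speed (m/s), ~0.3 * 2*pi * 2 = 3.8
```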

Mr. Shiffman’s highly expressive body movements during performance, together with our own experience in music, lead us to a first prediction: the motion data will show distinctions when a piece is played with different emotional interpretations. Furthermore, we expect that playing the same piece with different emotional interpretations will elicit different uses of expressive timing, and in a more natural, less intellectualized way than directly asking the performer to vary the timing.

Second, we predict that the periodicity of the motion of various body parts will relate to the periodicity of different structural levels of the music (note, beat, measure, phrase, and section). Third, we predict that the greater a section’s deviation from metronomic timing (i.e., the greater the magnitude of the “expression”), the larger the non-musical body movement.

We will test these hypotheses using technical computing software such as Matlab. Signal-processing techniques such as the FFT (Fast Fourier Transform) applied to the motion and force-plate data will reveal the frequency content of movements in the ranges of roughly 0.1-1 Hz (phrase level) and 1-5 Hz (note level). Methods for note-onset detection, tempo tracking, and per-note deviation measurement were discussed above. When relations and correspondences are discovered, they will be sonified in a way that shows each of their relations to the piece; previous sonification work (e.g., Kapur, Tzanetakis et al. 2005) has already laid a framework for the sonification of movement.
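A minimal sketch of this band analysis (written in Python/NumPy here, though we may equally use Matlab; the test signal is hypothetical):

```python
import numpy as np
from scipy.signal import welch

def band_peaks(signal, fs, bands=((0.1, 1.0), (1.0, 5.0))):
    """Dominant frequency of a movement signal within each band of
    interest (phrase level ~0.1-1 Hz, note level ~1-5 Hz)."""
    f, pxx = welch(signal, fs=fs, nperseg=min(len(signal), 4096))
    peaks = {}
    for lo, hi in bands:
        sel = (f >= lo) & (f <= hi)
        peaks[(lo, hi)] = f[sel][np.argmax(pxx[sel])]
    return peaks

# Hypothetical marker signal: 0.25 Hz phrase-level sway plus 3 Hz note-level motion.
fs = 120
t = np.arange(60 * fs) / fs
sway = np.sin(2 * np.pi * 0.25 * t) + 0.3 * np.sin(2 * np.pi * 3.0 * t)
print(band_peaks(sway, fs))  # -> {(0.1, 1.0): ~0.25, (1.0, 5.0): ~3.0}
```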

We hope to have a solid understanding of the data and an initial evaluation of each hypothesis by the midterm. Data analysis will continue through February, when sonification will begin. By the end of the quarter, we will have musical clips that show our progress and at least one final musical piece that demonstrates each hypothesis.

Our final project will include a web page and a multimedia presentation using video and sound that shows how our sonification illustrates our research and explains the movement of the body during violin performance. Once we better understand the movement of the body in musical performance, we can begin to explain why seeing a violinist such as Barry Shiffman live is so much more powerful and aesthetically pleasing than listening to a recording.

References

Bilmes, J. (1993). Timing is of the Essence: Perceptual and Computational Techniques for Representing, Learning, and Reproducing Timing in Percussive Rhythm. Media Lab. Cambridge, MA, Massachusetts Institute of Technology.

Clarke, E. F. (1999). Rhythm and Timing in Music. The Psychology of Music. D. Deutsch. San Diego, Academic Press: 473-500.

Iyer, V., J. Bilmes, et al. (1997). A Novel Representation for Rhythmic Structure. International Computer Music Conference, Thessaloniki, Hellas, International Computer Music Association.

Kapur, A., G. Tzanetakis, et al. (2005). A Framework for Sonification of Vicon Motion Capture Data. 8th International Conference on Digital Audio Effects (DAFX-05), Madrid, Spain.

Palmer, C. (2005). "Time Course of Retrieval and Movement Preparation in Music Performance." Annals of the New York Academy of Sciences 1060: 360-367.

Repp, B. (1992). "A constraint on the expressive timing of a melodic gesture: Evidence from performance and aesthetic judgement." Music Perception 10: 221-243.

Scheirer, E. D. (1995). Extracting Expressive Performance Information from Recorded Music. Program in Media Arts and Sciences, School of Architecture and Planning. Cambridge, MA, Massachusetts Institute of Technology: 56.

Schloss, W. A. (1985). On the Automatic Transcription of Percussive Music: From Acoustic Signal to High-Level Analysis. Program in Hearing and Speech Sciences. Palo Alto, CA, Stanford University: 119.

Todd, N. P. M. (1995). "The kinematics of musical expression." Journal of the Acoustical Society of America 97(3): 1940-1949.

Ueno, K., K. Furukawa, et al. (2000). Motor Skill as Dynamic Constraint Satisfaction. Linköping Electronic Articles in Computer and Information Science. Linköping, Sweden, Linköping University Electronic Press.

Widmer, G. and W. Goebl (2004). "Computational Models of Expressive Music Performance: The State of the Art." Journal of New Music Research 33(3): 203-216.