4

Argument:

Digital music is not a genre but a technology, or group of technologies, used in conjunction with other musical technologies to produce music. This reiteration of “music” via digital technologies has been, subjectively, both positive and negative, leading to both innovation and standardisation; more importantly, it has re-imagined our concept of musical performance and, in doing so, morphed our understanding of musical progress.

The two separate meanings of performance, one referring to human skill and live interaction with an audience, the other a measure of the efficiency of audio technology, have to some extent merged.

My project, a multi-media survey analysing the influence of digital technologies on the general public's perception of musical performance, will hopefully act as a springboard for analysing the convergence of these terms. In a global culture of lip syncing, auto-tuning and even virtual popstars, it seems that little is still held sacred under the first definition. Yet strains of innovation have allowed a new form of musical progression to emerge from digital music, one where both the intricate designs of digital instruments and the mathematical realisation of patterns within a musical network yield a form of live improvisation where, as Chris Brown puts it, “ideally the body must stand free of interface, and the interaction is with the network of information that flows between the musician and the music”.

Slide Check time 2 mins

The human element of performance, for a composer like John Bischoff, who builds both his own hardware and software, is now implemented not on the stage but in the engineering of the hardware or the coding of the software.

Even though all commercial Digital Audio Workstations have had the same capacity to reproduce audio since they adopted 32-bit systems, marketing campaigns emphasise the quality of equipment over the skill of the mixer, and so erode the cultural perception of the digital performer. (In studies such as Meyer and Moran's, high-resolution audio has so far not been shown to make an audible difference.)

Slide

But beyond this there is also the sense of disembodiment embodied in the materials of digital music, the effect of software's plasticity and supposed ephemerality; Chris Brown calls it a “genie of a medium that continually changes shape”. He points out that the arbitrary relation of sound to mechanical function means that performers' bodily responses cannot easily adapt to the instrument (on a synth, for instance, the keys can be transposed, other instrumental 'voices' imposed, different combinations of sine waves used to approximate the tone of specific instruments, and so on).

Check time 3 mins

{This mind-body map between human and instrument becomes more difficult to form. Merleau-Ponty calls this phenomenon intercorporeality: the need for sensual configuration to establish and maintain an optimal grip on reality, refuting Descartes' mind-body distinction. Hubert Dreyfus argues that this renders “telepresence” fundamentally different from presence; it could therefore be argued that the presence of humans in some forms of digital music is rendered 'unreal' or disembodied. The example Dreyfus gives, the lack of embodiment a telepresent teacher suffers, since sensual vulnerability is not exposed and trust therefore not extended, is symptomatic of the problems sound-artists face sitting behind a monitor, with no obvious physical interaction for the audience to interpret. To Bischoff, however, the “essence” of his music is not revealed in any kind of “physical gestures”. As a blind man hears better, limiting the visual component concentrates attention on the sonic performance.}

Slide Check Time 4 mins

Clip of Hub if Time

Even in the performances of live electronic network bands such as the Hub, where the interaction of human responses and the network engenders the composition, the element of performance, translated through cyborg practices of both human encoding and human response to the system, cannot be visualised. In one performance, created live on the radio and named Hub Renga (after a Japanese parlour game in which players create poetry together on a specific theme), members of a poetry conference sent lines to the Hub via modem, each line containing a “power word” that triggered musical events.

As Bischoff (one of the founders of the Hub) points out, “the non-hierarchical structure of the network encourages multiplicity of viewpoints, and allows separate parts in the system to function in a variety of musical modes.” This strikes me as extremely post-structuralist, evoking Deleuze and Guattari's 'rhizome' from A Thousand Plateaus, Derrida's end of linear writing, and Bakhtin's heteroglossia.

Jim Horton, another member, says “the musical system can be thought of as multiple stations, each playing its own sub-composition, which receive and generate information relevant to the real-time improvisation. No one station has an overall score.” This multiplicity of authors within the confines of a digital system brings a new, and yet contradictory, meaning to the phrase interactive performance: the only interaction the players have is through the network; indeed, “the players can be viewed as extensions of the network” (Bischoff).
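Horton's description can be caricatured in a few lines of code. The following is a toy model only, assuming nothing about the Hub's actual software: the station names, motifs and transposition messages are all invented for illustration. Each station plays its own sub-composition and nudges its peers; no station holds an overall score.

```python
import random

class Station:
    """One node in a Hub-style network: it cycles through its own
    sub-composition and reacts to messages from its peers."""
    def __init__(self, name, motif):
        self.name = name
        self.motif = list(motif)   # this station's private sub-composition
        self.inbox = []            # transposition messages from peers
        self.played = []

    def step(self):
        # Play the next note of the local motif, shifted by whatever
        # information has flowed in from the rest of the network.
        offset = sum(self.inbox)
        self.inbox.clear()
        note = self.motif[len(self.played) % len(self.motif)] + offset
        self.played.append(note)
        return note

def perform(stations, steps, rng):
    """Run the ensemble: each step, every station plays, then sends a
    small transposition to one randomly chosen peer."""
    for _ in range(steps):
        for s in stations:
            note = s.step()
            peer = rng.choice([p for p in stations if p is not s])
            peer.inbox.append(note % 3 - 1)   # tiny interval, -1 to 1

rng = random.Random(0)  # seeded so each run of this sketch is repeatable
band = [Station("A", [60, 62, 64]), Station("B", [55, 57]), Station("C", [72])]
perform(band, 4, rng)
for s in band:
    print(s.name, s.played)
```

The point of the sketch is structural: each station's output depends on every other station's, yet no single object in the program contains the whole piece, which is exactly the non-hierarchy Bischoff and Horton describe.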

Slide Check Time 6 mins

One of the interesting things about The Hub is that even though the members program and build their own instruments, they describe themselves as “a band of interactive computer music systems” that perform “long distance”. Not system programmers, but systems.

The band gesture towards this in their comments: Tim Perkis suggests that “we are really trying to let the systematic nature of the electronic systems we are working with suggest what's going to happen musically”, and, as Scot Gresham-Lancaster puts it, “we're the spokes and The HUB is the centre”.

{I like to think of it as a set of glasses filled to different levels on a table in a room where the windows have been left open and there is a rainstorm outside. Participation effectively involves opening and closing different windows, changing the patterns of air flow and therefore the sound created by the system. Though the processes are improvisational, it is the interactions between different systems, designed by different musicians, that determine the song's structure, unique in each performance.}

Another example of this strategy is the lyrical composition of some of the songs on Kid A. By cutting up the lyrics and reordering the words, the emotional sentiment of the words is preserved but realigned by a wild and unpredictable system. Thom Yorke describes both this and the use of synthesisers on Kid A as examples of using technology to prevent any forced emotionality in the music.

In this way the machine has become cyborg: part musician, part instrument. Instruments have often been recognised as an extension of the musician's personality and voice, especially in blues and folk traditions. In this example of “musical progress”, however, the instrument undergoes a complete anthropomorphism, taking on the creative role of the musician itself, whilst the musician suffers the reverse, becoming the machine that engineers the musician.

Check time 7 mins

This emphasis on technology as agent is particularly interesting when we look at wider developments across the spectrum of the music industry. The digitalisation of audio has affected all musicians, whether or not they are performers. Its scope includes everything from the availability of production tools to P2P networks, YouTube and Spotify.

There has been an argument put forward, perhaps most earnestly by Dave Grohl in Sound City, a documentary that deeply reveres analogue recordings of live studio performances over easy-to-use digital software, that the availability of digital production products can be a tool or a crutch.

Since a crutch is itself a tool, and anyone who uses digital technology to do things impossible in analogue recording inevitably requires it to realise the work they envisage, the metaphor is inapt. I prefer to think of it as the difference between a bicycle and a mobility scooter. At the risk of being politically incorrect, I'll elaborate. The crutch, the mobility scooter, will enable anyone to get around with ease, whether or not they can stand up in the musical world. The bicycle is a tool which allows us to go further and faster than we could ever imagine going on legs alone, but if a tyre bursts or the chain malfunctions, we can still get off and competently get around without it, because you need to be able to walk to cycle (mostly). While some use digital technologies like the bicycle, with a background in music, even if that is just humming in tune, that enables them to function without them, others, because of the simplicity of sampling tools, have been able to ride along without coming to terms with the fact that they are crippled.

But even this distinction, between performance through machines and performance with machines, breaks down when computers become an autonomous part of the creative process.

Slide

The concepts of musical performance and technical performance are in collision, as humans find new ways of making robots and systems perform spontaneously and the musical proficiency of humans becomes less important for the realisation of music.

Music is a mathematical process, whether through the conscious use of applied mechanics to shape musical patterns within a digital system, or the subconscious improvisation of the musically proficient performer. The progress of music, from the digital end, involves the use of machine performance in a way that is spontaneous and idiomatic, not the standardised simulacrum of non-digital technologies, which have become, as Brown puts it, “in effect cultural ambassadors for the musical biases of western culture”.