Diakopoulos 1

Remix Culture: Mixing Up Authorship

Nicholas Diakopoulos

College of Computing / Georgia Institute of Technology


Introduction

Spurred by the rise and spread of digital computing technology, the culture of remixing existing ideas and media artifacts is now reaching a fervent reification in modern cyberspace. From its beginnings as a term used to describe mixing different versions of multi-track music recordings in the 1970s, “remix” has broadened to include the mixing of other types of media such as images, video, literary text, and game assets, and even of tangible items such as cars and clothing[1]. But when everyone is mixing ideas and media, who is the author or creator of the final product? The remix trend, reflected in the contemporary zeitgeist, born out of philosophical ideas of modernism and post-modernism, and facilitated by digital technology, represents a shift of authority in authorship away from the auteur: a democratization of media production and authorship. This paper will explore the evolution of the author in light of the remix trend. A range of diverse remix artifacts will be examined through the lens of hypertext theory in order to shed light on changing notions of the author, authority, and intertextuality.

Remix ABCs

Before delving into how the author relates to remix culture, it is first essential to understand more fully what “remix” means. Manovich relates a train station metaphor for remix in which information or media is a train and each receiver of information is a train station (Manovich 2005). At the station, information gets mixed with other information, and the train is reloaded and sent on to a new station. But the network, as a palpable structure in electronic discourse, demands that we update this metaphor: airports, which have many paths leading to and from them, take the place of train stations. Remix can then be seen as the process of traversing a path in a synchronic network of media; of navigating a hypermedia structure.

When we speak of “remix”, a core issue to understand is: what are we remixing? As remix culture expands into applications as diverse as car customization[2] and collaborative book writing[3], it becomes not only the remixing of media, but also of tangible artifacts and ideas. In Writing Space Jay Bolter notes that since ancient times philosophers have believed that thinking and writing were inseparable. The mind can be thought of as a writing surface, and the act of thinking entails imprinting on that surface in the language of thought (Bolter 2001). Taking this philosophical notion of thought, then, remix can apply just as easily to ideas as to media. There is a subtlety in distinguishing between remixing ideas and remixing media, though. When we refer to “remix media” it implies that the remixer started with concrete instantiations of media, which were then segmented and recombined. On the other hand, “remix ideas” may involve one or more people combining ideas gleaned from different sources (i.e. interpretations of media), which are then potentially instantiated in media. Collaborative authoring is typically used to describe the process of many people combining their ideas in a concrete media instantiation. These different flavors of remix are depicted in Figure 1.

As delineated in The Language of New Media, one of Lev Manovich’s central principles of new media is that of modularity. Modularity implies the treatment of new media as a collection of discrete samples, which can be combined into larger objects (Manovich 2001). Though the modularity principle of new media affords remixability more readily than traditional static media does, remix applies equally well to new media or old (e.g. photo collage). There may, however, be more hurdles to remixing traditional media if it is not digitized. These constraints are removed through the digital representation of new media and the computer’s equal treatment of media assets. Constraints on remixability also fluctuate within the digital domain according to the underlying nature of the media. For example, remixing a music sequence might entail considering tempo and rhythm, whereas remixing video may involve maintaining continuity.

Figure 1. Graph representation of different modes of remix as they relate to people and media elements.

Origins of Remix

The tradition of remix is actually quite old and dates back to the oral cultures of the ancient Mediterranean. In Orality and Literacy Walter Ong describes how the ancient rhapsodes used to weave their stories by putting memorized snippets of stories together in a formulaic way to suit the demands of the audience (Ong 2002). In this way the oral poet “wrote” directly to the minds of the audience by mixing bits and pieces of culturally significant stories together in real-time (Bolter 2001).

As the tradition of orality largely lost out to that of written culture, so too did the notion of remixing material for different audiences lose out in the fixed domain of print. It took a long time for the concept of mixing ideas in orality to reach the visual (print) medium; this has numerous historical motivations, but may in part be because static printed media does not afford remix as easily as aural media does (Bolter 2001). Remix was “discovered” with fresh vigor by modernist artists such as the dada and surrealist collagists of the early 20th century. Dadaist collagists such as Max Ernst were noted for their “unconventional use of familiar elements” (Adamowicz 98). This was seen as a way of breaking with traditional mimetic aesthetics and exploring the modern aesthetic of juxtaposition.

The collagists thus began the modern trend of remix media, which was extended to music in 1972 by DJ Tom Moulton, who is said to have produced the first disco remixes (Manovich 2002). The advent of technology for the digital manipulation of sound in the early 1980s (e.g. synthesizers, sampling, and looping), of video in the late 1980s (e.g. Avid non-linear editing), and of general purpose digital media manipulation tools such as Photoshop has largely contributed to lowering barriers for people to remix existing media with the computer. In addition, the networked culture that has arisen as part of the internet allows personally collected/created media to flow freely between people, thus decreasing barriers to remix even further. As the digital medium affords mixing media, not just ideas, we’ve come full circle from the days of orality. Perhaps it is the case that technology is finally catching up to society’s and culture’s needs, or in fact that through its affordances technology is determining what culture does: remix (Williams 1974).

As remix becomes more widespread and easier due to technology, it is serving to put modern and post-modern philosophical ideals into the hands of the amateur. Notions of self-reflexiveness, juxtaposition, and montage, which draw on the modernist art ideals developed in the early 20th century, are somewhat inherent to mixing potentially disparate media material (Lunn 1982). The post-modern aesthetic of multiple perspectives is also innate to remix insofar as remixes can be seen as different perspectives on an existing trajectory of media. However, as people start seeing these multiple perspectives in media, it may at the same time undermine the potentially unitary perspective of the author.

Traditional Notions of Author

There are essentially two competing conceptions of the author: the author as lone creative genius, and the author as collaborator. In late medieval times the prestige of the individual began to grow. This continued into the Enlightenment and through the romantic period, eventually leading to notions of the designer/author as “creative genius” (Barthes 1978, Fallman 2003). This romantic instantiation of authorship eventually found itself applied to film in the auteur theory developed in the 1950s by Francois Truffaut and Andre Bazin. The auteur theory applies to the corpus of work of a director and considers that corpus as a reflection of the personal vision and preoccupations of that director[4]. The major concern with the romantic notion of authorship is that it “exalts the idea of individual effort to such a degree that it often fails to recognize, or even suppresses, the fact that artists and writers work collaboratively with texts created by others” (Landow 1997). Barthes makes a similar criticism of romantic authorship in his essay The Death of the Author, in which he notes that literature too is overwhelmingly centered on the author, his person, history, tastes, and passions (Barthes 1978).

The alternate conception of the author is as a collaborator in a system of authors working together. This paradigm of authorship has in fact been the norm throughout history. Think of the myriad traditional productions that rely on the creative input of multiple people: the orchestra, film production, architecture, etc. (Manovich 2002). This notion is reflected in Barthes’ argument that a text does not release a single meaning, the “message” of the author, but is rather a “tissue of citations” born of a multitude of sources in culture (Barthes 1978). In this light, the author is simply a collaborator with other writers, citing them and reworking their ideas.

If collaborative authorship is so dominant in production and writing, why does the romantic notion of authorship exist at all? There are various reasons. Manovich notes that in modernity it is important to brand collaboratively authored media because recognizability is so important for marketing. Branding thus transforms the collaborative view into the romantic view for capitalistic purposes. In the case of auteur theory, the rise of the romantic notion of the author is likely a response by frustrated artists fighting for the credibility of the film director at a time when film was not yet recognized as high art. Auteur theory can in some sense be seen as a last stand against the rising tide of post-structuralism, which at roughly the same time in history was placing emphasis on the reader of a text rather than the author.

The Rise of Reader-Response Theory

Interest in the reader’s part of the author-text-reader triumvirate begins with Louise Rosenblatt and Jean-Paul Sartre in the 1930s. The conception of the reader as deserving of critical attention then spread to post-structuralism and semiotics by the 1970s (Douglas 2000). By the late 70s Barthes argued that “…the true locus of writing is reading” and that “…the reader is the very space in which are inscribed, without any being lost, all the citations a writing consists of…” (Barthes 1978).

The post-structuralist notions of stressing the role of the receiver as a maker of reality bled into literary criticism in the form of reader-response theory. Reader-response theory considers the myriad of ways that different archetypical hypothetical readers may perceive a text. Some examples are the “intended” reader, someone reading the text in the context in which it arose, and the “informed” or “ideal” reader who has developed the requisite linguistic, semantic, and literary competence needed to understand the text (Rabinowitz). These archetypes are meant as critical lenses through which different interpretations arise.

In contemporary semiotics we are used to the notion of “holes” in a text, which the reader fills by making assumptions and inferring causes and effects (Douglas 2000). The privileging of the reader’s interpretation when considering a text comes, however, at the expense of the author. Bolter notes that in the late age of print, tensions between the authority of the author and the empowerment of the reader have become part and parcel of the writing space (Bolter 2001).

The key point here is that post-structuralism, semiotics, and reader-response theory have all been whittling down barriers to the widespread adoption of remix culture for decades. By placing the focus of the textual experience on the reader, these theories empower the reader to move beyond reading to actual text production. A microcosm of this evolution of the reader can be seen in Fiske’s categories for cultural production: semiotic production, enunciative production, and textual production (Fiske 1992, Shaw 2005). Semiotic production corresponds to the notion of reader-response, in which the reader of a media item produces ideas or interpretations. This level of reader involvement might be considered an example of remix ideas. Enunciative production is when people start articulating meanings to others concerning their interpretations. Finally, textual production corresponds to remix media, in which cultural products act as the raw materials in the production of new cultural products. But making the jump from enunciative production to textual production requires technical and/or artistic ability (Shaw 2005). Thus, even though reader-response theory elevates the importance of the reader, it is not until the reader has acquired the requisite skills to work with the media that she too can become a textual producer: a remixer. As the barriers to using authoring tools fall to the level of the average computer user’s knowledge: BANG! Everyone is suddenly a remixer.

Hypermedia and Remix

In some sense remix media has much in common with hypermedia, which has developed theoretically and practically on its own over the past four decades. The basic notion that ties the two concepts together is that remix media can be conceived of as a set of links to the original media which have been reordered or otherwise re-edited. Hypermedia consists of a network of potential paths that a reader may take, possibly with a default path that defines a linear trajectory through the network. Remix media is essentially a reworking of the trajectory through a collection of media, which may also involve adding material that wasn’t present in the original trajectory.
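This link-reordering view of remix can be illustrated with a minimal computational sketch. The model below is purely illustrative and not drawn from any cited system: media elements are nodes in a collection, a “text” is a trajectory (an ordered list of links) through them, and a remix is simply a new trajectory that reorders the links and may splice in new material.

```python
# Illustrative model: a work as a trajectory through a media collection.
media = {
    "a": "opening shot",
    "b": "interview clip",
    "c": "archival footage",
    "d": "closing credits",
}

# The original work defines a default linear trajectory through the network.
original = ["a", "b", "c", "d"]

# A remix reorders the links and may add material absent from the original.
media["e"] = "new voice-over"   # element added by the remixer
remix = ["c", "a", "e", "b"]    # reworked trajectory

def render(trajectory, collection):
    """Realize a trajectory of links as a linear 'text'."""
    return " / ".join(collection[key] for key in trajectory)

print(render(original, media))
print(render(remix, media))
```

On this sketch, the original and the remix are just two traversals of the same underlying network, which is the sense in which a remix “co-authors” rather than replaces the source material.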

To some extent hypermedia has also investigated the changing relationship between author and reader which we are now considering in light of the remix trend. Through his experience building a hypermedia site, The Dickens Web, Landow argues that, “…hypertext has no authors in the conventional sense. … hypertext as a writing medium metamorphoses the author into an editor or developer. Hypermedia, like cinema and video or opera, is a team production (Landow 1997).” Thus authoring in hypermedia can be a very collaborative process, not only with other writers whose text may be in the network, but also with the actively interpreting reader.

The conception of the reader in interactive hypermedia takes the reader beyond that of passive interpreter in reader-response theory to that of co-author. Interactivity allows the reader of a hypertext to choose a path through the network of interconnected media elements, thus generating a personalized work simply through the trajectory of links chosen. The reader becomes co-author of the work insofar as it only exists as the text that was created through their (potentially unique) traversal (Manovich 2001). This “lean-forward” notion of the reader can be seen as a stepping stone toward more active meaning construction such as becoming a text producer or remixer.

In exchange for the increased agency of the reader and her ability to choose a path through the text, make annotations, or create links between existing text, the authoritativeness and autonomy of the author are subverted (Landow 1997, Douglas 2003). Traditional notions of authority in authorship are buttressed by the fixed changelessness of print in books, which promulgates the idea that the author has created something lasting, unique, and identifiable (Landow 1997). Mass production of identical copies from the printing press, as well as resource barriers to becoming a publisher, also supports homogeneity and the authority of the author (Bolter 2001). In contrast, the changeability of hypertext and the ephemeral nature of digital media at large support the loss of authorial control. Furthermore, the network nature of hypertext, with its fragments of reused material, disintegrates the thoughtful voice of the author (Landow 1997). Finally, since every digital technology requires some form of platform to run on, i.e. the environment in which the software runs, this also dictates to some degree how autonomous the hypertext may be (Douglas 2003). The authority of the author is thus further diminished through the constraints imposed on the text by the software environment.

The notion of intertextuality, which draws on the ideas of such theorists as Barthes, Derrida, and Foucault, treats texts as networks of associations with other texts which may be extra-physical to the work itself (Douglas 2003). Barthes saw this intertextuality as beginning with the author as text, a concept akin to our notion of remix ideas. Hypertext allows one to make intertextual links explicit and at the same time allows the reader to explore the intertextuality of the text as they perceive it (Landow 1997). In traditional literature intertextuality can be rather passive, with the reader potentially not even noticing a tacit reference or allusion to another text. Different discourse communities have different strategies for dealing with intertextuality. Scientific discourse, for example, greatly relies on citation and building upon the ideas of others within a community. On the other hand, we have something like a newspaper column, which may form a dialogue with other columns addressing similar topics but never explicitly cites those other columns.