
CHAPTER 2

USER-MACHINE (AGENT-AGENCY)

Introduction: Agency, Technology and Internetworked Symbolic Action

For Kenneth Burke, the discussion of motive – of what people are doing and why they are doing it – centers on the terms of dramatism, and therefore a Burkean approach to answering this question can most easily be organized around a discussion of computing environments, perhaps as a kind of “text,” in terms of the Burkean pentad: Agent, Agency, Act, Scene, and Purpose. These elements are present in every computing situation – although one might say they are present in every situation – every time a human being sits in front of a computer. Taking great liberties with the main thrusts of A Grammar of Motives, I will draw attention to what I refer to as sites of Human-Computer Interface (or Interaction), which are patterned conceptually (though not functionally) after some of Burke’s pentadic “ratios.” Focus and discussion are directed to the “Act” of computing, although I persist in anchoring these ratios to the human half, the “Agent,” of the “human-computer interaction.” As we analyze this interaction, we can consider various points of view, characterizing computers as tools, as contexts, as cultural icons, and as internetworked media. Privileged in this scheme, or extension of Burkean dramatism, is the agent or “user” of computer technology. While Burke’s tendency is to foreground, or emphasize, the term “Act,” primarily in reference to the symbolic action of a literary text, my approach is to analyze (and problematize) “ratios,” or entanglements, of the human Agent – the user, the cyberwriter, the netizen, the hacker, the hypertext designer, the “content provider” – in relation to each term in the pentad. Assumed as a backdrop, a context, is the dramatistic Act of internetworked symbolic action, perhaps even the “act” of being virtual.

I would argue that there is a kind of chronological (hierarchical) order in the movement among what I call “sites of interaction” between humans and computers. As the computer user shifts focus from moment to moment – concentrating now on the machine itself and the program applications she uses (the Agency), perhaps next upon the task she hopes to accomplish by means of the computer (the Act), and later spending time familiarizing herself with the graphical user interface on the screen (the Scene) – she shifts contexts and re-prioritizes her actions and the machine’s motions. If computing is a rhetorical act, then we can organize a Burkean approach to the computing act in dramatistic terms. As we “read” the interactions between humans and computers, and more importantly, the interactions between humans by means of computers and internet technology, we can draw correlations between the shifting sites, or “levels,” of focus between the human and the computer and the five terms of the pentad.

Conceptualizing Computers

Shifting focus is nothing new to our lives with machines. Denise Murray, in Knowledge Machines (1995), points out that while driving our cars we shift from horizon, to signpost, to speedometer, to rearview mirror, and back to the road. At first these shifts are conscious, awkward, and unfamiliar, until our minds and eyes become accustomed to them and we form mental “schemata” around them. After several years of commuting, these schemata become seamless. We know our speed even if we do not consciously remember glancing at the speedometer (19).

In front of our computers, we perform similar shifts of focus. After several years of work-related computer use, we train ourselves to perceive these shifts as seamless movements between tasks and the tools we need to complete them. At first, however, our interactions with new machines are conscious – each piece of equipment is, for a time after its introduction into the office or work environment, “too much with us.” We fuss over it, read its manuals, call the service representatives, limit access to it, and, in short, get acquainted with it. As our familiarity and comfort with the equipment grow, so does our ease of focus-shift – from the machine itself, say, to the quality, variety, and quantity of tasks it is performing. As we gain experience with the machine, our shifts in focus become smoother, less conscious, at times almost automatic. We could argue that as the level of trepidation lowers, the level of expectation rises.

The networked computer enters our work lives with a larger scope and depth of complexity than any other piece of technology. We ubiquitously anthropomorphize its hardware (i.e., circuit boards, chips, housings, cables, and other parts of the physical machine), and at the same time become immersed in its software (the applications, or programs, written to enable tasks by means of a user interface). When I say that we shift focus, I mean that our needs and motives for interacting with the computer shift and change constantly. To help examine motives and approaches to internetworked writing, I have divided the interaction between user and computer into five sites. For want of more elegant names, I tentatively give them utilitarian tags: user-machine, user-screen, user-application, user-task, and user-user. Each of these relationships between the user and the computer is generated by correlating motives, needs, or impulses. Each generates a roughly characteristic vocabulary and fairly consistent groups of concepts, ideas, and principles. Each, I hope to show, adheres to particular, if not discrete, clusters of causes and effects that directly affect the ways computer users interact with the machine, the screen, the software, the task, and each other.

Humans and Technology—Agent and Agency

Media saturation, particularly television commercials selling Information Technology products and services, would support the argument that the computer terminal screen is the ubiquitous initial point of human-computer interaction. As we consider what computers have come to signify, we might as well be talking about the dramatistic term “Scene,” as these jazzy, colorful images – a bizarre window-box of screen-within-screen, proscenium framed by proscenium – repeat and multiply daily while marketers, manufacturers, and other “entrepreneurs” leap aboard the computer train. For millions of Americans, television advertising marked the first time they saw the machine’s new, graphically exciting face. But before the sexy pictures, before Windows 3.X, before the Macintosh, was our popular conceptualization – and misconceptualization – of the technology itself: as physical machine, as icon, as 20th Century Satan and Savior. Even before the early 1960s planning and design of ARPANET, the military system that served as the initial backbone and prototype of the internet, computers had arrived in our cultural lexicon, and perhaps even in our dramatis personae of 20th Century Western archetypes, reflected in popular literature. Before Alvin Toffler put a name to it, Huxley’s Brave New World created a substantial wave of future-shock with various techno-forecasts, including a collection of multisensory virtual-experience machines called “feelies” – a model still held as a goal by designers of virtual reality (VR). In 1952, readers of speculative fiction sympathized with Vonnegut’s dystopian angst aimed at computer technology and technocratic industrial practices in his popular first novel, Player Piano. By 1961 computers had earned enough space in the collective mind to rate a cultural stereotype, and that stereotype was not pretty. Dark satires reflected America’s unease with its new “thinking machines,” exemplified most memorably in Joseph Heller’s Catch-22, which struck a Luddite nerve that year with a character promoted to the rank of Major by a very logical Pentagon computer. The reason? The recruit’s first name is Major. In fact, all three of his names are “Major.” Thus, by means of computer-logic, he exits boot camp as Major Major Major Major and immediately begins to hide, miserable in his incompetence, in an office where no one is admitted to see him unless he is absent. By 1968 no one in viewing audiences seemed surprised by the ominous (murderous) “HAL 9000” computer in Stanley Kubrick’s 2001: A Space Odyssey, and by the time ARPANET is up and running (first operational nodes in 1969), Alvin Toffler’s Future Shock (1970) seems almost overdue. With these ominous, ironic models of computer technology making up the computerworld scene, these man-made servants already seemed more frightening than helpful – few Americans felt “served” by them so much as they felt themselves to be in the service of the computers owned and operated by the IRS, the Draft Board, and the phone company. As an agency by means of which man could escape drudgery, free up work time and space, and live “the good life,” the computer seemed an escaped jinn. In accepting its labors, many felt they had agreed to accept its harsh intolerance of “error” in more elements of their lives and work than had ever been imagined (Postman 111 ff.).

It is no surprise that Burke viewed computer technology with unease, even suspicion. Influenced by Nietzsche and others to conceive of human thought and terministic behaviors as connected to, not separate from, our humanity, and taking seriously our tendency to fall into traps of “occupational psychosis” when we should rather be exercising “perspective by incongruity” – thinking for ourselves – he challenged the new technologies and the tendency of technocratic god-terms to creep to the top of social and ethical “clusters,” the terms surrounding values of work, education, politics, and living.

The issue: If man is the symbol-using animal, some motives must derive from his animality, some from his symbolicity, and some from mixtures of the two. The computer can't serve as our model (or "terministic screen"). For it is not an animal, but an artifact. And it can't truly be said to "act." Its operations are but a complex set of sheerly physical motions. Thus, we must if possible distinguish between the symbolic action of a person and the behavior of such a mere thing.

--Kenneth Burke, "Mind, Body, and the Unconscious" (LSA, 1966)

Critical to a Burkean approach to internetworked writing is the distinction between action and motion. If we are to maintain that “computing is a rhetorical act,” we need to distinguish between the human act of computing – that is, the human employment of the machine as an agency in the accomplishment of some task or goal, or in the creation of a text, an application, or an operating system – and the machine motion that implements (or results from) that human act. When I say that “computing is a rhetorical act,” I do not refer to the functions of the machine itself, the processing of the binary stream of data bundled into bits and bytes of on-off impulses. The computing machine is capable, in Burke’s view, only of motion – it has no needs (except our need to keep it functioning for our own purposes), no desires, no goals, no urges. Computers do not act, but humans, in the employment of computers as agency, are said to be “computing” when they initiate (or plan, design, structure, make possible) the motion of the machine.

While any human action can be “interpreted” or perceived as having meaning or purpose, and therefore can be analyzed rhetorically, the Burkean system is primarily concerned with symbolic action. Therefore, it seems important to start from some basic claim, from a ground-zero declaration that “computing” is a uniquely human act, and that it is well within the realm of utterance and meaning – of symbolicity – in that it depends upon both “artificially,” or deliberately, created computer languages and the employment of “natural,” or evolved, human languages. While it seems handy to designate computer languages such as Fortran, COBOL, C++, Visual Basic, or Java as “artificial,” as opposed to human languages such as English, Latin, French, or Russian, which we might term “natural,” the lines between computer languages and human languages must necessarily blur and tangle, since both are created, “developed,” and used by humans. Burke provides a possible method for striking some kind of comfortable balance in conceptualizing the differences between human languages and computer languages, and their respective symbolic functions as means of human motive. Human languages encompass the full range of what Burke, in his discussion of “Poetics in Particular, Language in General” (LSA 1966), calls the four “linguistic dimensions”:

Viewed from the standpoint of “symbolicity” in general, Poetics is but one of the four primary linguistic dimensions. The others are: logic, or grammar; rhetoric, the hortatory use of language, to induce cooperation by persuasion and dissuasion; and ethics. By the ethical dimension, I have in mind the ways in which, through language, we express our characters, whether or not we intend to do so. (LSA 28)

Computer languages tend to fall predominantly into the logical/grammatical dimension of language. We can, when pressed, imagine some arguments for the poetic, hortatory, and ethical dimensions of computer languages. Certainly an unusual irony of computer languages is that they function on one level as constructions of logical operations in the interface between human and machine (the compiler), but take on extra dimensions of symbolicity in those instances when one programmer who understands a particular computing language reads the code written by another. Primarily, however, we must assume that computer languages consist and exist mainly in the realm of logical expressions, with the expectation of logical (rational, consistent) results of any utterance flowing from human to machine. One need not “persuade” a computer. Nor could one expect the machine to appreciate poetic, or artistic, symbolic expression, or to value human personality (the ethical dimension of expression). Thus, while it is reasonable to reserve final evaluation of computer languages in light of their incipient status (after all, the English language has had roughly a millennium to run loose from the pen, and almost a century on the keyboard, while the Java programming language, for example, has at this writing been in existence for less than a decade), it is understandable that many consider computing languages “off-shoot” technical languages, or even mere “codes,” invented for particular purposes and developed mainly as functions of logical expression.
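The point can be made concrete with a small illustration. The following Java sketch is a hypothetical example of my own (the class and method names are invented for the occasion): its two methods are logically identical, and the compiler, attending only to the logical/grammatical dimension, treats them as interchangeable; a human programmer reading the second, however, may register traces of the other dimensions – clarity, character, a kind of ethos – in its names and comments.

// Hypothetical illustration: both methods describe the same machine motion.
public class Symbolicity {

    // Terse version: logically sufficient for the compiler.
    static double f(double a, double b) {
        return a * b / 2;
    }

    // Expressive version: the same logic, but the names and the comment
    // address a human reader rather than the machine.
    static double triangleArea(double base, double height) {
        // One half base times height: the formula speaks to us, not to the CPU.
        return base * height / 2;
    }

    public static void main(String[] args) {
        System.out.println(f(3.0, 4.0));            // prints 6.0
        System.out.println(triangleArea(3.0, 4.0)); // prints 6.0
    }
}

To the machine, the two versions are indistinguishable motions; only to another programmer does the second “speak,” which is precisely the extra dimension of symbolicity noted above.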

Emanating from this “neutral” or “logic-only” quality of computer usage and computing languages is a kind of paradigm creep symptomatic of the post-industrial, or “information,” age. Its pervasiveness inspires cautionary critique such as the concept of a society ruled by “technique” envisioned by Jacques Ellul (1954/1964), a conceptualization that Ellul argues is a result of our nature, our drive to create a technology-driven culture. Yet from a Burkean standpoint, we can follow textual trails in both academic and popular media, trails leading less toward anthropological concerns and more toward logological issues. The Burkean view of the “internet explosion,” or “computer revolution,” is more likely to find and focus on various “terministic screens,” showing that a dominant economic and industrial movement into the “culture of information,” or computer-privileging hierarchies in the world of work, can result in habits of logic-only thinking and oversimplification – of assuming that, because industries, economies, and institutions can be designed and managed on a “grammatical” or logical basis, all human symbolic interaction inevitably will be conceptualized from a logical or technological framework, say, in the dystopian ways considered by Neil Postman’s Technopoly (1992).

Presumably, according to Burke, if our entire culture has assimilated technocratic values, we should be able, by charting clusters, to “get our cues as to the important ingredients subsumed in ‘symbolic mergers’” (ATH 233). In the late 20th Century, clusters of “scientistic” terms and trails of technocratic thinking can be followed in the field of Rhetoric and Composition Studies. In the teaching of college writing, for example, we can see the three classical appeals divided and re-apportioned to fit a technology-driven terministic screen: textbooks privilege logos (logical appeals) over ethos (personal credibility), and both of these above pathos (emotional appeals), some even abandoning comment on emotional appeals altogether (an exception to this is Ramage and Bean’s Writing Arguments). Such emphasis on logos can on the one hand be explained by the “service” mission some composition departments set for themselves, framed upon a desire to prepare students to write, throughout their academic careers, in courses across the curriculum. However, composition textbooks in these courses tend to hold up as “examples” of the assigned writing tasks journalistic articles published in Harper’s, Atlantic, or the New Yorker, and most writing assignments in first-year courses are written in modes and styles best suited for a broad, general audience rather than a scholarly one. These textbooks’ passion for logos, and their insistence upon “expert sources” in place of the writer’s (admitted lack of) scholarly ethos, reveal much about our judgments about writing. Regardless of our insistence upon allowing students to “express themselves,” and in Burke’s words, by careful examination of how a textbook, or a teacher, or an entire academic department handles the teaching and practicing of rhetorical appeals,