DIRECTIVE DECISION DEVICES: REVERSING THE LOCUS OF AUTHORITY IN HUMAN-COMPUTER ASSOCIATIONS

John W. Sutherland

Department of Information Systems

Virginia Commonwealth University

ABSTRACT


In both the public and business administration arenas, recent years have seen the first appearances of what will here be taken to constitute a novel class of computer constructs, Directive Decision Devices. At least two sub-classes can be defined: (i). Devices designed to displace human decision-makers, and so configured to operate as entirely autonomous entities, and (ii). Devices intended to be deployed in company with human functionaries, with the objective of having the latter fall under the control of the former. Unlike decision aids or decision support systems, directive decision devices are designed not to assist human functionaries, but to displace them. Directive decision devices are also to be distinguished from the subservient computer systems found in conventional man-machine contexts; where the latter are conciliatory or cooperative, directive decision devices are commanding. The emergence of directive decision devices thus signals something of a sea change in the character of human-computer relationships. And unpleasant though the prospect may be for those they come to replace or rule, these pages will argue that there’s every reason to expect that directive decision devices will become both more common and more capable.


KEYWORDS: Managerial Technology

Man-Machine Systems

Human-Computer Interaction

Intelligent Agents

1. INTRODUCTION

Directive decision devices, as this paper sees them, are computer constructs that have been assigned decision responsibilities that were previously, or would otherwise be, invested in people. Because they are fitted out to function as de facto (if not always formally acknowledged) decision-makers, the emergence of directive decision devices presages a perhaps seismic shift in the balance of practical power away from humans and towards computers.

In considering how fast and how far things might go in this direction, what matters most is the analytical capabilities with which computers can be equipped, as weighed against the analytical demands imposed by the decision situations in which a directive decision device might be asked to serve (mediated, maybe, by how clever administrative system designers can be in recasting decision-related requirements to make them more amenable to computer apprehension). As far as things now stand, about all that can be said with any assurance is that the analytical capabilities of computers are not as extensive as the more ardent AI apologists might wish; claims about computer programs being endowed with higher forms of human intelligence (inductive inference potentialities, particularly) regularly turn out to be more precatory than practicable. But neither are prospects as pedestrian as die-hard humanists would have it, especially those subscribing to the amiable assumption that machine intelligence can never amount to anything more than mastery over menial matters.

What’s true, certainly, is that most of the people who have thus far actually been displaced by (or placed under the purview of) what might be thought of as directive decision devices will have been assigned decision responsibilities that are not all that analytically challenging. But this may owe less to any constraints on the analytical capabilities of computers than to the fact that so many people are employed in positions that ask so little of their intellect. On the other hand, as will later be argued, the prospects for directive decision devices owe much to the recognition that more and more organizational positions are coming to impose more in the way of analytical requirements (computational, chiefly) than their human occupants can well or fully meet.

Anyway, for those at whom they are targeted, the most benign consequence of directive decision devices will be some potential curtailment of prerogatives they once enjoyed. In their more aggressive orientations, they can dispossess some people of much-valued and increasingly irreplaceable employments. If, however, they can make better decisions than those they displace, or arrive at no worse conclusions more quickly or economically, directive decision devices may ultimately come to help more people than they harm. But the development and deployment of directive decision devices will proceed apace anyway, because of the attraction they hold for high-order administrative authorities.

2.0 BOUNDING THE DDD DOMAIN

Directive decision devices will generally appear as an ordered assemblage of three categories of components: (i). Information acquisition facilities for gathering decision predicates, (ii). One or more analytical instruments, encoded as computer programs, for realizing and evaluating alternative courses of action, and (iii). Some provision for implementing decision choices (courses of action).

As for its provenance, the typical directive decision device will have been commissioned by an organization’s higher-order administrative authorities with either of two missions in mind:

● enabling transfers of decision responsibilities from people to computers in the interests, variously, of economy, consistency of decision criteria, expediency or objectivity.

● extending the effective administrative reach (or span of control, if you will) of superiors over subordinates, thereby bringing more aspects of an organization more fully within the apprehension of those sitting at the apex of a managerial hierarchy.

Directive decision devices are then, on the one hand, instruments for advancing the automation of administration and, on the other, apparatus allowing increased concentration of administrative authority.

If it is to meet either of these missions, a directive decision device would have to be both autonomous and intelligent, at least to the extent demanded by the decision situations in which it’s to be employed [2]. Any directive decision device, that is, would need to be fitted with whatever analytical facilities will allow it to arrive, essentially unaided, at a satisfactory (rational, if not optimal) conclusion to the administrative problems with which it might be presented. This relativistic interpretation echoes the earlier suggestion that directive decision devices need only be as intelligent as the people they might replace, or only marginally more capable than those they might be assigned to lead.

Not all decision agents can pass as directive decision devices. The usual decision agent is designed to perform at the behest of, or under the more or less constant positive control of, a human master (and is likely to be kept on a very short leash). Moreover, a good number of what are promoted as decision agents actually have no decision responsibilities, per se. They are, rather, confined to making information-related choices, charged with ferreting out data items that have some statistical likelihood of being of interest to their principal, and so might then more accurately be described as search agents [3,4]. Nor, clearly, would the passive constructs commonly encountered in man-machine complexes count as directive decision devices, as decision authority would continue to reside extensively, if not exclusively, with the man.

A polar opposite situation would follow from the introduction of a directive decision device into a human-computer association. Rather than having the computer bound in service to a human, the human would stand subservient to the computer. However, as a quick glance at Table 1 might make apparent, this situation arises in only three of the four categories of tasks towards which directive decision devices have been/might be turned (each of which will be dealt with at some length in a subsequent section). In their first and most common incarnation, as Executory constructs, directive decision devices would not be deployed in company with humans at all. In fact, as will later be argued, the sorts of applications for which Executory systems will be posed as most appropriate are those where the presence of a human is not just unnecessary, but undesired!

TABLE 1: Tasking Categories for Directive Decision Devices
EXECUTORY / Displacement of authority over some type/class of decisions; outright replacement of human functionaries
COMPENSATORY / Assume authority in areas (or over functions) where human capabilities are expectedly deficient or undeterminable
INTERDICTIVE / Prevent implementation of improvident or prospectively parlous decisions unless/until sanctioned by a higher authority
COOPTIVE / Seize the initiative in cases where a human functionary fails to effect a required action (or reaction) in a timely manner

As also will be noted later, each of the several tasking categories imposes something unique in the way of instrumental requirements for directive decision devices. But there are also some constants.

Irrespective of the category in which they fall, and irrespective also of whether they are to act as the superior member of a human-computer pairing or as an autonomous entity, there are certain capabilities required of all directive decision devices. Any directive decision device must, firstly, have some provision for problem-recognition (distinguishing discrete decision situations). Thereafter, all directive decision devices need something in the way of embedded instruments to support response-selection (which requires a convergence on a singular solution). Finally, every directive decision device must have some arrangement for actually carrying out the administrative action(s) on which it’s decided.
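By way of concrete (if deliberately simplified) illustration, these three constants can be expressed as the skeleton of a computer program. The Python sketch below is purely notional; the class and attribute names (DirectiveDecisionDevice, acquire, recognize, select, enact) are hypothetical conveniences rather than references to any existing system.

from dataclasses import dataclass
from typing import Callable, Mapping


@dataclass
class DirectiveDecisionDevice:
    # (i) information acquisition: gathers the decision predicates for the current case
    acquire: Callable[[], Mapping[str, float]]
    # (ii) analytical instruments: problem-recognition and response-selection
    recognize: Callable[[Mapping[str, float]], str]
    select: Callable[[str], str]
    # (iii) implementation provision: carries out the chosen course of action
    enact: Callable[[str], None]

    def run_once(self) -> None:
        predicates = self.acquire()           # gather observables, symptoms, etc.
        problem = self.recognize(predicates)  # distinguish the discrete decision situation
        response = self.select(problem)       # converge on a singular solution
        self.enact(response)                  # execute the administrative action decided upon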

The operational core of directive decision devices is located in the analytical apparatus arrayed against the second of the above functions. Unlike ordinary decision aids, or models intended for use in interactive or collaborative settings, a directive decision device can bring to a decision situation only those analytical capabilities that follow from the instruments with which it has been invested by its designers. This suggests an important practical stricture: it will here be assumed that the only decisions for which a directive decision device can be held responsible will be those for which a conventional technical solution is both available and appropriate.

The domain of effective authority for directive decision devices will then be restricted to technically tractable decision situations, meaning those where a solution can be realized by taking recourse to one of these four families of facilities:

/ Deterministic / Probabilistic
Categorical / Simple rule-based structures: Decision Tables and common Decision Trees / Extensive predictive constructs (e.g., Classification and Regression Trees)
Computational / Algorithmic formulations centered around ordinary mathematical optimization methods / Extrapolative-projective (particularly statistical inference type) techniques

The technical tractability stipulation means that the instrumental reach of directive decision devices will not extend to administrative applications involving subjects or situations whose characteristics are too elusive or elaborate to be captured by an orthodox mathematical expression or set of such (i.e., a solvable system of equations). Excluded also are entire species of decisions for which objectively-predicated conclusions are unavailable, e.g., speculative, rationalistic (judgment-driven), quasi-axiomatic (precept-driven), subjective, axiological (value-driven), conjectural (hypothesis-driven) or, indeed, any decision situation calling for creativity or contextual discretion.

The upside of these constrictions is that directive decision devices require nothing really novel or innovative in the way of technical facilities. In fact, as the upcoming section will make clear, the array of analytical instruments available to directive decision devices is only a subset of those found in the modern operations research and management science repertoires.

3.0 PROTOTYPICAL DIRECTIVE DECISION DEVICES

At their sparest, directive decision devices may consist merely of a collection of instructions constituting a rudimentary computer program (to be executed in company with whatever data-gathering and decision-implementation apparatus are appropriate to the application at hand). In their more elaborate guises, they can encompass a multiplicity of programs (constituting what’s sometimes referred to as a software suite) conjoined with a perhaps quite extensive assemblage of hardware such as sensor banks, input-output converters, integral actuating mechanisms, monitoring and feedback facilities, etc.

It's thus possible to think of directive decision devices as being arrayed along a continuum of capabilities. Where any particular DDD would sit on this continuum would reflect the resolution power of its analytical instruments and the extensiveness of its functional repertoire. As with any continuum, of most interest are the sorts of things that would sit towards the two extremes and in the mid-range. Hence Figure 1, which includes three sets of prototypical (ideal-type) referents crafted to suggest what minimally, moderately and maximally capable directive decision devices might look like.

The elementary prototypes are essentially just conveyances for the four instrumental categories introduced just above. At their simplest, elementary devices are roughly comprehensible as simple automata-type constructs, with their core instruments analogous to transducers. No elementary device has anything other than minimal functionality, as its charge does not extend beyond the three essentials: Problem-recognition, Response-selection and Enactment. They can, moreover, carry out these three functions in only the most rudimentary way. The two midrange entries are configured as manifold network (node-arc type) models. They are 'manifold' in the sense that they are composed of some multiplicity of elementary (especially algorithmic) devices, and so may be fit to function as managerial rather than merely decision agents. The final pair of prototypes then speaks to the suggestion that, in their most instrumentally elaborate guises, directive decision devices will have configurational features and functional capabilities reminiscent of full-blown process control constructs.

Figure 1: Prototypical Directive Decision Devices

3.1 ELEMENTARY DEVICES

An elementary directive decision device can, firstly, recognize only definite, discrete variations on a single class of problem. Secondly, it is allowed no discretion. Response selection is a purely perfunctory process so that, for any decision situation, there will be only one essentially pre-programmed (design-dictated) decision choice. Any discriminatory capabilities low-end devices have are then entirely inherited, not inherent. Finally, elementary devices have no actuation mechanisms of their own. The physical execution of decision choices must then fall to an external entity of some kind (man or machine). Hence the characteristic configuration for low-end devices shown as Figure 2, where their endogenous capabilities are confined to those included in the dotted area.

The main point of contrast between elementary devices is thus to be found in their instrumental underpinnings, with probabilistically-instrumented devices (Type-3 categorical or Type-4 computational) being considered potentially more capable than their deterministic (Type-1 and Type-2) counterparts.

3.1.1 Categorical Constructs

Though this assertion will shortly be subjected to certain qualifications, it's clear that operating within the confines of a decision table demands only the lowest form of intelligence, human or otherwise. Entities having only associative inference capabilities are confined to conclusions grounded in either direct experience, per classical conditioning, like Pavlov's dogs [5], or rote learning (sometimes extended to include transitive learning [6]) via indoctrination for humans or definitive-discrete programming for computers. For the latter, decision tables will be implemented as rule-based structures that include two types of if:then mapping operations:

Diagnostic (Problem Recognition): if {I}c then Mi, or {I}c → P(Mi ∈ M) ≅ 1, where: {I}c is an input array (a set of observables, symptoms, etc.) for the current case, M is the problem domain and Mi a discrete member of M, with the qualifier P(Mi) ≅ 1 requiring effective certainty.

Remedial (Response Selection): if Mi then Ri, or Mi → Ri, which requires that, for any Mi, there be one and only one singular permissible response option, Ri.

The discretely-recognizable problems included in M serve to define the rows of a decision table, with the response options (Ri ∈ R) arrayed as columns. The result is a neat bipartite structure, [M x R], that is absolutely intolerant of ambiguity [7, 8, 9].
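Computationally, such a structure amounts to nothing more than a pair of exact lookups. The following Python sketch is offered only as an illustration (the class and its names are hypothetical); it makes the intolerance of ambiguity explicit by rejecting any input array that is not covered by the diagnostic mapping.

class DecisionTable:
    """Type-1 (deterministic, categorical) construct: two if:then lookups."""

    def __init__(self, diagnostic, remedial):
        self.diagnostic = diagnostic  # {I}c -> Mi : input array to discrete problem
        self.remedial = remedial      # Mi -> Ri  : problem to its single permissible response

    def decide(self, inputs):
        key = tuple(sorted(inputs))   # canonical form of the observed input array {I}c
        if key not in self.diagnostic:
            # no effective-certainty mapping exists; the structure tolerates no ambiguity
            raise ValueError(f"Unrecognized or ambiguous input array: {inputs}")
        problem = self.diagnostic[key]   # diagnostic (problem-recognition) mapping
        return self.remedial[problem]    # remedial (response-selection) mapping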

This is well enough illustrated by one of the most widely-employed examples of decision-table driven Type-1 devices, computerized automotive diagnostic systems. These are connected to the output bus of an automobile's computer control box. Inputs (qua decision predicates) then take the form of diagnostic codes, each of which (hopefully) identifies a unique problem. The problem domain for the system (M) consists of all conditions for which there is a discrete code, with each code anchoring a row in a decision table. As problem-recognition consists entirely of code reading, definitive diagnostic mappings are thus assured, as are forced-certitude response-side mappings. For any recognizable diagnostic code, a case would conclude with the system dictating a specific corrective action, which may be enacted in one of two ways, depending on the nature of the correction: the system may either direct a mechanic to perform a manual task (e.g., change a spark plug) or command the automobile's on-board computer to alter one or more operating settings (for ignition timing, turbo boost, fuel-air ratio, etc.).
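Continuing the sketch above with invented diagnostic codes and corrective actions (neither drawn from any actual diagnostic standard), the automotive case might be set up as follows:

table = DecisionTable(
    diagnostic={("P0301",): "cylinder 1 misfire",
                ("P0171",): "fuel mixture too lean"},
    remedial={"cylinder 1 misfire": "direct mechanic: replace cylinder 1 spark plug",
              "fuel mixture too lean": "command on-board computer: adjust fuel-air ratio"},
)
print(table.decide(["P0301"]))  # -> direct mechanic: replace cylinder 1 spark plug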

The response options arrived at by a decision table need not always be conclusive. There are two other possibilities: (1). A solution may require a computational exercise, which might then be dealt with by passing the problem, via a lateral transfer, to an algorithm-driven device; or (2). A response may execute a link to a secondary (subordinate) decision table, Ri → [M x R]2. If no final solution is found within this secondary table, a tertiary table, R2,j → [M x R]3, might be called, and so on, yielding a multi-stage solution, M1,i → (M2,j → … → Mn,k) → Rm,n. Applications involving multi-stage solutions, however, might better be undertaken by converting from decision tables to deterministic decision trees. Though deterministic decision trees have no raw capabilities beyond those of decision tables, they are a more convenient way of apprehending applications where problem-recognition requires a convergent (linear or trajectory-type) search process. A decision tree representation would clearly be the better choice for a device designed to trouble-shoot automobile engines for which there are no diagnostic codes.

Assessing the capabilities of probabilistic or stochastic decision tree structures is not so straightforward. It’s sometimes claimed that probabilistic tree-type constructs can be used to provide computers with inductive inference capabilities. As it happens, however, most of the computer programs of which this is said actually turn out to be performing statistical inference operations in some partially masked form. For despite a century or more of serious attention from some very serious scholars, there is not as yet any strong agreement as to whether an operational inductive logic is obtainable. Attempts to develop an inductive analog to deductive entailment have not proven productive. Indeed, it's been deductively demonstrated that an operational inductive logic must involve something more and/or different than simply formally explicating the syntactical properties of the sentences by which premises are converted into conclusions.

This raises reservations about the pragmatic purport of the most ambitious attempts along this line, instances of Inductive Logic Programming [10, 11]. The typical inductive logic programming application involves the search for inferences in the form of generalizations derived from an array of empirical referents (data bases or sets of exemplars [12]). But because the inferential mechanisms employed in inductive logic programming exercises tend to be merely mapping operators, the conclusions at which they can arrive can be nothing more than a logical consequence of their syntax (mainly in the form of first-order predicate calculus provisions) and the semantic content of the empirical referents they were provided. The generalizations at which inductive logic programming exercises tend to arrive cannot then be said to represent new knowledge, but merely a transformation (or transmogrification, if you will) of that already incorporated in the exemplars. Nor do the actual accomplishments of ILP exercises seem to provide much substantive support for their proponents' contention to have authored advances in machine-learning that now make meaningful Baconian induction possible [13, 14]. All in all, then, it seems best to search for more continent sources of probabilistic support for Type-3 devices.
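Returning to the multi-stage case noted above, the chain Ri → [M x R]2 → … can be rendered as a lookup that follows links between tables until a terminal response is reached; this is, in effect, the chained-table equivalent of a deterministic decision tree. The tables, observations and responses in the Python sketch below are hypothetical illustrations only, not reconstructions of any particular system.

def resolve(tables, observations, root="T1"):
    """Follow chained decision tables until a terminal response option is reached."""
    table_id = root
    for obs in observations:                   # one observable per stage of the convergent search
        response = tables[table_id][obs]
        if not response.startswith("goto:"):   # a terminal response option Ri
            return response
        table_id = response.split(":", 1)[1]   # link to a secondary (subordinate) table
    raise ValueError("Observations exhausted before a terminal response was reached")

# Hypothetical two-stage trouble-shooting chain (all entries invented):
tables = {
    "T1": {"engine will not start": "goto:T2", "engine overheats": "check coolant level"},
    "T2": {"no crank": "inspect starter motor", "cranks but will not fire": "inspect ignition coil"},
}
print(resolve(tables, ["engine will not start", "no crank"]))  # -> inspect starter motor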