Supporting Therapy Selection in Computerized Clinical Guidelines by Means of Decision Theory

Stefania Montani, Paolo Terenziani, Alessio Bottrighi

DI, University of Piemonte Orientale, Alessandria, Italy

Abstract

Supporting therapy selection is a fundamental task for a system for the computerized management of clinical guidelines (GL). The task is particularly critical when no alternative is really better than the others from a strictly clinical viewpoint. In these cases, decision theory appears to be a very suitable means of providing advice. In this paper, we describe how algorithms for calculating utility, and for evaluating the optimal policy, can be exploited to fit the GL management context.

Keywords:

Decision Theory, Clinical Guidelines

Introduction

Clinical guidelines (GL) can be defined as a means for specifying the “best” clinical procedures and for standardizing them. In recent years, the medical community has started to recognize that a computer-based treatment of GL provides relevant advantages, such as automatic connection to patient databases and, more interestingly, decision-making facilities; thus, many different approaches and projects have been developed to this end (see e.g. [3,5]).

As a matter of fact, decision making is a central issue in clinical practice. In particular, supporting therapy selection is a critical objective to be achieved. Consider that, when implementing a GL, a physician can be faced with a choice among different therapeutic alternatives, and identifying the most suitable one is often not straightforward. Unlike clinical protocols [9], which specify the only admissible procedure in a given situation, GL are used in domains in which different choices are actually possible (see the example in the results section). Alternatives can be pruned by relying both on site-related contextual information (e.g. the unavailability of certain resources in a given hospital) and on patient-related contextual information (i.e. the peculiarities of the single patient to whom the GL is being applied). The works in [10,11] show how decision models can be exploited to help physicians in defining and contextualizing GL. However, even when the GL has been properly contextualized, more than one option is frequently left, and sometimes none of these remaining alternatives is really “better” than the others from a strictly clinical viewpoint.

In clinical practice, various selection parameters (such as the costs and the effectiveness of the different procedures) can be available to physicians when executing a GL. The computer-based GL systems described in the literature offer sophisticated formalizations of these decision criteria. Nevertheless, the available information is often only qualitative in nature, and “local” to the decision at hand: it does not take into account the consequences of the choice, in terms of actions to be implemented and of future decisions to be taken along the path stemming from the selected alternative. On the other hand, the possibility of obtaining a complete scenario of the decision consequences, considering the probability of the different therapy outcomes, the utilities associated with the different health states, and the money, time and resources spent, would clearly be an added value for physicians and hospital administrators.

Decision theory seems a natural candidate as a methodology for covering this task. To this end, a systematic analysis of the main GL representation primitives, and of how they can be related to decision theory concepts, has recently been proposed [6]. Since, at a sufficiently abstract level, the GL representation primitives treated in that work are shared by all the systems in the literature [8], that contribution can be seen as the first step towards the implementation of a tool within any such approach.

In this paper, we start from those knowledge representation results to describe how decision theory algorithms (to calculate utility and to obtain the optimal policy) can be exploited when the goal is that of supporting therapy selection in a GL management system. We also analyse complex situations which may arise due to the presence of certain types of control-flow relations among GL actions, namely iterations and parallel executions. A practical application of this work is represented by the tool which is being implemented in GLARE, a domain-independent system for GL acquisition and execution [12,13]. The algorithmic choices and the technical issues discussed in this paper will therefore refer to this specific example. In particular, an earlier version of GLARE already embedded a facility able to calculate the costs, time and resources required to complete paths in a GL (details can be found in [13], and are briefly sketched in the next section); the decision theory support can be seen as an extension of that work.

The paper is structured as follows: in the next section we clarify the goal of our decision theory tool and we summarize the previous results in the direction of supporting therapy selection, i.e. the concept mapping work and the main features of GLARE's cost-collection facility. In the results section we describe technical issues about the implementation (referring to the specific example of GLARE). Finally, the last section addresses some concluding remarks.

Materials and Methods

Designing the Main Features

We envision the possibility of adopting a decision theory tool for supporting therapy selection in two fashions. First, it can operate in the on-line modality, when the GL actions are applied one at a time to the patient at hand and the patient's data are automatically retrieved from the Hospital Information System (HIS). In this case, at the time at which a therapeutic decision has to be taken, the facility is able to provide local pros and cons of the various alternatives. Secondly - and more interestingly - the tool can be used off-line, if the physician wants to simulate the consequences of a therapeutic alternative, by evaluating the patient's evolution along the different paths stemming from the decision at hand, typically until the end of the GL is reached. The possibility of collecting this global information is crucial to allow her/him to make a well-informed choice. This modality would also be useful for educational purposes. From the algorithmic viewpoint, this working mode generalizes the first one; therefore, in the rest of the paper, we will concentrate on the off-line modality.

Off-line simulation typically involves a series of temporally consecutive decisions. The clinical GL can therefore be seen as a dynamic decision problem. In particular, as we will briefly motivate below, the GL can be mapped onto a (completely observable) Markov Decision Process (MDP), in which the sequence of therapeutic decisions generates the sequence of (patient) states. The typical goal of a decision theory tool is to find the optimal policy, i.e. the sequence of decisions able to maximize the expected utility. In the context of GL, it is possible to adapt and simplify this task by limiting the decisions to be considered to therapy selections among clinically equivalent alternatives (see the example in the results section); we will call them non-trivial decisions henceforth. Moreover, we propose an implementation in which the costs, resources and time spent to complete any path in the GL can also be obtained, and can be coupled with the calculation of the expected utility along the path itself. Note that fixing the path means fixing the policy that has to be applied, i.e. knowing which alternative will be chosen at each decision.
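To fix ideas, the following minimal sketch (in Python, with purely illustrative names and toy numbers that are not taken from GLARE or from any real GL) shows how the expected utility of a fixed policy - i.e. of a path where the alternative to be chosen at every non-trivial decision is already known - can be computed by a simple backward recursion over a finite-horizon MDP.

# Minimal sketch: expected utility of a fixed policy in a small finite-horizon MDP.
# All names and numbers are illustrative assumptions, not GLARE code or real data.

def expected_utility(state, step, horizon, policy, transitions, utility):
    """Backward recursion: expected utility of 'state' at 'step', when the
    alternative chosen at every decision point is dictated by 'policy'."""
    if step == horizon:
        return utility[state]
    action = policy[(state, step)]
    return utility[state] + sum(
        prob * expected_utility(next_state, step + 1, horizon, policy, transitions, utility)
        for next_state, prob in transitions[(state, action)].items()
    )

# Toy example: a single therapeutic decision ("drug_A" vs "drug_B") over one step.
transitions = {
    ("ill", "drug_A"): {"remission": 0.7, "ill": 0.3},
    ("ill", "drug_B"): {"remission": 0.5, "ill": 0.5},
}
utility = {"ill": 0.4, "remission": 0.9}   # e.g. QALY-based utilities of the states
policy = {("ill", 0): "drug_A"}            # the path (alternative) being simulated

print(expected_utility("ill", 0, 1, policy, transitions, utility))   # 0.4 + 0.7*0.9 + 0.3*0.4 = 1.15

Comparing the value obtained for each admissible policy corresponds to comparing the paths stemming from the decision at hand; the same traversal can accumulate costs, resources and time along each path.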

Finally, the user should always be allowed to select the part of the GL s/he wants to focus on (as a default, the overall GL will be taken into account). The possibility of selecting only a portion of the GL seems to us particularly relevant, since we aim at supporting only non-trivial decisions, while the GL will typically include paths where therapeutic choices do not require the adoption of the decision theory facility; moreover, concentrating only on a subpart of the GL will obviously reduce the computation time. All the technical details about a concrete implementation of a decision theory tool within the system GLARE are described in the results section.

Previous Work

Concept Mapping

In [6], a knowledge representation contribution, aimed at mapping the GL primitives to decision theory concepts, was provided. In particular, at a sufficiently abstract level, GL representation formalisms share the following assumptions (for the terminology used here, we refer in particular to [2,12]). First, a GL can be represented as a graph, where nodes are the actions to be executed, and arcs are the control relations linking them. It is possible to distinguish between atomic and composite actions (plans), which can be defined in terms of their atomic components via the has-part relation. Three different types of atomic actions can then be identified: (1) work actions, i.e. actions that describe a procedure which must be executed at a given point of the guideline; (2) query actions, i.e. requests of information from the outside world; (3) decision actions, used to model the selection among different alternatives. Decision actions can be further subdivided into diagnostic decisions, used to make explicit the identification of the disease the patient is suffering from, and therapeutic decisions, used to represent the choice of a path in the GL, containing the implementation of a particular therapeutic process (henceforth, we will concentrate on the (non-trivial) therapeutic decisions that we want to support). Control relations establish which actions can be executed next, and in what order. For example, actions could be executed in sequence, or in parallel. Moreover, the alternative relation describes how alternative paths can stem from a decision action, and the repetition relation states that an action has to be repeated several times (possibly a number of times not known a priori, until a certain exit condition becomes true).
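As an illustration only, the following Python sketch shows one possible encoding of such a graph of typed actions and control relations; the class and attribute names are our own assumptions, and do not reflect the internal representation of GLARE or of any other specific system.

from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    kind: str                                          # "work", "query", "decision" or "plan"
    parts: list = field(default_factory=list)          # has-part components (for plans)
    next: list = field(default_factory=list)           # control relations: sequence/parallel successors
    alternatives: list = field(default_factory=list)   # alternative paths (for decision actions)
    repeat_until: str = ""                             # exit condition, if the action is repeated

# A tiny fragment: a query action feeding a therapeutic decision with two alternatives.
query = Action("collect staging data", "query")
decide = Action("choose therapy", "decision")
surgery = Action("surgery", "work")
chemo = Action("chemotherapy", "work", repeat_until="6 cycles completed")
query.next = [decide]
decide.alternatives = [surgery, chemo]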

In a well-formed GL, a decision action is preceded by a query action, which is adopted to collect all the patient's parameters necessary (and sufficient) for taking the decision itself. Each decision is therefore based on an (explicit or implicit) data collection completed at decision time, and does not depend on the previous history of the patient. We can thus say that the GL describes a discrete-time first-order Markov model, since each time a query action is implemented, the patient's situation is completely re-assessed, and an (explicit or implicit) query action is always found before a decision action. This observation justifies the mapping of GL primitives to the field of decision theory, and in particular allows us to represent a GL as an MDP. The difficulties in applying Markov processes to simulate clinical processes are well known. Nevertheless, these limitations appear to be less critical in the domain of clinical guidelines, where rather strict design policies are typically applied. Therefore, when dealing with GL, some simplifications hold. In particular, as already observed, a first-order Markov model is sufficient to capture the GL dynamics. Moreover, the process modelled by the GL is completely observable, since in a GL a decision can be taken only if all the required parameters have been collected: if some needed data are missing, the query action will wait for them and the decision will be delayed.
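In decision-theoretic notation (ours, not taken from [6]), this observation corresponds to the first-order Markov assumption

P(S_{t+1} | S_t, A_t, S_{t-1}, A_{t-1}, ..., S_0) = P(S_{t+1} | S_t, A_t)

where S_t is the patient state re-assessed by the query action preceding the t-th non-trivial decision, and A_t is the alternative selected at that decision.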

It is then straightforward to define the state as the set of the patient's parameters that are normally measured for taking decisions and for assessing therapy outcomes. Query actions are the means for observing the state. State transitions are produced by all the work actions between two consecutive non-trivial therapeutic decisions. Finally, the utility of a state can be evaluated in terms of life expectancy, corrected for quality of life in terms of Quality Adjusted Life Years (QALYs) [4].
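With these ingredients, and using standard textbook notation of our own (not taken from [6]), the expected utility of a fixed policy π and the optimal expected utility can be written as

V_π(s) = U(s) + Σ_{s'} P(s' | s, π(s)) · V_π(s')

V*(s) = U(s) + max_a Σ_{s'} P(s' | s, a) · V*(s')

where U(s) is the QALY-based utility of state s, P(s' | s, a) is the probability of reaching state s' when alternative a is applied in s, and the maximization ranges over the clinically equivalent alternatives of the non-trivial decision taken in s. The first equation underlies the calculation of the expected utility along a fixed path, the second the identification of the optimal policy (see the results section).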

The Cost-Collection Facility

GLARE already incorporates a decision support facility, able to assist physicians in choosing among therapeutic alternatives [13]. Relying on this tool, it is possible to compare different paths in the GL, by simulating what could happen if a certain choice were made. In particular, users are helped in calculating the “cost” of the paths themselves, in order to select the cheapest choice. Costs are not interpreted just as monetary expenses, but also as the resources and time required to complete GL actions. Note that, when running the tool, if a composite action is found, it is expanded into its components, and the reasoning facility is recursively applied to each of them, by analysing all the decision actions that appear at the various decomposition levels. At the end of this process, the tool displays the values of the collected parameters (costs, resources, times) gathered along each path. The final decision is then left to the physician.
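The following fragment, in the same illustrative Python style used above (the attribute names money, time and resources are assumptions, not GLARE's actual fields), sketches how such a recursive expansion and accumulation could work:

def collect_costs(path):
    """Accumulate cost parameters along a path, expanding composite actions (plans)
    into their components. Attribute names are illustrative assumptions."""
    totals = {"money": 0, "time": 0, "resources": 0}
    for action in path:
        if action.kind == "plan":                        # recursively expand composite actions
            partial = collect_costs(action.parts)
            for key in totals:
                totals[key] += partial[key]
        else:
            for key in totals:
                totals[key] += getattr(action, key, 0)   # 0 if the parameter is not specified
    return totals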

Results

Within GLARE, the facility described in the previous section is being extended by allowing: (1) the identification of the optimal policy, and (2) the calculation of the expected utility along a path. In order to implement these functionalities, we had to take into account the following issues.

Focusing

As already observed, the possibility of selecting only a sub-part of a given GL is a fundamental issue to be addressed, since it allows one to skip the paths on which decision theory support is not required. In our tool, path selection has been conceived as the first step of the interaction with the user. Technically speaking, the mechanism works as follows: through a user-friendly graphical interface, the physician is asked to indicate the starting node (normally the decision at hand) of the paths to be compared and (optionally) the ending nodes (otherwise, all possible paths exiting the starting node will be taken into consideration, until the end of the GL). For every decision action within each path, s/he is allowed to restrict the analysis to a subset of the alternatives. Moreover, the selection process is recursively applied to composite actions. All the paths pruned by this procedure will be ignored by the subsequent steps of the reasoning process (i.e. mapping to the Markov model and extraction of the optimal policy, or calculation of the expected utilities).
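One possible (purely illustrative) way of realizing this pruning, sketched in Python on top of the assumed action structure introduced earlier rather than on GLARE's actual data structures, is to enumerate only the paths that traverse the alternatives retained by the physician:

def enumerate_paths(action, ending_names, kept_alternatives):
    """Enumerate the paths from 'action' to an ending node (or to the end of the GL),
    keeping at each decision action only the alternatives retained by the physician."""
    if action.kind == "decision":
        successors = kept_alternatives.get(action.name, action.alternatives)
    else:
        successors = action.next
    if action.name in ending_names or not successors:
        return [[action]]                        # ending node reached, or end of the GL
    paths = []
    for succ in successors:
        for tail in enumerate_paths(succ, ending_names, kept_alternatives):
            paths.append([action] + tail)
    return paths

# Example: from the 'choose therapy' decision above, keep only the surgical alternative.
# enumerate_paths(decide, set(), {"choose therapy": [surgery]})

The retained paths are then the only ones passed on to the cost-collection and expected-utility computations.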