Design Patterns for Deadlock-Free Concurrent Object-Oriented Systems

David E. Cloutier, Colin J. Neill & Phillip A. Laplante

The Pennsylvania State University

Malvern, PA 19355, USA.

Abstract: Multithreaded software is usually constructed without any formal assurances that it has certain desirable properties (such as freedom from deadlock). While formal methods are available that allow concurrent systems to be constructed in a rigorous manner, they have yet to gain a significant industrial following.

This paper applies a formal method, CSP, to the construction of deadlock-free concurrent systems using two object-oriented design patterns. By using these design patterns to hide the formal specifications, the need to learn the formal method is minimized. This allows for the construction of deadlock-free systems using design patterns, which are widely used in the design of object-oriented systems.

Key-words: concurrency, deadlock, design patterns, object-oriented, CSP


1. Introduction

Multithreading is widely used to achieve better performance in software systems. For example, in a uniprocessor system, multithreading can increase process throughput by allowing a process to perform computations while waiting for slow I/O operations to complete. On a multiprocessor system, multithreading can increase performance by exploiting program parallelism.

However, multithreaded programs are more difficult to construct and maintain than sequential programs for various reasons. Perhaps the most obvious is that multithreaded programs are inherently nondeterministic. It becomes necessary to understand all of the possible combinations of states that can occur and the consequences of each. Such complexity gives rise to three common classes of errors in concurrent systems that must be avoided: race conditions, livelock, and deadlock.

A race condition exists when certain allowable operation interleavings produce incorrect results [1]. The root cause of this problem is insufficient synchronization.

A concurrent system is livelocked when it internally communicates infinitely without any component communicating externally [2]. This is a special case of divergence, which occurs when a program performs an infinite unbroken sequence of internal actions.

A concurrent system is deadlocked when no component can make any progress, generally because each is waiting for communication with another [2].

One common approach to resolving these issues is to avoid multithreading altogether. If concurrency is avoided then these errors are not possible. However this approach may not be feasible due to performance requirements.

Another common approach is to use multithreading and rely on testing to certify that the resulting system has certain desirable properties (such as freedom from deadlock). With this approach testing is considered a suitable substitute for rigorous engineering analysis. However testing is extremely difficult for concurrent programs as it relies on executing a particular sequence of computations to reveal defects. The inherent nondeterminism of multithreaded programs significantly inhibits their controllability. Moreover, testing may never execute sequences that reveal defects. Even when defects occur in the system, nondeterminism can inhibit (or prevent) successfully diagnosing the problem and correcting the defect.

An alternative approach is to use formal methods, which allow for the rigorous specification and analysis of concurrent systems. Martin [3] presented a set of design rules for constructing deadlock-free concurrent systems. However use of these rules requires extensive knowledge of the underlying formal method. This would likely deter widespread usage in industry.

Therefore this paper adapts Martin’s rules for usage with two common concurrency design patterns: Active Object and Monitor Object [4]. By presenting Martin’s results in terms of design patterns, concurrent systems can be constructed which are deadlock-free and divergence-free without requiring extensive knowledge of the underlying formal method. It is expected that by using the rules and design patterns presented in this paper software engineers can gain more of the benefits of concurrency while minimizing some of the associated risks.

2. Active Object

Schmidt et al. [4] documented the Active Object design pattern that “decouples method execution from method invocation to enhance concurrency and simplify synchronized access to objects that reside in their own threads of control”. The Active Object pattern is a fundamental approach for integrating concurrency with object-orientation. This paper presents a simplified variation that differs from the pattern documented by Schmidt et al. [4] in one significant respect: it excludes guards and futures, a simplification necessary to achieve deadlock-freedom.

The class diagram for the simplified Active Object pattern is shown in Fig. 1.

Fig. 1: Active Object Class Diagram

The Active Object pattern is composed of a number of smaller design patterns. A Proxy [16] provides a local representative for a thread-remote object (an object that resides in a different thread of control). It allows a client to communicate with a thread-remote object as though it were a thread-local object. The proxy is responsible for instantiating the proper concrete method request and submitting it to the queue.

A MethodRequest is an instance of the Command pattern [16]. A method request encapsulates a request on a thread-remote object (i.e., the servant). MethodRequests are prohibited from supporting a can_run method (i.e., guards); in this variant an active object must be able to execute any request immediately whenever one is available (i.e., can_run must always return true).

The Active Object pattern also utilizes the well-known worker thread model, which is encapsulated within the RequestProcessor class. A worker thread is responsible for pulling requests from a queue and executing them. When the queue is empty, the worker blocks waiting for a request to arrive. The queue is a basic synchronized unbounded buffer: it has an infinite capacity for requests and provides them to the worker in a first-in-first-out manner.

A servant defines the behavior and state of the thread-remote object that the proxy represents. It is a passive object that is required to be divergence-free.
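To make this structure concrete, the following is a minimal Java sketch of the simplified Active Object variant described above. It is an illustration only: the class and member names (ActiveObjectProxy, RequestProcessor, Servant, doWork) are our own assumptions, not taken from Schmidt et al. [4], and java.util.concurrent's LinkedBlockingQueue stands in for the unbounded FIFO queue.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Command: encapsulates one request on the servant; no can_run guard is supported.
interface MethodRequest {
    void call();
}

// Passive object holding the behavior and state; required to be divergence-free.
class Servant {
    void doWork(int value) { /* application logic goes here */ }
}

// Worker thread: pulls requests from the queue and executes them, blocking when the queue is empty.
class RequestProcessor implements Runnable {
    private final BlockingQueue<MethodRequest> queue;
    RequestProcessor(BlockingQueue<MethodRequest> queue) { this.queue = queue; }
    public void run() {
        try {
            while (true) {
                queue.take().call();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();  // allow orderly shutdown
        }
    }
}

// Proxy: the client-side representative; its methods are one-way and simply enqueue requests.
class ActiveObjectProxy {
    private final BlockingQueue<MethodRequest> queue = new LinkedBlockingQueue<MethodRequest>();
    private final Servant servant = new Servant();

    ActiveObjectProxy() {
        Thread worker = new Thread(new RequestProcessor(queue));
        worker.setDaemon(true);
        worker.start();
    }

    // One-way method: no result, no out-mode parameters, no exceptions propagated to the caller.
    void doWork(final int value) {
        queue.add(new MethodRequest() {
            public void call() { servant.doWork(value); }
        });
    }
}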

Unlike the pattern presented in Schmidt et al. [4], futures are prohibited. A future object allows a client to obtain the result of a method invocation on an active object by waiting indefinitely for a result to be computed and stored into the future. The methods supported by an Active Object’s interface are required to be one-way. A one-way method is one that does not provide direct feedback to the caller. As such one-way methods cannot return results, support out-mode parameters, or throw exceptions.

The Active Object pattern enhances concurrency by allowing clients to proceed independently of method invocations [4]. Further, the pattern supports separation of concerns, since concurrency-related logic is separated from application code. Application programmers can focus on the servant implementation, while the components that contain the concurrency logic, Queue and RequestProcessor, can be provided via a class library. However, the context switching and synchronization overhead can negate any performance gains provided by the additional concurrency [4]. Also, if two-way communication is needed between a client and an active object, a callback mechanism must be used (since futures are prohibited in this variant).
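The following hedged Java sketch illustrates the mechanics of such a callback: the client supplies a one-way interface and the active object later invokes it with the result. The names (ResultCallback, ComputeProxy, ComputeServant) are hypothetical, and this shows only the delivery mechanism, not a complete deadlock-free design; in the framework of this paper the callback target would itself typically be an active object proxy.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// One-way callback interface implemented on the client's side.
interface ResultCallback {
    void resultReady(int result);
}

// Passive servant that computes a value.
class ComputeServant {
    int compute(int input) { return input * 2; }
}

// Active object proxy whose compute request delivers its result via a one-way callback.
class ComputeProxy {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>();
    private final ComputeServant servant = new ComputeServant();

    ComputeProxy() {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) { queue.take().run(); }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // One-way request: enqueue and return; the worker later invokes the callback with the result.
    void compute(final int input, final ResultCallback callback) {
        queue.add(new Runnable() {
            public void run() { callback.resultReady(servant.compute(input)); }
        });
    }
}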

3. Monitor Object

Schmidt et al. [4] documented the Monitor Object pattern that “synchronizes concurrent method execution to ensure that only one method at a time runs within an object”. This paper presents a simplified variation. This variation has one significant difference from the pattern documented in [4]: conditions (blocking a thread’s execution until a particular condition holds) are not supported. Again, this simplification is necessary to achieve deadlock-freedom.

The class diagram for the Monitor Object pattern is shown in Fig. 2.

The Monitor Object pattern is really just an instance of the Proxy pattern [16]. Specifically, a monitor object can be considered a protection proxy. The proxy protects a servant from being accessed concurrently. It serializes access to the servant by ensuring that the lock is acquired before the servant is accessed. A servant defines the behavior and state of the object that the proxy protects. It is a passive object that is required to be divergence-free.

Fig. 2: Monitor Object Class Diagram

The Monitor Object pattern simplifies concurrency control since clients are not responsible for thread-safe usage of a monitor object (it is implicitly provided) [4]. Further, separation of concerns is supported: concurrency-related logic is separated from application code. Application programmers can focus on the servant implementation, while all concurrency-related logic is encapsulated within the proxy. However, the use of a single lock can limit scalability due to contention among multiple threads attempting to use the monitor object [4].
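As a concrete illustration, here is a minimal Java sketch of this simplified Monitor Object variant (no condition synchronization). The class names (AccountServant, AccountMonitorProxy) and the use of ReentrantLock as the monitor lock are our own illustrative assumptions, not prescribed by [4].

import java.util.concurrent.locks.ReentrantLock;

// Passive servant: defines behavior and state; required to be divergence-free.
class AccountServant {
    private int balance;
    void deposit(int amount) { balance += amount; }
    int balance() { return balance; }
}

// Protection proxy: acquires the lock before every access, so only one method runs at a time.
class AccountMonitorProxy {
    private final ReentrantLock lock = new ReentrantLock();
    private final AccountServant servant = new AccountServant();

    void deposit(int amount) {
        lock.lock();
        try {
            servant.deposit(amount);
        } finally {
            lock.unlock();
        }
    }

    int balance() {
        lock.lock();
        try {
            return servant.balance();
        } finally {
            lock.unlock();
        }
    }
}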

4. Overview of CSP

In the late 1970s, Hoare, with his work on Communicating Sequential Processes (CSP) [5], provided a basis for the verification and validation of concurrent systems. Before CSP, several methods for specifying communicating sequential processes were widely used, including semaphores [6], conditional critical regions [7], and monitors and queues [8]. As observed in [5], “most of these are demonstrably adequate for their purpose, but there is no widely recognized criterion for choosing between them”. This consideration led Hoare to attempt to find a single simple solution to all those problems. He subsequently revised his 1978 CSP work, producing a second version of CSP [9], which is the version used in this paper.

CSP is a formal method for the specification and analysis of concurrent systems. CSP deals with processes, networks of processes and the interactions between them. Processes in CSP communicate only via explicit messages (shared data is not permitted).

CSP processes communicate over simple point-to-point channels. These channels operate with rendezvous semantics, described by Welch [18] as follows: if process A transmits on channel c before B is ready to receive, then A will block. If B tries to receive on channel c before A transmits, B will block. When both are ready, data is transferred, and this transfer is a direct mapping between their respective state spaces.
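As a small illustration of this rendezvous, written in CSP notation with process and channel names of our own choosing, consider a sender and a receiver sharing a channel c:

SENDER = c!5 → SENDER

RECEIVER = c?x → RECEIVER

In the parallel composition SENDER || RECEIVER the communication on c can only occur when both processes are ready to engage in it; whichever process becomes ready first blocks until the other arrives, at which point the value 5 is copied into x and both proceed.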

Commercial model checking tools for CSP are available [17]. Such tools greatly enhance the effectiveness of using CSP by automating verification. However, model checking suffers from the well-known state explosion problem that prevents its application to large-scale industrial systems.

For a more comprehensive and thorough treatment of CSP the reader is referred to Hoare [9] or Roscoe [2].

5. The Client-Server Protocol

The Client-Server Protocol was originally formulated by Brinch Hansen in the context of operating systems [10]. It has since been adapted as a means of designing deadlock-free concurrent systems using the programming language occam [11]. Martin [3] presented a formal adaptation and extension of the Client-Server Protocol as a set of design rules for the construction of deadlock-free systems in CSP. An overview of Martin’s results is presented below.

A basic client-server CSP process P has a finite set of external channels partitioned into separate bundles, each of which has a type, in relation to P, that is either client or server. Each channel bundle consists of either a pair of channels, a requisition and an acknowledgement, <r,a>, or a single channel (called a drip), <d>.

A basic client-server process, P, must obey the following rules:

a) P is divergence-free, deadlock-free and non-terminating.

b) When P is in a stable state (no internal activity possible), either it is ready to communicate on all its requisition and drip server channels or it is ready to communicate on none of them.

c) P always communicates on any bundle pair <r,a>, in the sequence r,a,r,a,...

d) When P communicates on a client requisition channel, it must guarantee to accept the corresponding acknowledgement.
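For example (with channel names chosen purely for illustration), a basic client-server process with a single server bundle <r,a> and a single client bundle <r',a'> could be:

P = r → r' → a' → a → P

P satisfies the rules above: it is divergence-free, deadlock-free, and non-terminating (a); whenever it is stable it is ready on its only server requisition channel r, or on none of its server channels (b); it communicates on each bundle in the alternating sequence r,a,r,a,... (c); and after communicating on its client requisition channel r' it immediately accepts the corresponding acknowledgement a' (d).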

When constructing a client-server network from a set of client-server processes, each client bundle of a process must either be a server bundle of exactly one other process, or consist of channels external to the network. Similarly each server bundle of any process must either be a client bundle of exactly one other process or be external to the network. No other communication between processes is permitted.

The client-server digraph of a client-server network consists of a vertex representing every process and an arc representing each shared bundle, directed from the process for which it is of type client towards that for which it is of type server.

Rule 5.1 [3] A client-server network, composed from basic processes, which has a circuit-free client-server digraph, is deadlock-free (and divergence-free).

A composite client-server process V is a client-server network (P1, …, PN) composed solely from basic client-server processes, whose client-server digraph contains no circuits. A composite client-server process is represented by a single vertex in a client-server digraph.

Rule 5.2 [3] A client-server network, composed from composite processes, with a circuit-free client-server digraph, is deadlock-free (and divergence-free).

Since Martin’s definition of deadlock-freedom excludes any network that can diverge, this has been made explicit in the restatement of the rules above. For proof of the rules the reader is referred to [3].

6. Modeling Active and Monitor Objects in CSP

While CSP and the Client-Server Protocol enable concurrent systems to be constructed in a rigorous manner with the guarantee of deadlock-freedom, it is unlikely that they would be widely used in industry. Industrial software developers have typically avoided formal methods in favor of more pragmatic approaches. Currently the object-oriented paradigm is quite popular in industry. Therefore if objects could be designed such that they can be modeled in CSP by client-server processes then one could use Martin’s rules to create object-oriented systems that are guaranteed deadlock-free and divergence-free by construction. This allows for the comfort of using a familiar approach, object-orientation, while still retaining some of the associated benefits of CSP. However Martin’s results are only applicable to a static configuration of CSP processes: processes and channels cannot be created on demand at run-time. Therefore this approach requires that the system be composed of a fixed set of active and monitor objects and that all inter-object collaborations be fixed, so that the system can be modeled in CSP.

Since we are only interested in an object’s external collaborations with other objects the CSP model will ignore an object’s state and any parameters passed via method calls. Object state does not affect the deadlock analysis since we preclude condition synchronization (with the exception of an active object’s queue). All class methods are assumed to have the following syntax,

M ::= o.m | M1;M2 | if B then M1 else M2 | while B do M1

where o.m denotes the invocation of a method m on an object reference o (and o is an external object, not a part of the representation). For a given method there exists a finite set of all possible traces. Further every trace is a finite sequence since every method is required to be divergence-free.

For example consider the following method:

void notifyObservers(Object arg)
{
    if (hasChanged)
    {
        hasChanged = false;
        Iterator itr = observers.iterator();
        while (itr.hasNext())
        {
            Observer o = (Observer) itr.next();
            o.update(this, arg);
        }
    }
}

The set of possible traces of this method is:

<notifyObservers@pre, notifyObservers@post>

<notifyObservers@pre, update, notifyObservers@post>

<notifyObservers@pre, update, …, update, notifyObservers@post>

The @pre and @post suffixes are used to denote method entry and exit. Since we assume an object model with a fixed set of objects (and all objects are required to be divergence-free) we can safely assume that the while loop terminates, and hence the term update, …, update must be a finite sequence of calls to update.

We can construct a CSP client-server process to model a method’s behavior using the method’s trace set. A method invocation between two objects is modeled by a pair of requisition and acknowledgement channels <req, ack>, so every time an object a invokes method m on object b, the invocation is modeled using the same unique pair of requisition and acknowledgement channels. For the above example the CSP model of the notifyObservers method is as follows:

NOTIFY_OBS =
    (req0 → ack0 → NOTIFY_OBS)
    ⊓ (req0 → req1 → ack1 → ack0 → NOTIFY_OBS)
    ⊓ (req0 → req1 → ack1 → … → reqN → ackN → ack0 → NOTIFY_OBS)

The prefix operator () is used to indicate the relationship between an event, e, and a process, P, such that eP, which reads “e then P,” describes an entity which engages in e and then behaves as described by P [9]. Nondeterministic choice is denoted by . For our purposes the process PQ, describes a choice between behaving as P or Q, which is decided nondeterministically (e.g., a coin toss decides which behavior to provide).

An object is simply modeled in CSP as the composition of all of its method CSP processes as follows:

OBJECT = METHOD_1 □ … □ METHOD_N

The external choice operator is denoted by □. P □ Q is a process which offers the environment the choice of the first events of P and Q and then behaves accordingly. This means that if the first event chosen is one from P only, then P □ Q behaves like P, while if one is chosen from Q it behaves like Q.
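For example, if the object modeled above also offered a setChanged method that makes no external calls (an illustrative assumption on our part), its method process and the resulting object process would be:

SET_CHANGED = reqS → ackS → SET_CHANGED

OBSERVABLE = NOTIFY_OBS □ SET_CHANGED

The environment (the object’s clients) then chooses whether the first request event is req0 (a notifyObservers invocation) or reqS (a setChanged invocation).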

CSP processes derived in the above manner are non-terminating by definition. Further they are divergence-free since they do not contain DIV (the process which does nothing except diverge) or hiding (internal events not visible to the environment), and all recursion is event-guarded [12]. Finally, since all processes are sequential, non-terminating, and divergence-free they are also deadlock-free [13].

By definition all processes constructed using the above approach offer their environment the choice of all server channels or none of them. Further, the processes have been constructed to guarantee that they always accept the corresponding acknowledgement of a client requisition channel by always engaging in the acknowledgement event directly after communicating on the corresponding requisition channel. Finally, the processes have been constructed to guarantee that they always communicate on any bundle pair <r,a> in the sequence r,a,r,a,... For client bundles this is guaranteed by construction since all occurrences of communication on client channels are of the form … req → ack … As for server bundles, this property is guaranteed by the fact that all processes only communicate on server requisition channels as their first event and they only communicate on server acknowledgement channels as the last event before beginning the same communications cycle over again. Therefore all processes constructed using the above approach are client-server processes.