Analysis of Stakeholder/Value Dependency Patterns and Process Implications: A Controlled Experiment
Di Wu1, Qi Li1, Mei He2, Barry Boehm1, Ye Yang2, Supannika Koolmanojwong1
1University of Southern California, 941 W. 37th Place Los Angeles, CA 90089-0781
2Institute of Software, Chinese Academy of Sciences, Beijing 100080, China
{diwu, qli1, boehm, koolmano}@usc.edu, {hemei, ye}@itechs.iscas.ac.cn
Abstract
Different classes of information system stakeholders depend on different values to be successful. Understanding stakeholders’ value dependencies is critical for developing software-intensive systems. However, there is no universal, one-size-fits-all stakeholder/value metric that can be applied to a given system. This paper presents an analysis of the value priorities of major classes of stakeholders, using the win-win prioritization results from 16 real-client graduate software engineering course projects. Findings from this controlled experiment further verify and extend the hypothesis that “different stakeholders have different value propositions.” They bridge the value-understanding gaps among different stakeholders, support further reasoning about stakeholders’ utility functions, and provide process guidance for software projects involving various classes of stakeholders.
1. Introduction
A major challenge in the development of software-intensive systems is to define requirements that satisfy the needs of multiple inter-dependent stakeholders [1, 25, 26]. The 2005 Standish Group research report [2] reported that 53% of projects were challenged due to lack of user input. Developing the wrong functionalities and developing the wrong user interface are among the top software risk items [4]. Incorrect assumptions about different classes of users, and incompatible needs among those users, have led to many product disasters [5].
This indicates that software developers need to better understand the dependability of the software they produce. By dependability, we mean “to what extent does the system satisfy the value propositions that its stakeholders are depending on” [6]. The USC 2004 NASA high dependability computing study [6] provides a good example of the resulting dependability conflicts across different classes of stakeholders, and offers high-level guidance for reasoning about stakeholders’ desired levels of service. A key finding from the study is that different stakeholders have different value dependencies, but most of the relationships are hypotheses yet to be tested. Moreover, it is even more challenging for practitioners to come up with a win-win set of desired functional needs, because functional dependencies must be considered within an operational context that varies from system to system. We have had the opportunity to test the hypotheses in [6], and to extend them to functional needs, in the context of a graduate team-project software engineering course, since most of its projects are web-based applications for community service organizations and small businesses. We collected the priority ratings of major classes of stakeholders on their win-win agreements, and applied the analysis approach suggested in [6]. During the win-win prioritization, some students were instructed to role-play the client, user, and maintainer. The role-play client’s ratings were then compared with the real client’s ratings to assess the confidence of using role players in this activity. The results from the controlled experiment quantitatively verify the hypothesis that “different stakeholders have different value propositions” and show that, through a mutual learning process and shared experience, role-play stakeholders can represent missing real stakeholders to some extent.
The remainder of the paper is structured as follows: Section 2 introduces related work; Section 3 describes the experiment design step by step; Section 4 presents results from the experiment; Section 5 discusses threats to validity; and Sections 6 and 7 conclude the paper with discussions of process implications and future work.
2. Related work
2.1. The VBSSE “4+1” Theory: The Underlying Theory
This paper aims to empirically verify the dependency theory within the “4+1” Value-Based Systems and Software Engineering (VBSSE) theory [7, 8]. At the center of the theory is the success-critical stakeholder (SCS) win-win Theory W, which addresses what values are important and how success is assured for a given software engineering organization. The four supporting theories it draws upon are dependency theory (identifying all of the success-critical stakeholders), utility theory (understanding how the success-critical stakeholders want to win), decision theory (having the success-critical stakeholders negotiate win-win product and process plans), and control theory (adaptively controlling progress toward a success-critical stakeholder win-win outcome). This process has been integrated with the spiral model of system and software development and evolution [7] and its next-generation system and software engineering successor, the Incremental Commitment Model [8]. The approach can thus also be used for non-software-intensive systems, but our experiment was performed in the context of software-intensive systems.
2.2. Top-Level Stakeholder/Value Dependency Framework: The Hypotheses to Be Verified and Extended
As introduced in Section 1, the USC 2004 NASA high dependability computing study [6] identified dependability conflicts across different classes of stakeholders and provided high-level guidance for reasoning about stakeholders’ desired levels of service. Table 1 shows the top-level stakeholder/value metric identified in [6]. The key elements in the framework are: a definition of the major classes of success-critical stakeholders; a classification of dependability attributes; and each stakeholder class’s dependency strength on these attributes.
The dependability attributes in Table 1 are essentially the same as the attributes frequently associated with software quality. In this paper, we will continue to refer to them as dependability attributes.
As already mentioned in Section 1, the main finding from this study is that different stakeholders have different value propositions for levels of service, but most of the relationships are hypotheses yet to be tested quantitatively. It is even more challenging for practitioners to come up with a win-win set of desired functional needs, because functional dependencies must be considered within an operational context that varies from system to system. In this paper, we empirically test the hypotheses in Table 1 and extend the dependability attributes to functional requirements based on the 16 projects.
2.3. Other Related Work
It is generally accepted that requirements need to consider multiple viewpoints. Darke and Shanks [9] pointed out that viewpoints “represent a particular perspective or set of perceptions of the problem domain”. The range of these perspectives is not limited to stakeholders; it also includes organizational and domain sources, such as databases, policies, and documents [10]. A number of requirements methods have been developed for defining different viewpoints. Such work includes Easterbrook’s work [11] on identifying user viewpoints during requirements acquisition, Finkelstein et al.’s [12] and Niskier et al.’s [13] work on identifying developer viewpoints during requirements modeling, and Kotonya and Sommerville’s [14] object-oriented approach to viewpoint analysis. Several researchers have also proposed techniques for identifying conflicting viewpoints [15, 16]. Our approach differs from these works in that we use the stakeholder WinWin negotiation approach to elicit and reconcile viewpoints. In addition, the prioritization step during negotiation enables stakeholders to identify value priorities from their viewpoints; existing methods often lack this direct input from stakeholders.
Early research on the characteristics of software quality and dependability includes the 1973 TRW-NBS characteristics of software quality study [17] and the 1977 GE-RADC factors in software quality study [18]. Subsequent significant contributions were Gilb’s techniques for defining quantitative quality goals [19], Grady’s quality management metrics used at Hewlett-Packard [20, 21], Kitchenham’s COQUAMO estimation model [22], the SEI’s architecture tradeoff analysis method [23, 24], the University of Toronto non-functional requirements method [25], and the Fraunhofer-Maryland survey of software dependability properties [3]. Links between these and stakeholder value propositions are addressed in the Basili et al. Goal-Question-Metric paradigm [26], stakeholder win-win related research [5, 6, 27], Al-Said’s study of stakeholder value-model clashes [5], and several chapters in the Value-Based Software Engineering book [28]. These works provided the basis for defining and analyzing the dependability attributes used in this study. However, their main shortcoming is a lack of quantitative evidence. In this paper, we design a controlled experiment to collect and analyze data that quantitatively verify the hypothesis that “different stakeholders have different value propositions”; this study thus complements the current research in the field.
3. Controlled Experiment Methodology
3.1. Experiment Objective and Background
As mentioned in Section 1, the objective of this study is to identify stakeholder/value dependency patterns using real-client projects developed in the USC graduate software engineering course. More specifically, we want to 1) identify the primary dependability attributes in terms of functional tasks, levels of service, and project-related activities; and 2) evaluate the hypothesized dependency relationships shown in Table 1.
Of the 16 projects in fall 2008, six belonged to a category called multi-media information services. The media included art gallery items, cinema artifacts, written essays, theater scripts, housing information, and clothing and garment information. Five involved community services such as youth services, thrift stores, financial literacy education, rivers and parks, and housing services. The other five were miscellaneous applications such as neighborhood small businesses, anthropology research data management, and social networking. Project teams were formed by first-year graduate students taking the course. The project clients came from USC-neighborhood small businesses, non-profits, and campus organizations needing improved e-services applications.
3.2. Experiment Design
Preparation: Prior to the experiment, all teams conducted win-win negotiations with real client representatives, and agreements were reached. To keep the experiment under control, we also gave students two tutorials: one on how to role-play stakeholders (Step 1), and one on how to rate meaningfully (Step 3).
Step 1: Define major classes of success-critical stakeholders (SCSs) and identify role players: We focused on three main types of stakeholders in the software engineering process: client, user, and maintainer. All three types of stakeholders prioritize from the perspective of business importance. Developer prioritization was not included in this study because it is done from a different value perspective: ease of implementation.
As happens with many software projects in practice, we often had only real clients, without real user or real maintainer representatives. As an alternative, students were instructed to role-play the client, user, and maintainer, which created an opportunity to assess the effectiveness of using role players to represent actual stakeholders. We compare the results between the real client and the role-play client; if they show a strong correlation, this provides some confidence for accepting results from role-play users and role-play maintainers.
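The agreement between real-client and role-play-client ratings can be quantified with a rank correlation. The sketch below is not the authors’ actual analysis script; it computes Spearman’s rho (with tie handling) over hypothetical ratings on one project’s shared agreements.

```python
def ranks(values):
    # Fractional (average) ranks, 1-based, with ties sharing their mean rank.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the rank vectors.
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical 1-9 ratings on six shared agreements of one project.
real_client = [9, 7, 5, 8, 3, 9]
role_play_client = [8, 7, 5, 9, 4, 9]
print(round(spearman(real_client, role_play_client), 3))  # → 0.868
```

A rho near 1 would support accepting the role players’ ratings as proxies; in practice a library routine such as `scipy.stats.spearmanr` would also report significance.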
To make sure that the experiment was under control, in this step we taught students how to role-play: a challenge in this experiment is to identify role players who are able to prioritize from the perspective of other stakeholders. We suggested that teams identify role players based on who best understands each type of stakeholder; for example, the person working on the operational concept could be a good candidate to role-play the user. We also provided some value-proposition hints for the role players to consider; for example, the role-play client might consider on-time delivery, staying within budget, correctly implemented functions, and interoperability with existing systems.
Step 2: Extend the classification of dependability attributes: In order to analyze the dependency relationships, we extended the attributes from Table 1 and defined three sets of attributes: level of service goals, functional tasks, and project activities, as shown in Table 2. The level of service attributes are compatible with the corresponding dependability attributes in [6]. Functional task attributes were added according to the 16 projects’ specific operational contexts. Project activities were extended from cost and schedule to additional project-related activities such as transition and deployment requirements. The meanings and examples of these attributes can be found in [27].
Table 2. Classification of Dependability Attributes
1. Functional tasks / 2. Level of services / 3. Project activities
Administration support / Correctness / User interface standards
Financial support / Availability / Hardware interface
Query / Security / Communications interface
Display / Privacy / Other software interface
Update / Interoperability / Course required process& tools
Storage / Usability / Other development tools
Tracking / Performance / Language
Scheduling / Evolvability / Computer hardware
Notification / Reusability / Computer software
Automation / Computer communication
Social networking / Deployment
Scan and conversion / Transition
Status message / Support environment
Usage analysis / Budget
Others / Schedule
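Table 2’s three attribute classes can be encoded directly as data, which is convenient for tallying ratings per class. The sketch below is a hypothetical encoding of the table, not an artifact from the study.

```python
# Hypothetical encoding of Table 2: three classes of dependability attributes.
DEPENDABILITY_ATTRIBUTES = {
    "functional_tasks": [
        "Administration support", "Financial support", "Query", "Display",
        "Update", "Storage", "Tracking", "Scheduling", "Notification",
        "Automation", "Social networking", "Scan and conversion",
        "Status message", "Usage analysis", "Others",
    ],
    "level_of_service": [
        "Correctness", "Availability", "Security", "Privacy",
        "Interoperability", "Usability", "Performance", "Evolvability",
        "Reusability",
    ],
    "project_activities": [
        "User interface standards", "Hardware interface",
        "Communications interface", "Other software interface",
        "Course required process & tools", "Other development tools",
        "Language", "Computer hardware", "Computer software",
        "Computer communication", "Deployment", "Transition",
        "Support environment", "Budget", "Schedule",
    ],
}

def classify(attribute):
    # Map an attribute name back to its class; None if unknown.
    for cls, names in DEPENDABILITY_ATTRIBUTES.items():
        if attribute in names:
            return cls
    return None

print(classify("Security"))  # → level_of_service
```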
Step 3: Rate dependency strength on these attributes: The WikiWinWin tool [30] provides a rating page for each participant, displaying all the win-win agreements to be rated, as shown in Figure 1. All role players and real-client representatives rated business importance on a scale of 1 to 9:
1-Not Worth Having, 3-Want to Have,
5-Could Have, 7-Should Have, 9-Must Have
Use of intermediate values between two adjacent judgments (2, 4, 6, 8) was allowed. A participant could bypass rating an agreement if he or she felt it did not have direct significance. Participants were encouraged to provide comments for their ratings.
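The scale semantics above (named odd anchors, even intermediates, optional bypass) can be captured in a few lines. This is an illustrative sketch of the scale, not part of the WikiWinWin tool.

```python
# The 1-9 business-importance scale from Step 3: odd values carry the
# named anchors; even values sit between two adjacent judgments.
SCALE = {1: "Not Worth Having", 3: "Want to Have", 5: "Could Have",
         7: "Should Have", 9: "Must Have"}

def label(rating):
    # None models a bypassed agreement (no direct significance).
    if rating is None:
        return "not rated"
    if rating not in range(1, 10):
        raise ValueError("rating must be an integer from 1 to 9")
    if rating in SCALE:
        return SCALE[rating]
    return f"between '{SCALE[rating - 1]}' and '{SCALE[rating + 1]}'"

print(label(9))  # → Must Have
print(label(6))  # → between 'Could Have' and 'Should Have'
```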
In this step, we taught students how to rate meaningfully: a participant might otherwise simply treat every agreement as equally important, e.g., rate all as “Must Have”. To minimize such instances, we encouraged participants to consider the following factors when giving their priority ratings:
· Whether this is a core function: a basic function that the system must have, e.g., user administration is a must-have function for most of the projects.
· Whether this function depends on any other functions: if a function is rated highly, the functions on which it strongly depends should be rated at least that high.
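The second guideline amounts to a consistency constraint over the ratings, which could be checked mechanically. The sketch below is our illustration, not a check the tool performed; the function names and dependencies are hypothetical.

```python
# Flag violations of: any function a highly rated function strongly
# depends on should be rated at least as high as that function.
def inconsistent_ratings(ratings, depends_on):
    issues = []
    for func, deps in depends_on.items():
        for dep in deps:
            if ratings.get(dep, 0) < ratings.get(func, 0):
                issues.append((func, dep))
    return issues

# Hypothetical ratings and dependency structure for one project.
ratings = {"Notification": 9, "Scheduling": 6, "Query": 8}
depends_on = {"Notification": ["Scheduling"], "Scheduling": ["Query"]}
print(inconsistent_ratings(ratings, depends_on))
# → [('Notification', 'Scheduling')]: Notification (9) depends on
#   Scheduling, which is only rated 6.
```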
Figure 1. An Example of Rating Using WikiWinWin
Step 4: Data collection and consolidation: After all participants submitted their ratings, the results were reviewed for completeness. Projects without any priority ratings from a real-client representative were excluded. After this step, 10 projects remained for the study (6 multi-media, 2 community service, 2 other).
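The consolidation filter in Step 4 can be sketched as a simple predicate over project records. The record layout below is a hypothetical illustration, not the study’s actual data format.

```python
# Keep only projects that have at least one real-client rating.
def consolidate(projects):
    return [p for p in projects if p.get("real_client_ratings")]

projects = [
    {"name": "art-gallery", "real_client_ratings": {"A1": 9, "A2": 7}},
    {"name": "thrift-store", "real_client_ratings": {}},  # excluded
]
print([p["name"] for p in consolidate(projects)])  # → ['art-gallery']
```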