Improving Communication Among Project Disciplines Using the Performance/Cost Analysis Model (P/CAM)

Prepared by Bryan Piggott, VP Engineering Inc.

Introduction

Some decisions and actions are more important than others. Some get you much closer to your goals, others only inch you forward. Decide what is important. Spend your time and energy there.[1]

In a process as complicated as building a system, how do you decide what is important? There might be dozens, even hundreds, of teams working to complete the project. Communication is critical to coordinating these teams, and sometimes a tool can help.

Technology projects have a checkered past, with dismal failures, soaring successes, and many projects that just finish. This author believes that communication breakdown is the underlying cause of most failures and that these breakdowns can be reduced through the use of some specific modeling approaches.

The Navy A-12 program was cancelled in 1991 with projected overruns of $2 billion; the legal maneuvering went on for many years after.[2] There is no way to tell at this point, but the project might have turned out differently. Management on both sides paid little attention to what the earned value reporting was communicating to them; the contractor put recovery plans in front of the customer, and that is what they all focused on. At some point outside forces looked at the program and saw an unrecoverable spiral. Termination.

Mr. John D. Warner, President of Boeing Computer Services in 1995, “described the success of the Boeing 777 aircraft as ‘three million parts flying in close formation.’ The 777 contains four million source lines of code, the software systems are all functional, and the 777 is service ready and on schedule. The 777 was designed on CAD/CAM systems and involved thousands of engineers and scientists working in the U.S. and Japan.”[3] In this project communication was paramount and probably one of the principal reasons for its success.

These two projects were fairly close together in time, and both were run by industry giants; why was one a failure and the other a success? This author has concluded that the 777 was a success because Boeing ensured that the approximately 240 design teams (distributed around the world) were able to rapidly and effectively communicate the information needed to work, make decisions, and succeed; the A-12 program's management did not.

According to The Open Group Architecture Framework (TOGAF), there are four architecture domains that need to be addressed in the development of a system architecture:

  1. Business
  2. Data
  3. Applications
  4. Technology

These domains cannot exist in isolation and be effective; they co-exist and must freely communicate. They should be integrated, although many times systems are developed with little direct consideration of the interplay among domains.

Stovepipe environments block success

One of the ‘advances’ in modern management and system engineering is an increase in the number of specialized disciplines. Specialization has many benefits, but there are some negative side effects:

·  It leads to a “stovepipe” mentality regarding one’s own discipline.

·  It results in specialized terms and language; one almost has to be an insider to fully understand discussions within a given discipline.

·  It may result in adversarial relationships among the disciplines on a project.

It is not uncommon to hear comments like: “I do not like the architecture and they will not listen to me, so I am going to build my software the right way.” or even “How can they start the software design until I finish the architecture?”

What are the developers supposed to do while the architecture is being completed? They have budget, staff, tasking, and a schedule; they are not going to look bad to their management chain, so they are going to do something. They will show progress, or at least motion. When groups strike out on their own they may have every intention of synchronizing when their counterparts finish their work. This author's observation is that the planned synchronization generally does not take place because it consumes schedule, and so the domains diverge. The most extreme result of this divergence is project failure! The best that can happen is substantial rework to synchronize work products, resulting in schedule slips and budget overruns.

This “stovepipe” issue is well recognized for systems. Everyone has heard people speak of organizations having “stovepipe” systems that “do not talk with one another.” Organizations with a set of “stovepipe” systems are wasteful and inefficient because of functional overlap (building the same functionality several times) and the difficulty of sharing information across the systems. This author suggests that a similar problem exists within development organizations! Often each development organization operates as a stovepipe, with the concomitant difficulty of communication; the larger the project, the more likely the “stovepipes.”

Communication is key

Why do projects fail? Communication is posed as the main culprit, but that is too general and too difficult to address. In a discussion of why the Tower of Babel failed, Frederick Brooks said that the builders lacked “…in two respects — communication, and its consequent, organization. They were unable to talk with each other; hence they could not coordinate.” He went on to say, “Schedule disaster, functional misfits, and system bugs all arise because the left hand does not know what the right hand is doing.”[4] The classic reasons for project failures are listed below; all of these have a significant communication component.

·  Misunderstood requirements

·  Poor planning

·  Poor tracking

·  Inadequate product management

·  Inadequate processes for product development

·  Limited product/process reviews

·  Limited risk management

All of the reasons stated here are valid; any one has the potential to cause serious disturbances to a project. A few of them together would doom most projects.

Requirements problems are the number one cause of project failure.[5] Misunderstood requirements are primarily a communication breakdown. The customer could be sending an incomplete or confusing message. The requirements analyst could be filtering the message through their own biases. The requirement could be distorted during the formalization process. Sometimes the customer is not even the end user, and the requirements have been filtered through an acquisition specialist shop before engineers see them; in that case the project has no direct means of validation. In any event, the customer is representing the requirements in their own language, using their own terminology; if engineers do not understand what is being said, they will get it wrong when they design and build the system.

Adequate planning is critical. “You have two important functions as a manager: to do things right and to do the right things. The function with the greater impact on your organization is to do the right things.”[6] Everyone needs to know what is expected in order to have any chance of success. If there is planning but it is not written down, then there is a clear opportunity for miscommunication, since there is no written word to fall back on. Good plans that address a reliable set of requirements provide a fighting chance to succeed.

Tracking is also critical. It lets the customer know how the project is going; it also informs the development staff. Without tracking one cannot know whether the plan is adequate, whether additional staff or time is needed, or whether there is a chance to succeed. Tracking data should be communicated to the customer and to all project members; developers cannot be effective in isolation. Co-workers depend on the outcomes of other tasks, and other tasks depend on them. Knowledge of status allows appropriate application of resources while minimizing waste and frustration.
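The A-12 account above turned on ignored earned value data, so for concreteness, here is a minimal Python sketch of the standard earned value quantities a tracking system reports. The formulas are the conventional EVM definitions; the dollar figures are invented for illustration.

    # Standard earned value quantities (PV: planned value,
    # EV: earned value, AC: actual cost, all in the same currency).
    def earned_value_report(pv: float, ev: float, ac: float) -> dict:
        return {
            "schedule_variance": ev - pv,  # negative: behind schedule
            "cost_variance": ev - ac,      # negative: over budget
            "spi": ev / pv,                # schedule performance index
            "cpi": ev / ac,                # cost performance index
        }

    # Example: $1.2M of work planned, $1.0M earned, $1.5M spent to date.
    print(earned_value_report(pv=1_200_000, ev=1_000_000, ac=1_500_000))

A report like this tells both the customer and the developers the same story at the same time, which is precisely the communication the A-12 program lacked.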

Product management is primarily Quality Assurance (QA) and Configuration Management (CM). Communication is central to these. For example, without an appropriate and utilized CM system, one could very well write a data interchange (or any function) based on an obsolete design because the design was retrieved from the wrong file structure.

If processes do not exist, or are ignored, then each technologist will apply their own personal process; they have no other choice. In all probability their process will not be compatible with all of the other personal processes in use. This brand of anarchy is at best “ad hoc, and at worst chaotic”[7]: the definition of Capability Maturity Model Integration (CMMI) level 1.

Peer reviews and audits have been shown to be among the most effective means of detecting defects close to the time the defect is injected. A variety of authors have put the cost of correction at up to 100 times greater if the defect is not found until after deployment.

Risk management is an activity that is often inconsistently planned and executed, with lip service paid to tools like the risk taxonomy from the Carnegie Mellon Software Engineering Institute (SEI). There are managers who want risk management as long as it does not consume too much budget or time. Most particularly, they do not want their key people involved in risk identification or impact assessments, since it would take them away from ‘design’.

How we can improve communication and understanding

Seek Understanding, not just information. What you really need is to understand. When you understand something, you know what to do. Information, without understanding, does not let you know what to do.[8]

What will improve the success rate of projects? Certainly better communication will help. How is that achieved? There are a variety of ways, but this author suggests the use of a comprehensive model that integrates information from the following disciplines:

  1. Mission
  2. Architecture
  3. Data
  4. System Engineering
  5. Software Engineering
  6. Operations
  7. Logistics
  8. Cost

Integrating these disciplines forces a rigor by which the transformation from one discipline to another is formalized. Integration means that an analyst can make a change in mission, for example, and see the effect on software engineering, or on cost. The analyst can modify the software functionality being delivered and see the impact on purchased hardware and on the software development schedule (a toy sketch of this kind of propagation follows below). A specific and unique model does not have to be constructed to answer these questions, or many others, as has been the case in the past.
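To make the idea of cross-discipline propagation concrete, here is a minimal Python sketch in which a single mission-level driver flows through to engineering and cost quantities. Every name, scaling factor, and cost figure is a hypothetical placeholder, not P/CAM's internal logic.

    # Toy sketch of discipline integration: one mission parameter
    # (daily transaction volume) propagates to derived engineering
    # and cost quantities. All coefficients are invented placeholders.
    from dataclasses import dataclass

    @dataclass
    class Mission:
        transactions_per_day: float  # mission-level driver

    def processing_demand_mips(m: Mission) -> float:
        # System engineering view: CPU demand scales with volume.
        return m.transactions_per_day * 0.02

    def software_size_ksloc(m: Mission) -> float:
        # Software engineering view: a simple linear sizing model.
        return 50 + m.transactions_per_day / 10_000

    def development_cost_usd(ksloc: float) -> float:
        # Cost view: flat cost-per-KSLOC stand-in for a real
        # parametric estimator (e.g., a COCOMO-class model).
        return ksloc * 150_000

    for volume in (100_000, 200_000):  # change the mission...
        m = Mission(transactions_per_day=volume)
        size = software_size_ksloc(m)
        print(volume, processing_demand_mips(m), size,
              development_cost_usd(size))  # ...and watch it propagate

The point is not the arithmetic but the linkage: once the transformations between disciplines are explicit, a mission change answers engineering and cost questions without building a new one-off model.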

This is a tall order, one that has not been accomplished until now. There have been methods and tools that approach this comprehensive model view. Architectural frameworks address mission, architecture, and some system engineering but largely ignore the remainder. The Rational Unified Process (RUP) and the Unified Modeling Language (UML) go a little farther and address data. Even so, these are primarily static models, and it takes substantial time to analyze alternatives within these constructs.

The Performance/Cost Analysis Model (P/CAM) addresses the gaps that the prior tools have missed and enhances the value brought by these other tools; see Figure 1. P/CAM has been through a verification and validation process; the final report is pending, with positive results expected.

Figure 1 - P/CAM Structure


P/CAM is the culmination of many years of work predicting cost and schedule for complex systems based on limited data. P/CAM is a patent-applied-for intellectual concept and framework with which one can build a model that represents mission and architectural components in order to study the technical, cost, and logistic impacts of changes to mission or architecture. P/CAM can be applied to single-site systems or to complex multi-site, multi-release environments.

P/CAM is a model-integration engine relating several different classes of information (see the list above); it currently produces five useful outputs. Other models that influence or generate costs can be integrated into P/CAM to support analysis of alternatives and the production of a Life Cycle Cost Estimate (LCCE). For example, a system disposal model is on the drawing board for addition to P/CAM. Intermediate results directly support engineering trade analysis and development of the Cost Analysis Requirements Document (CARD). Available CARD data includes the requirements for processing, storage, communications, operations staff, and facilities, plus a candidate Bill of Materials (BOM) for COTS software and hardware. Hardware is automatically selected by P/CAM to satisfy the stated mission activity, and the BOM is generated automatically (a toy illustration of such selection follows).
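As an illustration only, the following Python sketch shows the general shape of demand-driven hardware selection: given a processing demand, pick catalog entries to cover it and emit a candidate BOM. The catalog, the greedy selection rule, and all numbers are invented for this sketch and are not P/CAM's actual algorithm.

    # Hypothetical demand-driven hardware selection producing a BOM.
    import math

    CATALOG = [  # (model, capacity in MIPS, unit cost in USD)
        ("server-small", 2_000, 8_000),
        ("server-large", 10_000, 30_000),
    ]

    def candidate_bom(demand_mips: float) -> list[tuple[str, int]]:
        # Greedy rule: take the best cost-per-MIPS unit, rounded up.
        model, mips, _ = min(CATALOG, key=lambda e: e[2] / e[1])
        return [(model, math.ceil(demand_mips / mips))]

    print(candidate_bom(demand_mips=43_000))  # [('server-large', 5)]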

P/CAM is constructed using Phoenix Integration's ModelCenter product, which makes it simple to perform parametric sensitivity studies, design of experiments, and optimizations. Analysis of alternatives becomes an approachable task with ModelCenter and the P/CAM structure; a plain-Python stand-in for such a study is sketched below.
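The sketch below shows the kind of full-factorial parametric study such a tool automates. It deliberately does not use ModelCenter's API; the response function and factor values are invented stand-ins for a wrapped P/CAM run.

    # Hypothetical full-factorial sensitivity sweep over two factors.
    from itertools import product

    def total_cost(sites: int, users_per_site: int) -> float:
        # Placeholder response model standing in for a P/CAM run.
        return 2e6 * sites + 300.0 * sites * users_per_site

    for sites, users in product([1, 3, 5], [500, 1_000]):
        print(f"sites={sites} users={users} "
              f"cost=${total_cost(sites, users):,.0f}")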

Two instances of P/CAM have been constructed to date using Department of Defense Architecture Framework (DoDAF) products as inputs. These instances have allowed the customer to study the impact of changes in mission, function, hardware or site constraints in terms of:

·  Processing demand

·  Storage volumes

·  Impact of ingest alternatives

·  Impact of analyst community alternatives

·  External communications volumes

·  Computer hardware (i.e., type, space requirements, power requirements, cooling requirements, and support staff requirements)

·  Development cost

·  Integration cost

·  Deployment cost

·  Operations cost

In both instances P/CAM provided the bulk of the data included in the CARD submitted to the DoD customer. One observed side effect of using P/CAM was the opportunity to identify defects in the architecture products and correct them before those products were sent forward to support design.

The following few pages will provide the reader with a more detailed understanding of P/CAM, its components and how they have been combined.

Detailed discussion of P/CAM components

Executables

P/CAM has many components; three of these are the principal executables at the heart of the model. The first is the Performance Analysis Model; it relates the business activities to the architecture/system functions to produce the computer demand, permanent storage demand, and external communications demand for the stated mission. The second is the Cost Analysis Model; it relates the Work Breakdown Structure (WBS) to the architecture/system functions to produce a system-cost database containing cost details for each WBS element/function pair. Finally, there is the SQL component, which analyzes the cost records to generate a life cycle cost model and BOM. Since the cost records are stored in a database, the analyst can examine them in any way that is useful via SQL queries; a miniature end-to-end sketch of this flow follows.
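For readers who want something concrete, here is a miniature Python sketch of the three-step flow just described: a performance step that turns activity loadings into demand, a cost step that writes WBS-element/function cost records to a database, and an SQL roll-up. The schema, names, and every number are hypothetical and bear no relation to real P/CAM data.

    # (1) Performance step: demand from activity rates x unit loads.
    activity_rates = {"ingest": 4_000, "analyze": 1_500}  # events/hour
    load_per_event = {"ingest": 0.5, "analyze": 2.0}      # MIPS per event
    demand = sum(activity_rates[a] * load_per_event[a]
                 for a in activity_rates)
    print(f"processing demand: {demand:,.0f} MIPS")

    # (2) Cost step: one record per WBS-element/function pair.
    import sqlite3
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE cost "
               "(wbs TEXT, function TEXT, phase TEXT, usd REAL)")
    db.executemany("INSERT INTO cost VALUES (?, ?, ?, ?)", [
        ("1.1", "ingest",  "development", 2_400_000),
        ("1.1", "ingest",  "operations",    600_000),
        ("1.2", "analyze", "development", 3_100_000),
    ])

    # (3) SQL step: roll the records up however is useful.
    query = "SELECT phase, SUM(usd) FROM cost GROUP BY phase"
    for phase, usd in db.execute(query):
        print(f"{phase}: ${usd:,.0f}")

Because the cost records sit in an ordinary relational table, any roll-up the analyst needs, by WBS element, by function, by phase, or by site, is one query away rather than a new model.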