List of Tables
Preface
Table P1 Software Overrun Case Studies
Chapter 2
Table 2.1 User Function Types
Table 2.2 FP Counting Weights
Table 2.3 UFP Complexity Weights
Table 2.4 UFP to SLOC Conversion Ratios
Table 2.5 Rating Scale for Software Understanding Increment (SU)
Table 2.6 Rating Scale for Assessment and Assimilation Increment (AA)
Table 2.7 Rating Scale for Programmer Unfamiliarity (UNFM)
Table 2.8 Adapted Software Parameter Constraints and Guidelines
Table 2.9 Variation in Percentage of Automated Re-engineering
Table 2.10 Scale Drivers for COCOMO II Models
Table 2.11 Precedentedness Rating Levels
Table 2.12 Development Flexibility Rating Levels
Table 2.13 RESL Rating Levels
Table 2.14 TEAM Rating Components
Table 2.15 PMAT Ratings for Estimated Process Maturity Level (EPML)
Table 2.16 KPA Rating Levels
Table 2.17 RELY Cost Driver
Table 2.18 DATA Cost Driver
Table 2.19 Component Complexity Rating Levels
Table 2.20 CPLX Cost Driver
Table 2.21 RUSE Cost Driver
Table 2.22 DOCU Cost Driver
Table 2.23 TIME Cost Driver
Table 2.24 STOR Cost Driver
Table 2.25 PVOL Cost Driver
Table 2.26 ACAP Cost Driver
Table 2.27 PCAP Cost Driver
Table 2.28 PCON Cost Driver
Table 2.29 APEX Cost Driver
Table 2.30 LTEX Cost Driver
Table 2.31 PLEX Cost Driver
Table 2.32 TOOL Cost Driver
Table 2.33 SITE Cost Driver
Table 2.34 SCED Cost Driver
Table 2.35 Early Design and Post-Architecture Effort Multipliers
Table 2.36 PERS Cost Driver
Table 2.37 RCPX Cost Driver
Table 2.38 PDIF Cost Driver
Table 2.39 PREX Cost Driver
Table 2.40 FCIL Cost Driver
Table 2.41 RELY Maintenance Cost Driver
Table 2.42 MCS Project Phase Distributions
Table 2.43 Effects of Reliability Level on MCS Life Cycle Costs
Table 2.44 Sizing Equation Symbol Descriptions
Table 2.45 Post-Architecture Model Symbol Descriptions
Table 2.46 Early Design Symbol Descriptions
Table 2.47 TDEV Equation Symbol Descriptions
Table 2.48 Scale Factors for COCOMO II Models
Table 2.49 Cost Driver Ratings for Post-Architecture Model
Table 2.50 COCOMO II.2000 Post-Architecture Calibrated Values
Table 2.51 COCOMO II.2000 Early Design Calibrated Values
Table 2.52 COCOMO II.1997 Post-Architecture Calibrated Values
Table 2.53 Definition Checklist for Source Statements Counts
Table 2.54 Definition Checklist for Source Statements Counts (continued)
Table 2.55 Definition Checklist for Source Statements Counts (continued)
Table 2.56 Definition Checklist for Source Statements Counts (continued)
Table 2.57 Definition Checklist for Source Statements Counts (continued)
Table 2.58 COCOMO Model Comparisons
Chapter 3
Table 3.1 TPS Software Capabilities
Table 3.2 Size for Identified Functions
Table 3.3 COCOMO Model Scope
Table 3.4 Summary of WBS Estimate
Table 3.5 Scale Factor Ratings and Rationale
Table 3.6 Product Cost Driver Ratings and Rationale
Table 3.7 Platform Cost Driver Ratings and Rationale
Table 3.8 Personnel Cost Driver Ratings and Rationale
Table 3.9 Project Cost Driver Ratings and Rationale
Table 3.10 Risk Matrix
Table 3.11
Table 3.12
Table 3.13
Table 3.14
Table 3.15
Table 3.16 ARS Software Components
Table 3.17 ARS Prototype Application Elements
Table 3.18 ARS Prototype Sizes
Table 3.19 ARS Breadboard System Early Design Scale Drivers
Table 3.20 ARS Breadboard System Early Design Cost Drivers
Table 3.21 ARS Breadboard System Size Calculations
Table 3.22 ARS Full Development Scale Drivers
Table 3.23 ARS Full Development Cost Drivers (Top Level)
Table 3.24 ARS Full Development System Size Calculations
Table 3.25 Radar Unit Control Detailed Cost Drivers – Changes from Top-level
Table 3.26 Radar Item Processing Detailed Cost Drivers – Changes from Top-level
Table 3.27 Radar Database Detailed Cost Drivers – Changes from Top-level
Table 3.28 Display Manager Detailed Cost Drivers – Changes from Top-level
Table 3.29 Display Console Detailed Cost Drivers – Changes from Top-level
Table 3.30 Built In Test Detailed Cost Drivers – Changes from Top-level
Chapter 4
Table 4.1 Model Comparisons
Table 4.2 Converting Size Estimates
Table 4.3 Mode/Scale Factor Conversion Ratings
Table 4.4 Cost Driver Conversions
Table 4.5 TURN and TOOL Adjustments
Table 4.6 Estimate Accuracy Analysis Results
Table 4.7 COCOMO II.1997 Highly Correlated Parameters
Table 4.8 Regression Run Using 1997 Dataset
Table 4.9a RUSE – Expert-Determined A Priori Rating Scale, Consistent with 12 Published Studies
Table 4.9b RUSE – Data-Determined Rating Scale, Contradicting 12 Published Studies
Table 4.10 COCOMO II.1997 Values
Table 4.11 Prediction Accuracy of COCOMO II.1997
Table 4.12 COCOMO II.2000 “A-Priori” Rating Scale for Develop for Reusability (RUSE)
Table 4.13 Regression Run Using 2000 Dataset
Table 4.14 COCOMO II.2000 Values
Table 4.15 Prediction Accuracies of Bayesian A-Posteriori COCOMO II.2000
Table 4.16 Prediction Accuracies Using the Pure-Regression, the 10% Weighted-Average Multiple-Regression, and the Bayesian-Based Models Calibrated Using the 1997 Dataset of 83 Datapoints and Validated Against 83 and 161 Datapoints
Table 4.17 Calibrating the Multiplicative Constant to Project Data
Table 4.18 Regression Run: Calibrating Multiplicative Constant to Project Data
Table 4.19 Improvement in Accuracy of COCOMO II.2000 Using Locally Calibrated Multiplicative Constant, A
Table 4.20 Prediction Accuracy of COCOMO II.2000
Table 4.21 Schedule Prediction Accuracy of COCOMO II.2000
Table 4.22 Regression Run: Calibrating Multiplicative and Exponential Constants to Project Data
Table 4.23 Improvement in Accuracy of COCOMO II.2000 Using Locally Calibrated Constants, A and B
Table 4.24 Consolidating Analyst Capability and Programmer Capability
Chapter 5
Table 5.1 Object Point (OP) Data [Banker et al., 1991]
Table 5.2 Application Point Estimation Accuracy on Calibration Data
Table 5.3 RVHL Rating Scale
Table 5.4 RVHL Multiplier Values
Table 5.5 Subjective Determinants of Bureaucracy
Table 5.6 DPRS Rating Scale
Table 5.7 DPRS Multiplier Values for Each Rating
Table 5.8 CLAB Contributing Components
Table 5.9 TEAM Rating Scale
Table 5.10 SITE Rating Scale
Table 5.12 APEX Rating Scale
Table 5.13 PLEX Rating Scale
Table 5.14 LTEX Rating Scale
Table 5.15 PREX Rating Scale
Table 5.16 CLAB Rating Scale
Table 5.17 CLAB Multiplier Values for Each Rating
Table 5.18 RESL Rating Scale Based on Percentage of Risks Mitigated
Table 5.19 RESL Rating Scale Based on Design Thoroughness/Risk Elimination by PDR
Table 5.20 RESL Multiplier Values for Each Rating
Table 5.21 PPOS Rating Scale
Table 5.22 PPOS Multiplier Values for Each Rating
Table 5.23 PERS Rating Scale
Table 5.24 PCAP Rating Scale
Table 5.25 PCON Rating Scale
Table 5.26 PERS Rating Scale for CORADMO
Table 5.27a Multiplier Ratings (Best Schedule Compression)
Table 5.27b Results for a 32 KSLOC Project (Effort: 120 Person-Months; Schedule: 12.0 Months)
Table 5.27c Results for a 512 KSLOC Project (Effort: 2580 Person-Months; Schedule: 34.3 Months)
Table 5.28 COTS Assessment Attributes
Table 5.29 Dimensions of Tailoring Difficulty
Table 5.30 Final Tailoring Activity Complexity Rating Scale
Table 5.31 COTS Glue Code Effort Adjustment Factors
Table 5.32 Defect Introduction Drivers
Table 5.33 Programmer Capability (PCAP) Differences in Defect Introduction
Table 5.34 Initial Data Analysis on the Defect Introduction Model
Table 5.35 The Defect Removal Profiles
Table 5.36 Results of 2-Round Delphi Exercise for Defect Removal Fractions
Table 5.37 Defect Density Results from Initial Defect Removal Fraction Values
Table 5.38 CORADMO Drivers
Table 5.39
Table 5.40
Table 5.41 Rationales for the SIZE Factor Value over Time and Technologies
Appendix A
Table A.1 COCOMO II Waterfall Milestones
Table A.2 MBASE and Rational Unified Software Development Process Milestones
Table A.3 Detailed LCO and LCA Milestone Content
Table A.4 Waterfall Phase Distribution Percentages
Table A.5 MBASE and RUP Phase Distribution Percentages
Table A.6 Inception and Transition Phase Effort and Schedule Drivers
Table A.7 Software Activity Work Breakdown Structure
Table A.8 Rational Unified Process Default Work Breakdown Structure [Royce, 1998]
Table A.9 COCOMO II MBASE/RUP Default Work Breakdown Structure
Table A.10a Plans and Requirements Activity Distribution
Table A.10b Product Design Activity Distribution
Table A.10c Programming Activity Distribution
Table A.10d Integration and Test Activity Distribution
Table A.10e Development Activity Distribution
Table A.10f Maintenance Activity Distribution
Table A.11 COCOMO II MBASE/RUP Phase and Activity Distribution Values
Table A.12 Example Staffing Estimate for MCS Construction Phase
Appendix B
Table B.1 Incremental Estimation Output
Table B.2 Incremental Effort Estimation Results
© 1999-2000 USC Center for Software Engineering. All Rights Reserved.