System Integration Test Workbench
Sample System Integration Test Workbench
Rice Consulting Services, Inc.
P.O. Box 891284
Oklahoma City, OK 73189
405-793-7449
Document Change History
Revision: 1.0 / Originator: Joe Smith / Eff. Date: Draft / Requirement ID: N/A
Description / Reason for Change: Original version 1.0. No changes (original).
Approval Page
I have authored this document and approve its contents, based on the scope of my responsibility and expertise.
Joe Smith – Author, QA Analyst / Date / Comments* (Page #, “N/A” or “None”)
I have reviewed this document and approve its contents, based on the scope of my responsibility and expertise.
Mary Long – QA Director / Date / Comments* (Page #, “N/A” or “None”)
Jerry Quinn – QA Manager / Date / Comments* (Page #, “N/A” or “None”)
Randall W. Rice – Consultant, Rice Consulting Services, Inc. / Date / Comments* (Page #, “N/A” or “None”)
Paulette Johnson – Test Team Leader / Date / Comments* (Page #, “N/A” or “None”)
* The page number of any cosmetic and/or minor changes found in the course of the approval process is indicated in this column. This column must be addressed; absence of comments is indicated by “none” or “N/A.”
Table of Contents
1. Preface
2. Overview
3. System Integration Test Input
3.1 Early Project Deliverables
3.2 Design Deliverables
3.3 System Development Deliverables
4. System Integration Test Process
4.1 Step 1 - Define System Integration Testing Strategy
4.2 Step 2 - Write System Integration Test Plan
4.2.1 Task 1 - Set System Integration Test Objectives
4.2.2 Task 2 - Define system functions, software modules, system interfaces, structural attributes, and business processes to be tested
4.2.3 Task 3 - Define Test Responsibilities
4.2.4 Task 4 - Define Major Tasks and Deliverables
4.2.5 Task 5 - Define the Types of Testing to be Performed
4.2.6 Task 6 - Define Pass/Fail Criteria
4.2.7 Task 7 - Define Risks and Contingencies
4.2.8 Task 8 - Select Test Tools
4.2.9 Task 9 - Define Test Cases
4.2.10 Task 10 - Build Test Data
4.2.11 Task 11 - Finalize System Integration Test Plan
4.3 Step 3 - Perform Testing
4.4 Step 4 - Evaluate Testing
5. System Integration Test Quality Control
6. System Integration Test Output
7. Testing Standards
8. Test Tools
9. QC Checklists
10. Appendix A - Critical Success Factors
11. Appendix B - Test Types
12. Appendix C - Test Case Worksheet
1. Preface
The following conventions are used in this document:
Bold / Used for information that is typed, pressed, or selected in executables and instructions. For example, select connect to host.
Italics / Used for file names, directories, scripts, commands, user IDs, document names, and Bibliography references, and for any unusual computerese the first time it is used in text.
Underline / Used for emphasis.
Arrows < > / Used to identify keys on the keyboard. For example, <Return>.
“Quotation Marks” / Used to identify informal, computer-generated queries and reports, or coined names, and to clarify a term when it appears for the first time. For example, “Data-Generation Report.”
Courier Font / Used to denote anything as it appears on the screen or in command lines. For example, tar xvf dev/rmt/3mm.
Capitalization / Used to identify keys, screen icons, screen buttons, and field and menu names.
2. Overview
This process describes the steps to plan, perform, and evaluate system integration testing.
The System Integration Test Workbench has input which feeds the process. Quality control (QC) is performed to ensure the test meets standards. If QC is passed, the output is delivered. If QC is not passed, the process is repeated until QC is passed.
The System Integration Test Workbench is supported by testing standards, guidelines, and test tools.
3. System Integration Test Input
3.1 Early Project Deliverables
The following items are deliverables from the Project Feasibility, Requirements Definition, and Prototyping phases of the System Development Life Cycle:
- Business process analysis
- High-level business cases
- High-level business requirements
- High-level regulatory requirements
- Prototypes
- System requirements
- Project plan with work breakdown structure
- Testing thresholds (levels or phases of testing to be performed)
- Data flow diagrams
- High-level time estimates
- Project Test Plan
- If the decision is to buy the software, a preliminary package selection
This input provides the information needed to write the System Integration Test Plan.
3.2 Design Deliverables
Deliverables from the Technical Design phase to be used as input to the System Integration Test Workbench include:
- the technical blueprint (basic system design document)
- detailed data flow diagrams
This input is used to develop detailed test cases.
3.3 System Development Deliverables
Deliverables from the System Development phase to be used as input to the System Integration Test Workbench include:
- Software modules
- System builds
- Completed system
This input comprises the items to be tested.
4. System Integration Test Process
4.1 Step 1 - Define System Integration Testing Strategy
The testing strategy sets the direction for the system integration testing effort and will be shaped by:
- The type of project (Rapid Application Development, contracted/purchased, etc.)
- The type of software (Graphical User Interfaces, Object-oriented, etc.)
- The scope of testing
- The extent of testing
- Project schedule and timeframes
- Project risks
- System risks
- Critical Success Factors (correctness, reliability, usability, etc. - see Appendix A)
4.2 Step 2 - Write System Integration Test Plan
The purpose of this step is to write a high-level test plan that describes system integration testing. There are several tasks in this step.
4.2.1 Task 1 - Set System Integration Test Objectives
The test objectives should reflect the system objectives. At the system integration level, the test objectives should encompass the entire system and should be testable.
4.2.2 Task 2 - Define system functions, software modules, system interfaces, structural attributes, and business processes to be tested
System Integration Testing should be designed to validate that all system functions, software modules, and system interfaces work correctly according to user requirements. In addition, the system integration test should also validate that the system will support the business processes and that required structural attributes (usability, performance, etc.) are present.
To support this objective, an inventory of system functions, software modules, system interfaces, structural attributes, and business processes must be taken. The input for this inventory will be the technical blueprint and knowledge of the business processes to be supported by the system. In turn, these functions, modules, interfaces, attributes, and processes will be the basis for identifying system integration test cases.
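As an illustration only, the sketch below (in Python) shows one way such an inventory might be recorded so that each inventoried item can be traced to the test cases derived from it. The structure, field names, and example entries are assumptions made for illustration; they are not prescribed by this workbench.

from dataclasses import dataclass, field
from typing import List

@dataclass
class InventoryItem:
    item_id: str                  # e.g., "FUNC-001" (hypothetical numbering scheme)
    category: str                 # system function, software module, system interface,
                                  # structural attribute, or business process
    description: str
    test_case_ids: List[str] = field(default_factory=list)   # test cases traced to this item

# Example entries; actual entries come from the technical blueprint and
# knowledge of the business processes the system must support.
inventory = [
    InventoryItem("FUNC-001", "system function", "Customer account inquiry"),
    InventoryItem("INTF-001", "system interface", "Nightly feed to the billing system"),
    InventoryItem("ATTR-001", "structural attribute", "On-line response time"),
]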
4.2.3 Task 3 - Define Test Responsibilities
The test responsibilities should be defined in terms of in-house developed systems and contracted/purchased systems.
4.2.4 Task 4 - Define Major Tasks and Deliverables
This task defines the major testing-related tasks and what should be delivered from them. In addition, the people responsible for the tasks should be specified, along with when the tasks will be performed.
4.2.5 Task 5 - Define the Types of Testing to be Performed
There are many types of testing, such as conversion, compliance, performance, and usability testing. This task matches the test objectives to the appropriate type(s) of testing. A list of test types is found in Appendix B.
4.2.6 Task 6 - Define Pass/Fail Criteria
This task defines which criteria must be met for the system to pass or fail testing.
4.2.7 Task 7 - Define Risks and Contingencies
This task defines the system risks, which include the impact of system failure and the likelihood of system failure. Also included in this task are the criteria for suspending and resuming testing.
4.2.8 Task 8 - Select Test Tools
This task involves the identification, selection, acquisition, and implementation of test tools for the system integration test. The test tool selection will depend on the technical environment, the test objectives, the types of testing to be performed, and the people who will be using the tools for testing.
In this context, a test tool is defined as any vehicle which assists in testing. Test tools can be employed during test planning, test execution, and test evaluation. Test tools can be either manual (e.g., checklists, decision tables, etc.) or automated (test case generators, capture/playback, etc.).
The selection and implementation of test tools is not a testing solution in itself. There must first be testing processes and standards in place, and the people using the tools must understand testing concepts and how to use the tool effectively.
4.2.9 Task 9 - Define Test Cases
This task defines the test cases to be performed during the system integration test. A test case consists of a condition, an expected result, and a procedure for executing the test case. A worksheet has been designed to assist in documenting test cases (see Appendix C).
The main objective of the test cases is to validate as much of the system functionality and behavior as possible. Because the number of possible test case combinations is astronomical, test cases should be based on risk, with the highest-risk cases receiving the highest priority.
Since the system integration test cases will contain detailed instructions for testing, they should be referenced by the System Integration Test Plan, but not necessarily included in the body of the test plan.
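For illustration, the sketch below (in Python) captures the three test case elements named above: a condition, an expected result, and an execution procedure, plus a risk rating used for prioritization. The field names and example values are assumptions; the prescribed documentation format is the worksheet in Appendix C.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    case_id: str
    condition: str               # the condition being tested
    expected_result: str         # what the system should do under that condition
    procedure: List[str] = field(default_factory=list)   # steps to execute the test case
    risk: str = "medium"         # higher-risk cases receive higher priority

example_case = TestCase(
    case_id="SIT-001",
    condition="Order total exceeds the customer's credit limit",
    expected_result="Order is held and a credit-review message is displayed",
    procedure=["Log on as an order clerk", "Enter an order above the credit limit", "Submit the order"],
    risk="high",
)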
4.2.10 Task 10 - Build Test Data
This task builds data to support the test cases. The test data may be drawn from existing data, if the data structure is correct, and/or created specifically to validate test conditions.
The system integration test data must be backed up and maintained on a regular basis to prevent accidental loss of data and to enable regression testing.
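The Python sketch below illustrates this approach under assumed file names and columns: reuse an existing data extract whose structure fits, add records created to hit specific test conditions, and keep a backup copy for regression testing. It is an example only, not part of the workbench.

import csv
import shutil
from pathlib import Path

existing_extract = Path("customer_extract.csv")    # hypothetical existing data extract
test_data_file = Path("sit_test_data.csv")
backup_file = Path("sit_test_data_backup.csv")

# Records created to validate specific test conditions; assumes the extract
# uses these same columns.
crafted_records = [
    {"cust_id": "T0001", "credit_limit": "0", "status": "ACTIVE"},      # boundary: zero credit limit
    {"cust_id": "T0002", "credit_limit": "99999", "status": "CLOSED"},  # closed account, high limit
]

with existing_extract.open() as src, test_data_file.open("w", newline="") as out:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:            # reuse the existing data as-is
        writer.writerow(row)
    for record in crafted_records:
        writer.writerow(record)   # add the condition-specific records

shutil.copyfile(test_data_file, backup_file)   # back up the test data for regression runs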
4.2.11 Task 11 - Finalize System Integration Test Plan
This task involves compiling all of the test information described in the previous tasks into the System Integration Test Plan, which is based on the System Integration Test Plan Standard. After the plan has been finalized, it should be reviewed by anyone having an interest in the test, including the project team, the test team, and project management. There is also a standardized approval form that should be used to document that all key personnel have read and approved the System Integration Test Plan.
4.3 Step 3 - Perform Testing
System integration testing will be performed in builds. Each build will be a functional area of the system that can be tested independently. As each build is completed, it can be tested with other builds. The purpose of the System Integration Test Plan is to describe how the system will be tested to ensure that all components work correctly and interface with each other correctly.
The build concept allows system integration testing to be initiated as major functional areas of the system are completed.
As defects are found and fixed, regression testing should be performed. Complete regression testing requires the use of an automated test tool to repeat a previous test and compare the results for differences. A lesser degree of regression testing can be achieved using manual methods, such as checklists and written test scripts. The downside to manual regression testing is the potential for human error in executing tests and evaluating test results.
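As an illustration of the automated approach described above, the Python sketch below reruns a previously executed test case and compares its output against a saved baseline, reporting any differences. The run_test_case hook, the baseline directory, and the file naming are assumptions for illustration; any capture/playback or comparison tool that repeats the test and flags differences serves the same purpose.

import difflib
from pathlib import Path

def run_test_case(case_id: str) -> str:
    """Placeholder: execute the test case against the system under test and return its output."""
    raise NotImplementedError("Hook this up to the actual system under test")

def regression_check(case_id: str, baseline_dir: Path = Path("baselines")) -> bool:
    """Return True if the rerun output matches the saved baseline; otherwise print the differences."""
    baseline = (baseline_dir / f"{case_id}.txt").read_text()
    current = run_test_case(case_id)
    if current == baseline:
        return True                                   # no differences: the regression test passes
    diff = difflib.unified_diff(baseline.splitlines(), current.splitlines(),
                                fromfile="baseline", tofile="current", lineterm="")
    print("\n".join(diff))                            # differences become input to defect analysis
    return False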
4.4 Step 4 - Evaluate Testing
This step is to determine if the test passed or failed, and to report the results of system integration testing.
In the System Integration Test Workbench, test evaluation is performed as each test case is executed, as defects are identified, and as each build is tested. As each test case is executed, a pass/fail determination is made. If the test case fails, a defect report should be filed for the incident.
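The Python sketch below illustrates this pass/fail determination and the filing of a defect report when a test case fails. The fields follow the defect log items listed in Worksheet #9 of the QC Checklists; the representation and example values are assumptions, not a prescribed format.

from dataclasses import dataclass
from datetime import date

@dataclass
class DefectReport:
    defect_id: str            # unique ID number
    description: str          # defect description
    date_reported: date
    reported_by: str
    test_case_id: str
    severity: str             # defect severity level
    priority: str             # defect priority level
    assigned_to: str = ""     # assigned to fix
    status: str = "OPEN"

def evaluate_test_case(test_case_id: str, expected: str, actual: str, tester: str):
    """Make the pass/fail determination and file a defect report if the test case fails."""
    if actual == expected:
        return "PASS", None
    defect = DefectReport(
        defect_id="DEF-0001",                       # numbering scheme is illustrative only
        description=f"{test_case_id}: expected '{expected}', got '{actual}'",
        date_reported=date.today(),
        reported_by=tester,
        test_case_id=test_case_id,
        severity="TBD",                             # assigned during defect review
        priority="TBD",
    )
    return "FAIL", defect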
If the system integration test lasts longer than a month, periodic status reports may be generated to keep management abreast of testing progress. The System Integration Test Status Report Standard should be used as a basis for status reports.
There will be a final system integration test report. This report will summarize the system integration testing efforts and the conclusions drawn from system integration testing. The System Integration Test Summary Report Standard should be used as a basis for the final summary report from system integration testing.
5. System Integration Test Quality Control
Quality control is performed on the system integration test process and deliverables to ensure all steps in the process have been performed and the deliverables are correct. The vehicle for system integration test QC is a set of checklists. Each step in the system integration testing process should have a checklist. The QC checklists are found at the end of the System Integration Test Workbench.
6. System Integration Test Output
The output of the System Integration Test Workbench includes:
- the System Integration Test Plan
- system integration test cases
- a completely tested system
- test defect reports
- test status reports
- reusable test products (test plans, test cases, etc.)
7. Testing Standards
Standards and guidelines support the System Integration Test Workbench. There are standards for the:
- System Integration Test Plan
- System Integration Test Cases
- System Integration Test Summary Report
- System Integration Test Status Report
8. Test Tools
At the time of this writing, there are no automated test tools in place to support system integration testing. It is expected that a future project will be performed to select and acquire the necessary tools to support system integration testing.
9. QC Checklists
RESPONSE
ITEM / YES / NO / N/A / COMMENTS
WORKSHEET #1
TEST STRATEGY
1.Has the type of project been identified?
a) Traditional
b) Rapid Application Development (RAD)
c) Client/Server
d) Object-oriented
e) Computer Aided Software Engineering (CASE)
f) Contracted/purchased system
2.Has the type of software been identified?
a) On-line
b) Batch
c) Graphical User Interface (GUI)
d) Object-oriented
3.Has the scope of testing been defined?
4.Has the extent of testing been defined (i.e., when will testing be considered complete?)
5.Have testing responsibilities been assigned?
6.Has an impact analysis been performed?
7.Has a risk analysis been performed?
8.Have contingency plans been defined?
a) Schedule overruns
b) Cost overruns
c) Personnel deficiencies
RESPONSE
ITEM / YES / NO / N/A / COMMENTS
WORKSHEET #2
TEST TEAM SELECTION
1.Has a determination been made as to what test strategy is appropriate for this application?
2.Have the skills needed to implement the test strategy been determined?
3.Has a rough commitment to perform each type of testing been identified?
4.Have the individuals who possess the needed skills been identified?
5.If those individuals cannot be found within the company, have alternative sources been explored to obtain the needed skills (e.g., outside consultants)?
6.Has the specific use of each selected individual in testing been determined (i.e., how each individual will be utilized)?
7.Have the selected individuals' managers been identified (i.e., those who can authorize the individuals to work on the test team)?
8.Have the individuals' supervisors been approached, and the case made, for the selected individuals to be on the test team?
RESPONSE
ITEM / YES / NO / N/A / COMMENTS
WORKSHEET #3
TEST TEAM ASSIGNMENT
1.Have all the individual skills needed for testing been defined?
2.Has someone been included on the test team who possesses the necessary skills?
3.Do the individuals on the test team have adequate time authorized to fulfill their test responsibilities?
4.Do the individuals on the test team know their area of responsibilities (i.e., test assignments)?
5.Do the individuals agree they have the necessary time and skills to fulfill their test assignment?
6.Will those individuals be available during the time spans when testing will occur?
7.Do the individuals on the test team have adequate authority to fulfill their responsibilities?
8.Do the test assignments indicate what testing responsibilities can be delegated?
RESPONSE
ITEM / YES / NO / N/A / COMMENTS
WORKSHEET #4
TEST OBJECTIVES
1.Do the test objectives reflect the system application objectives?
2.Were the test objectives defined by the entire test team?
3.Is each test objective testable?
4.Are the completion criteria measurable (i.e., there is no question as to whether or not the test objective has been accomplished)?
5.Are the test objectives realistic?
6.Do the individuals who agreed upon the test objectives have adequate authority to define objectives of this type?
7.Are the test objectives consistent with the business objectives of the organization?
8.Are the test objectives consistent with any applicable policies, laws, and regulations?
9.Is there a sufficient number of test objectives for an application this size? (After experience in using this standard has been obtained, an approximate number of test objectives should be calculable for a system of any size; when first using this methodology, a minimum of 25 should be defined.)
10.Are the test objectives so voluminous that the level of objective is too low? (As a beginning guideline, over 100 test objectives is normally too many.)
11.Do the test objectives assign responsibility for the entire testing effort?
12.In establishing the test objectives, did the test team make reasonable tradeoffs between testing costs and reliability of the application system?
13.Do the testing objectives address the following constraints, if those constraints are important to the success of the system?
a)Volume
b)Performance
c)Capacity
d)Batch turnaround
e)On-line response
f)System availability
g)System access
h)Stored data security
i)Ease of operation
j)Ease of use
RESPONSE
ITEM / YES / NO / N/A / COMMENTS
WORKSHEET #5
RISK ANALYSIS
PROJECT RISK
1.Have all project risk factors been identified?
a) Project size
b) Developer experience levels
c) Project management experience level
d) Experience with technology being used
e) Criticality of deadlines (schedule)
f) Process maturity
g) Training
h) User involvement
i) Tester availability
2.Has each risk factor for the project been assessed?
3.Have the risk assessments been discussed or reviewed by the project team?
4.Did the project team reach agreement on the risk assessment?
5.Is the risk assessment reasonable?
SYSTEM RISK
6.Have all system risk factors been identified?
a) System/application complexity
b) System/application size
c) System/application frequency of use
d) System/application impact of failure
7.Has each system risk factor been assessed?
8.Has the system risk assessment been discussed or reviewed by the project team?
9.Did the project team reach agreement on the system risk assessment?
10.Is the system risk assessment reasonable?
RESPONSE
ITEM / YES / NO / N/A / COMMENTS
WORKSHEET #6
CONTINGENCIES
1.For each project and system risk, has a contingency been developed?
2.Have the contingencies been reviewed?
3.Are the contingencies reasonable? (i.e., Will they work in actual practice?)
4.Are the contingencies testable?
5.Have the contingencies been tested?
SUSPENSION CRITERIA
6.Have the normal suspension criteria been defined?
7.Have the abnormal suspension criteria been defined?
8.Have the suspension criteria been reviewed?
9.Are the suspension criteria reasonable?
10.For contracted/purchased software or systems, is the vendor in agreement with suspension criteria?
RESUMPTION PROCEDURES
11.Have the resumption procedures for normal suspension been defined?
12.Have the resumption procedures for abnormal suspension been defined?
13.Have the resumption requirements been reviewed?
14.Are the resumption requirements accurate?
15.Are the resumption requirements reasonable?
16.For contracted/purchased software or systems, is the vendor in agreement with resumption requirements?
RESPONSE
ITEM / YES / NO / N/A / COMMENTS
WORKSHEET #7
TEST PLANNING
1.Have test objectives been included in the system integration test plan?
2.Has the scope of testing been included in the system integration test plan?
3.Has the project overview been included in the system integration test plan?
4.Have definitions and acronyms been included in the system integration test plan?
5.Have assumptions been included in the system integration test plan?
6.Have constraints been included in the test plan?
7.Have test responsibilities been defined in the project plan for the vendor(s)?
8.Have test responsibilities been defined in the project plan for the customer?
9.Have the major tasks and deliverables been included in the system integration test plan?
10.Have the pass/fail criteria been defined in the system integration test plan?
11.Is the defect management approach described in the system integration test plan?
12.Have the project and system risks been assessed?
13.Have the project and system risks been included in the system integration test plan?
14.Have the project and system contingencies been included in the system integration test plan?
15.Have the normal suspension criteria been defined in the system integration test plan?
16.Have the abnormal suspension criteria been defined in the system integration test plan?
17.Have the resumption requirements been defined in the system integration test plan?
18.Has the system integration test plan been reviewed by the project team and the test team?
RESPONSE
ITEM / YES / NO / N/A / COMMENTS
WORKSHEET #8
TEST EXECUTION
1.Have requirements been verified?
2.Did the requirements meet the pass criteria?
3.Has unit testing been performed on all modules?
4.Did the software units pass unit testing?
5.Has unit-to-unit testing been performed on all modules?
6.Did the software units pass unit-to-unit testing?
7.Has system integration testing been performed on the system?
8.Did the system pass system integration testing?
9.Has model office testing been performed on the system?
10.Did the system pass model office testing?
RESPONSE
ITEM / YES / NO / N/A / COMMENTS
WORKSHEET #9
TEST EVALUATION
PASS/FAIL CRITERIA
1.Have the pass criteria been defined for the system?
2.Have the failure criteria been defined for the system?
DEFECT TRACKING
3.Is there a defect log in place for this project?
4.Is there a defect tracking tool in place?
5.Does the defect log capture the needed information?
a) Unique ID number
b) Defect Description
c) Date reported
d) Reported by
e) Test case ID
f) Where defect was discovered
g) Where defect was introduced
h) Defect type
i) Defect severity level
j) Defect priority level
k) Assigned to fix
l) Date assigned
m) Transfer date
n) Close date
o) Comments
6.Is there a defect administrator to track and route defects?
7.Has the defect administrator been charged with finding and eliminating redundant defect reports?
STATUS REPORTING
8.Has the status reporting frequency been established?
9.Has the status reporting routing list been defined?
10. Appendix A - Critical Success Factors