<Application Name>
Master Test Plan
[PROJECT NAME]
[PROJECT CODE]
Document Version: [x.x]
Date: [dd/mm/yy]
V 0.1
(To be completed by: Test Manager)
Version History
Review / Approvals
<List all approvers who are required to sign off on the project. Add or delete stakeholders as required. Obtain their approval of this Master Test Plan document.>
Role / Designation / Name / Signature / Date
Business Sponsor
IT Director
IT Project Manager
S&P, if applicable
Q/A manager, if applicable
Additional Distribution
Table of Contents
1 Introduction
1.1 Purpose and Scope
1.2 Out of Scope
1.3 Background
2 Test Strategy
2.1 Test Level Selection
2.2 Test Level Characteristics
2.3 Test Scope
2.4 References
2.5 Testing Tools
3 Roles and Responsibilities
4 Test Environment and Resources
4.1 Unit Test Environment
4.2 Environment Access
4.3 Test Data Acquisition
5 Test Assumptions and Risks
5.1 Assumptions
5.2 Risks
6 Test Reporting
7 Glossary
Template Usage Note: The template instructions are written in blue italics. This information should be replaced with relevant details of the project. In certain sections boilerplate information is provided that can be retained in the final document.
1 Introduction
This document provides the overall testing strategy and approach required to ensure that the requirements of the <project name> project are tested adequately, and that the required level of quality and reliability of the software deliverables is attained.
The Master Test Plan is initiated in the Analysis phase and completed by the end of the Design phase.
[The testing guidelines document should be referenced as an aid to completing this template]
1.1 Purpose and Scope
The purpose of this document is to communicate the activities related to the planning, staffing, management and execution of testing for the <project name> project.
This document focuses on:
· Overall testing strategy
· Levels of testing to be performed
· Entry and Exit criteria for each test level
· Supporting testing tools
· Roles and Responsibilities of supporting testing resources
· System resources
· Assumptions and risks
[Do not describe the scope of each test level here. Instead, focus on the overall broad scope, such as the project increment in scope or the applications/interfaces in scope. The scope of each test level is covered in section 2.3]
1.2 Out of Scope
[Identify any specific items that are out of scope for this project]
1.3 Background
This section provides a brief summary of background information for the project.
[Enter a brief description of the application (e.g. architecture, components, user interfaces, etc.). Include information such as major functions and features, and a brief history of the project.
Refer to other project documentation for pertinent information and reuse as needed.]
2 Test Strategy
This section addresses test level selection, test level characteristics, test scope, references and testing tools.
2.1 Test Level Selection
This section contains specific information relating to the selection of the test levels. Refer to the testing guidelines document for the objective and detailed description of each test level.
[Rationale for omitting a test level: justify the reason for not performing a test level, relating it to the technology, business requirements, test data acquisition approach, etc. (e.g. scheduling and resource constraints, unavailability of test data).]
Test Level / Applicable? / Rationale for omitting test level
Unit Test / Yes/No
Systems Integration Test* / Yes/No
End to End Test / Yes/No
User Acceptance Test / Yes/No
Other / Yes/No
* see guidelines document for relationship of SIT to system and integration testing
2.2 Test Level Characteristics
Test Level / Owner / Planned Iterations / Entry Criteria / Exit Criteria
Unit Test / 2
SIT / 4
UAT / 3
E2E Test / 3
[Other]
[Multiple iterations of testing should be planned. An example using three iterations is below:
Iteration 1: A comprehensive iteration that includes the execution of all the test cases.
Iteration 2: Includes all the test cases that failed during iteration 1 and have been corrected.
Iteration 3: The final iteration and should result in no or minimal defects being identified.
Additional iterations may be required beyond the number planned due to an unexpected failure rate in test results. Testing should continue until the exit criteria are met.
Examples of Entry Criteria:
· Testing environment is established
· Business Requirements are approved
· Adequate test data is available
· Level-specific test plans are completed
· Documented test cases and results from the prior test phase are available
· Test cases / test scripts are completed and reviewed
· Access rights for the testers are established
Examples of Exit Criteria:
· All items in scope have been tested
· All test cases (100%) are executed; failed cases have a satisfactory resolution
· All defects are documented and reported
· All severity 1 (Critical) and severity 2 (Major) defects are resolved and their fixes implemented
· Applicable sign-off on testing has been obtained
A sketch of how these exit criteria might be checked automatically follows this instruction block.]
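[Illustrative only: where test results and defects are tracked in a structured form, the example exit criteria above can be checked mechanically. The Python sketch below shows one possible check; the field names ("status", "severity") and status values are assumptions, not prescribed by this plan.]

# Illustrative sketch only: checks two of the example exit criteria against
# lists of test results and defects. Field names and values are assumptions.
def exit_criteria_met(test_results, defects):
    """True if all test cases were executed and no severity 1 (Critical)
    or severity 2 (Major) defects remain open."""
    all_executed = all(r["status"] in ("Passed", "Failed") for r in test_results)
    open_major = [d for d in defects
                  if d["severity"] in (1, 2) and d["status"] != "Closed"]
    return all_executed and not open_major

# Example usage with made-up data:
results = [{"id": "TC-01", "status": "Passed"}, {"id": "TC-02", "status": "Failed"}]
defects = [{"id": "D-01", "severity": 2, "status": "Closed"}]
print(exit_criteria_met(results, defects))  # True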
2.3 Test Scope
[Identify the requirements, components, modules, etc. to be included in, and those excluded from, each test level]
Test Level / In Scope / Out of Scope
Unit Testing / Application processing module; Critical Admin functions only (to be identified in Design Phase) / Admin functions (non-critical); Reports Testing
SIT / All Online Module functions, including reports / Security testing; Inbound/Outbound FTPs; Maestro Trails
UAT / All Online functions; Batch processes; Security / Inbound/Outbound FTPs
E2E Testing / Maestro Trails; Inbound/Outbound Interfaces /
2.4 References
[Documents or other artifacts that were referenced for test planning]
Document / Link
BRD
SRS
Traceability Matrix
2.5 Testing Tools
This section identifies the testing tools and any required support planned for each test level. [Add the name and supporting requirements of each tool for each test level. Tool supporting needs may include creation of projects, storage, user accounts, etc.]
Test Level / Tool(s) / Tool Supporting Needs
Unit
Systems Integration
Stress & Performance
Regression
Q/A
End to End
User Acceptance
Disaster Recovery
Other
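[Illustrative only: as a companion to the Unit row above, the sketch below shows the general shape of an automated unit test that a typical unit testing tool would execute. It assumes a Python/pytest-style framework and a hypothetical apply_payment function; the actual tools and languages are project-specific and should be recorded in the table above.]

# Illustrative sketch only: a unit test of a hypothetical apply_payment function,
# in the style executed by common unit testing frameworks (here, pytest).
import pytest

def apply_payment(balance, amount):
    # Hypothetical function under test: reduce a balance by a payment amount.
    if amount <= 0:
        raise ValueError("payment amount must be positive")
    return balance - amount

def test_apply_payment_reduces_balance():
    # Positive test: valid input produces the expected result.
    assert apply_payment(100.0, 40.0) == 60.0

def test_apply_payment_rejects_non_positive_amount():
    # Negative test: invalid input is rejected (see Glossary, Negative Testing).
    with pytest.raises(ValueError):
        apply_payment(100.0, 0)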
3 Roles and Responsibilities
Role / Group / Responsibilities / Name
Test Manager (or PM or Project Lead) / Provides testing management oversight. Responsibilities:
· provide technical direction
· acquire appropriate resources
· provide management reporting
Test Team (Build Team or BSA) / Executes the tests.
Responsibilities:
· execute tests
· log results
· recover from errors
· document change requests
QA Team / Executes the tests.
Responsibilities:
· decide on the scope of the Q/A testing in agreement with Project Manager
· execute tests
· log results
· recover from errors
· document change requests
ET / Ensures test environment and assets are managed and maintained.
Responsibilities:
· administer test management system
· install and manage access to test systems
Database Administrator, Database Manager / Ensures test data (database) environment and assets are managed and maintained.
Responsibilities:
· administer test data (database)
S&P Team / · Performs S&P testing
Business Partner / · Develops UAT test cases
· Performs UAT
· Reviews test results
Other
Other
4 Test Environment and Resources
The following table is used to identify the system resources (hardware, software, etc.) required for the test environment needed to support each test level.
[An instance of this table can be repeated for each test level. Delete or add items as appropriate.]
4.1 Unit Test Environment
[Complete with the available information, although the specific elements of the test system within the environment may not be fully known at this time. Where appropriate, it is recommended that the test system simulate the production environment, scaling down access volumes and database sizes.]
Resource / Type
Database Server
—Network or Subnet / TBD
—Server Name / TBD
—Database Name / TBD
Client Test PCs
—Include special configuration requirements / TBD
Test Repository
—Network or Subnet / TBD
—Server Name / TBD
Test Development PCs / TBD
Other
4.2 Environment Access
Environment / Group / Reason for Access
SIT / QA Team / Performs its own functional testing
Development Team / Creates and runs scripts for integration
UAT / Business Partner / Approval of testing, resolution of issues
Business Users / Test case development and execution
Development Team / Results validation, issue analysis
4.3 Test Data Acquisition
The following table is used to identify the approach for acquiring and securing the test data to be used for each test level.
[The table below can be repeated for each test level. Delete or add items as appropriate.]
Source of Test Data / Extraction Approach / Type of Test Data (input or pre-existing) / Security Controls
Will be created according to test plans / N/A / Input / N/A
Provided from a copy of backup production files / Develop program to extract selected records / Pre-existing / Will use data masking to ensure sensitive personal data cannot be identified
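[Illustrative only: the second row above relies on data masking of production extracts. The sketch below shows one common approach (replacing direct identifiers with salted one-way hashes); the column names and the choice of hashing are assumptions for illustration, not a mandated design.]

# Illustrative sketch only: masks direct identifiers in a CSV production extract
# so records stay usable for testing while individuals cannot be identified.
# Column names and salt handling are assumptions.
import csv
import hashlib

SENSITIVE_FIELDS = ["customer_name", "ssn", "email"]  # hypothetical columns
SALT = "project-specific-secret"                      # managed outside the script

def mask(value):
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:12]

def mask_extract(in_path, out_path):
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            for field in SENSITIVE_FIELDS:
                if row.get(field):
                    row[field] = mask(row[field])
            writer.writerow(row)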
5 Test Assumptions and Risks
5.1 Assumptions
This section lists assumptions that are specific to the test planning.
# / Assumption
1 / e.g. Testing environments will be stable
2 / e.g. Users will commit adequate resources to UAT
3 / Other
4 / Other
5.2 Risks
The following risks to the testing plan have been identified, along with contingency plans to mitigate their impact on the project. The impact (or severity) of a risk is based on how the project would be affected if the risk were triggered. The trigger is the milestone or event that would cause the risk to become an issue to be addressed.
# / Risk / Impact / Trigger / Mitigation/Contingency Plan
1 / e.g. Delay in availability of test environment / Delay project / 30-day delay / Obtain approval to delay the project
2 / e.g. Problems with testing tools / Reduce quantity of tests / Vendor notification of a problem / Develop manual workaround
3 / Other
4 / Other
6 Test Reporting
Standard data capture:
The following data fields will be collected for all test cases and results (one possible record structure is sketched after this list):
§ Test Case ID
§ Description
§ Status
§ Defect ID
§ Detected On
§ Severity Level
§ Detected By
§ Closed On
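[Illustrative only: one possible record structure for the standard data capture fields listed above is sketched below. The exact storage (test management tool, spreadsheet, database) is left to the project; the types and optional fields shown are assumptions.]

# Illustrative sketch only: a record structure for the standard data capture
# fields. Types and optionality are assumptions, not a prescribed schema.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class TestCaseResult:
    test_case_id: str
    description: str
    status: str                            # e.g. "Passed", "Failed", "Blocked"
    defect_id: Optional[str] = None        # populated only when a defect is raised
    detected_on: Optional[date] = None
    severity_level: Optional[int] = None   # e.g. 1 = Critical ... 4 = Minor
    detected_by: Optional[str] = None
    closed_on: Optional[date] = None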
The following measurements will be collected and reported:
# / Metrics / Measurement Data / Frequency / Responsible / Reported To
7 Glossary
Item / Description
Black box testing / Focus is on the external attributes and behavior of the software. Such testing examines the software from the user perspective. UAT is the classical example of this type of testing.
Defect / A defect is a flaw, error or omission identified during the testing process. Defects are typically classified by level of severity ranging from non-critical to “show stopper”
Negative Testing (destructive) / Testing attempts to prove that the software can be broken using invalid or erroneous input conditions. Both defined and undefined error conditions should be generated.
Positive Testing / Testing attempts to prove that the software satisfies the requirements
S&P testing / Stress and Performance testing
Test Case / A test case is a specific test designed to verify a particular condition or requirement. It identifies input data with predicted results and describes the testing objective.
Test Script / Provides the step-by-step procedures comprising the actions to be taken and the verification of the results
White-box testing / Tests the software with knowledge of internal data structures and logical flow at the source code level. Unit testing is the classical example of this type of testing.
<Company Name> confidential – internal use only