STE™ Program – Testing Final Project

eBuy v2.0


System Test Results

Template

Revision Information

Ver. / Date / Author / Description / Approved by / Approval date
1.0 / Aug 31, 2006 / Yossi Avni / Initial Draft / Alon Linetzki / Aug 31, 2006

Table of Contents

1. Introduction

1.1. Purpose and Scope

1.1.1. Timing

1.1.2. Responsibilities

1.2. Terms and Phrases

1.3. Referenced documents

2. Executive Summary

2.1. Summary of Activities

2.2. System Evaluation

2.2.1. Summary of Execution Results

2.2.2. Defects

2.2.3. Conclusions

3. Summary of Activities

3.1. General

3.2. Time Table

3.3. Testing Scope - Variance from Plan

3.4. Other Testing Variances

3.4.1. Test Environments

3.4.2. Test Design

3.4.3. End-to-end Scenarios and Test Cases

3.4.4. Test Data

3.4.5. Automation Effort

3.4.6. Responsibilities

3.4.7. Resources

4. System Evaluation

5. Summary of Results

5.1. System Test Results

End-to-end Scenarios (ST) Test Execution Results

5.2. Integration Test Results

Test Case Execution Results

5.3. Unit Test Results

Test Case Execution Results

6. Defects

6.1. Critical Defects

6.2. Total Number of Defects

7. Conclusions and Limitations


1.  Introduction

1.1.  Purpose and Scope

<Describe the purpose and scope of this document. Specify the SUT, its version, the test phases covered in this document, and the target audience>

1.1.1.  Timing

<When this status report is issued: the phase being reported and the next phase. If a delivery is planned following the reported testing activity, specify it>

1.1.2.  Responsibilities

<The personnel who should provide input for this report, and which input is provided by each of them>

1.2.  Terms and Phrases

<Table defining all terms and phrases used in this document>

Term / Definition /

1.3.  Referenced documents

<List of documents related to this document. Each document in the list should be numbered and described by name, version, author and date>

[1] <Doc #1>

[2] <Doc #2>

2.  Executive Summary

<The purpose of this section is to provide the major facts and findings relevant to this status report. The target audience is mainly executives who seek a high-level view of the SUT testing status>

2.1.  Summary of Activities

<The main testing activities performed in the reported period, including planning, design and execution. Main facts such as the number of ST end-to-end scenarios and the number of test cases per scenario>

2.2.  System Evaluation

2.2.1.  Summary of Execution Results

<Evaluation of the system in terms of execution rate, pass rate and failure rate, given both for progression and regression, and exit criteria compliance>
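
<As an aid to filling in this subsection, the sketch below shows how the rates above can be derived from raw test case counts. The counts and the sample exit criterion are hypothetical illustrations only, not actual eBuy v2.0 results>

# Hypothetical cycle counts -- illustrative only, not actual eBuy v2.0 results.
planned = 120                  # test cases planned for the cycle
executed = 100                 # test cases actually run
passed = 85
failed = executed - passed     # 15

execution_rate = executed / planned   # ~0.83 -> 83% of planned TCs executed
pass_rate = passed / executed         # 0.85 -> 85% of executed TCs passed
failure_rate = failed / executed      # 0.15 -> 15% of executed TCs failed

# Sample exit criterion (assumed for illustration): at least 95% execution
# and no more than 5% failures.
meets_exit_criteria = execution_rate >= 0.95 and failure_rate <= 0.05

print(f"Execution {execution_rate:.0%}, pass {pass_rate:.0%}, "
      f"fail {failure_rate:.0%}, exit criteria met: {meets_exit_criteria}")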

2.2.2.  Defects

<How many defects were detected and how many of them are still open. Details should be presented by severity. Exit criteria compliance should also be presented>

2.2.3.  Conclusions

<What is the Go/No-Go decision for the next stage/delivery>

3.  Summary of Activities

3.1.  General

<Summarize, at a high level, how the testing project was managed: which tools were used, which status meetings were held and which reports were generated>

3.2.  Time Table

<Time table of the testing levels in the reported period>

Test Level / Planned Start Date / Planned End Date / Actual Start Date / Actual End Date /
Unit Test
Integration Test
System Test

3.3.  Testing Scope - Variance from Plan

<This section of the report should specify any variances/deviations from the planned version testing scope as described in the STP. The following topics should be covered: new functionality, external interfaces, regression, defects retesting and non-functional testing. For each variance, indicate what caused it and what its impact was on the quality of the testing effort>

3.4.  Other Testing Variances

<In this section, additional possible variance items are described. As in the previous section, reasons and impacts should appear for each variance. If a variance item is not relevant to this project, indicate: N/A. If there was no variance from plan, indicate: No variance from plan>

3.4.1.  Test Environments

3.4.2.  Test Design

3.4.3.  End-to-end Scenarios and Test Cases

3.4.4.  Test Data

3.4.5.  Automation Effort

3.4.6.  Responsibilities

3.4.7.  Resources

4.  System Evaluation

<This section presents the current evaluation of the SUT's features, based on the test results. The evaluation comprises two topics per feature: an overall evaluation (the possible values are described in the legend below) and an optional recommendation for enhancements or improvements>

<Feature evaluation legend:

·  C: Certified - Tests planned for these features passed successfully (possibly with minor defects),

·  P: Partially Passed - High or medium severity defects are open,

·  F: Failed,

·  W: Waiting - Testing for this item has not started yet,

·  N: Will Not be Tested - during the current version testing period,

·  I: Testing in progress.>

Feature / Evaluation / Recommendations /

5.  Summary of Results

5.1.  System Test Results

<In case a column specifies amount values, the last row will specify the total of all rows in that column; in case a column specifies % values, the last row will specify the average % of all rows in that column>

<The dashboard should summarize the achievements of the last test execution cycle>

<Cycle #4>

End-to-end Scenarios (ST) Test Execution Results

Scenario Name / Planned TCs (#) / Executed (%) / Passed (#) / Passed (% of Executed TCs) / Failed (#) / Failed (% of Executed TCs) / Passed (#) / Passed (% of Planned TCs) / Failed (#) / Failed (% of Planned TCs) / No Run (#) / No Run (% of Planned TCs)

Total / Average
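
<The two percentage bases in the table differ only in their denominator. The sketch below, using hypothetical counts for a single scenario row (not actual eBuy v2.0 results), illustrates the difference between "% of Executed TCs" and "% of Planned TCs">

# Hypothetical counts for one end-to-end scenario -- illustrative only.
planned_tcs = 20
executed_tcs = 16
passed = 12
failed = executed_tcs - passed        # 4
no_run = planned_tcs - executed_tcs   # 4

# Percentages out of Executed TCs (base = executed_tcs).
passed_pct_of_executed = 100 * passed / executed_tcs   # 75.0
failed_pct_of_executed = 100 * failed / executed_tcs   # 25.0

# Percentages out of Planned TCs (base = planned_tcs).
passed_pct_of_planned = 100 * passed / planned_tcs     # 60.0
failed_pct_of_planned = 100 * failed / planned_tcs     # 20.0
no_run_pct_of_planned = 100 * no_run / planned_tcs     # 20.0

print(passed_pct_of_executed, passed_pct_of_planned)   # 75.0 60.0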

5.2.  Integration Test Results

Test Case Execution Results

Scenario Name / Planned TCs (#) / Executed (%) / Passed (#) / Passed (% of Executed TCs) / Failed (#) / Failed (% of Executed TCs) / Passed (#) / Passed (% of Planned TCs) / Failed (#) / Failed (% of Planned TCs) / No Run (#) / No Run (% of Planned TCs)

Total / Average

5.3.  Unit Test Results

Test Case Execution Results

Feature Name / Planned TCs (#) / Executed (%) / Passed (#) / Passed (% of Executed TCs) / Failed (#) / Failed (% of Executed TCs) / Passed (#) / Passed (% of Planned TCs) / Failed (#) / Failed (% of Planned TCs) / No Run (#) / No Run (% of Planned TCs)

Total / Average

6.  Defects

6.1.  Critical Defects

<All defects with Severity="Critical". All open defects should be listed first in the table, followed by the rest of the defects, which are either "Closed" or "Cancelled">

Defect ID / Status / Reported on Date / Injected in Phase / Detected in Phase / Summary /

Total no. of open critical defects: <number>
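
<A minimal sketch of the ordering rule above, using hypothetical defect records rather than actual eBuy v2.0 defects: open defects are listed first, followed by closed and cancelled ones, and the open-count total is derived from the same records>

# Hypothetical critical-defect records -- illustrative only.
defects = [
    {"id": "D-102", "status": "Closed",    "summary": "..."},
    {"id": "D-117", "status": "Open",      "summary": "..."},
    {"id": "D-123", "status": "Cancelled", "summary": "..."},
    {"id": "D-131", "status": "Open",      "summary": "..."},
]

# Open defects first, then the rest; the sort is stable, so the original
# reporting order is kept within each group.
ordered = sorted(defects, key=lambda d: d["status"] != "Open")
open_count = sum(d["status"] == "Open" for d in defects)

for d in ordered:
    print(d["id"], d["status"])
print("Total no. of open critical defects:", open_count)   # 2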

6.2.  Total Number of Defects

Open Defects (Critical + High) / Open Defects (Medium + Low) / Closed Defects (Critical + High) / Closed Defects (Medium + Low) / Cancelled Defects / Total Defects /
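
<A minimal cross-check sketch, assuming the "Total Defects" cell is intended to equal the sum of open, closed and cancelled defects; the counts below are hypothetical, not actual eBuy v2.0 figures>

# Hypothetical defect counts -- illustrative only, not actual eBuy v2.0 figures.
open_critical_high = 2
open_medium_low = 5
closed_critical_high = 10
closed_medium_low = 14
cancelled = 3

# Assumed reconciliation rule: Total Defects = open + closed + cancelled.
total_defects = (open_critical_high + open_medium_low
                 + closed_critical_high + closed_medium_low
                 + cancelled)

print(total_defects)   # 34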

7.  Conclusions and Limitations

<This is the final section of the report. It can be used for reporting any limitations of the system, or conclusions that should be considered for further testing activities. The conclusions can refer to various issues, such as resources, environments, exit criteria, schedules, testing tools, automation, methods and techniques, or anything else worth mentioning>


© Copyright SELA Technology Center Ltd. 14-18 Baruch Hirsch St. Bnei Brak 51202 Israel