System Test Plan

Binary Search Tree

(Version 0.32)

By

Lahary Ravuri

Mansi Modak

Rajesh Dorairajan

Rakesh Teckchandani

Yashwant Bandla

Instructor:

Dr. Jerry Gao, Ph.D.

© Copyright 2003 San Jose State University

TABLE OF CONTENTS

Revision History

Primes

Approvals

1. Overview

1.1 Introduction

1.2 Scope

1.3 Purpose of this Document

1.4 Test Criteria

1.5 Evaluation Criteria

2. Project Organization

2.1 Team Roles

2.2 Deliverables

2.3 Software Testing Process

2.4 Repository

3. System Resources

3.1 Hardware

3.2 Software

4. Test Strategy & Approach

4.1 Test Approach

4.2 Testing Tools & Techniques

4.3 Testing Environment

5. Defect Reporting

5.1 Defect Categorization

5.2 Severity of Defect

5.3 Type of Defect

5.4 Communication of Defects

6. Test Plans

6.1 Unit Testing Plan

6.2 System Testing Plan

6.3 Test Reports

7. References

Appendix

Revision History

No. / Date / Name / Modifications
1 / 02/25/03 / Rakesh / First Draft
2 / 03/03/03 / Rakesh / Revised Test Plan
3 / 03/05/03 / Rajesh / Final Test Document
4 / 03/12/03 / Rajesh / Test Criteria, Problem Report format, Updated Work Schedule
Primes
Development / Testing / Test Management
Lahary Ravuri / Rajesh Dorairajan / Rakesh Teckchandani
Mansi Modak / Yashwant Bandla
Approvals
Date / Name / Department / Signature
03/06/03 / Dr. Jerry Gao / SJSU, Computer Engineering

1. Overview

1.1 Introduction

This document sets forth a testing framework for the Binary Search Tree program built for the Software Testing class conducted by Dr. Gao. The project involves a graphical user interface, developed in Java, that communicates with the user and a binary tree repository to add, delete, or update integers in a sorted binary tree. The main features of the Tree Applet are:
·  Adding new nodes
·  Deleting nodes
·  Searching for a Node
·  Loading a Tree
·  Saving a Tree
·  Traversing a Tree in:
o  Pre-Order
o  In-Order
o  Post-Order
The tentative UI of the Binary Tree applet is shown in the figure (Fig. 1) below:
Fig. 1
It is subject to change as the project evolves.
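For reference, below is a minimal sketch of the binary search tree structure underlying these features (class and method names are our own illustration, not the final applet code; the applet's actual classes may differ):

    // Illustrative sketch only -- not the applet's final code.
    public class BinarySearchTree {

        private static class Node {
            int key;
            Node left, right;
            Node(int key) { this.key = key; }
        }

        private Node root;

        // Add a key, preserving the BST invariant:
        // smaller keys go left, larger or equal keys go right.
        public void insert(int key) {
            root = insert(root, key);
        }

        private Node insert(Node n, int key) {
            if (n == null) return new Node(key);
            if (key < n.key) n.left = insert(n.left, key);
            else             n.right = insert(n.right, key);
            return n;
        }

        // An in-order traversal of a BST visits the keys in sorted order.
        public String inOrder() {
            StringBuffer sb = new StringBuffer(); // JDK 1.4: no StringBuilder
            inOrder(root, sb);
            return sb.toString().trim();
        }

        private void inOrder(Node n, StringBuffer sb) {
            if (n == null) return;
            inOrder(n.left, sb);
            sb.append(n.key).append(' ');
            inOrder(n.right, sb);
        }
    }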

1.2 Scope

This document is a procedural guide for designing test cases and test documentation. To achieve our goal of 100% correct code, we shall employ both black-box and white-box testing techniques. Using these techniques will enable us to design test cases that validate the correctness of the system/module with respect to the requirements specification.

1.3 Purpose of this Document

The aim of the test plan is to achieve 100% correct code and to ensure that all functional and design requirements are implemented as specified in the documentation. This document provides a procedure for Unit and System Testing and identifies the documentation process and test methods for both.

1.4 Test Criteria

The following criteria will be applied to test the various modules of the Binary Search Tree Applet (a sketch showing how such criteria can be expressed as JUnit tests appears after these lists):
Adding new nodes:
·  Test the basic "Add" functionality, that is, the ability to add a node to the tree
·  Test adding a node to an empty tree
·  Test adding a node to a non-empty tree
·  Test to make sure that the node added is in the right position
·  Test for valid input from the user, that is, only integers can be added to the tree
Deleting nodes:
·  Test the basic "Delete" functionality, that is, the ability to delete a node from the tree
·  Test deleting a node from an empty tree
·  Test deleting a node from a non-empty tree
·  Test to make sure that if a parent node is deleted, its children are correctly re-attached
·  Test to make sure if the child node is deleted, the tree is still sorted
·  Test for valid input from the user, that is, only the nodes that are present in the tree can be deleted
Searching for a node:
·  Test the basic "Find" functionality, that is, the ability to find a node in the tree
·  Test to find a node from an empty tree
·  Test to find a node from a non-empty tree
·  Test for valid input from the user, that is, only the nodes that are present in the tree can be found
Clearing the Tree:
·  Test the basic "Clear" functionality, that is, the ability to clear the tree
Tree Traversal:
·  Test to make sure that the user can select a traversal from Pre-Order, In-Order, and Post-Order
·  Test to make sure that, depending on the chosen traversal order, the user can view the nodes in the selected order
·  Test the basic "Pre-Order" functionality
·  Test the basic "Post-Order" functionality
·  Test the basic "In-Order" functionality
Load:
·  Test "Load" functionality, that is, the ability to load a previously saved tree
Save:
·  Test "Save" functionality, that is, the ability to save a tree so that it can loaded at a later time

1.5 Evaluation Criteria

The results of each test will be compared with the pre-defined expected results documented in the Test Plan. If the actual result matches the expected result, the test case is marked as passed; if it does not, the test case is marked as failed and the actual result is logged in the Test Results. The source of a failure may be the applet under test, the test case itself, the expected results, or the data in the test environment. Test case failures will be logged regardless of the source of the failure.

2. Project Organization

2.1 Team Roles

Each team member has a specific role as outlined by the assignment specifications; however, the team intends to complete the project following the principles of extreme programming, with team members typically working in pairs. The team members and their roles in the project are listed below:
Team Member / Role
Lahary Ravuri / Developer
Mansi Modak / Developer
Rajesh Dorairajan / Test Engineer
Rakesh Teckchandani / Test Manager
Yashwant Bandla / Test Engineer

2.2 Deliverables

Listed below are the major deliverables along with their deadlines:
No / Deliverable / Owner / Time (Days) / Planned Date / Date Completed
1 / Test Plan / Test Manager / 14 / 03/06/03 / 03/01/03
2 / Program Functional specs / Developer / 21 / 03/13/03
3 / Program Source Code / Developer / 21 / 03/13/03
4 / System Testing Specification / Test Manager/ Engineer / 21 / 03/13/03
5 / Unit Testing Specifications / Test Manager/ Engineer / 21 / 03/13/03
6 / Results of Unit Testing / Developer/Test Engineer / 28 / 03/20/03
7 / System Testing Results / Test Engineer / 28 / 03/20/03
8 / Final Report / Everyone / 35 / 04/03/03
A detailed project schedule is provided in the form of a Gantt chart in the Appendix.

2.3 Software Testing Process

We shall adopt the following Release/Testing Process:
·  Release Process
o  Once a developer adds a feature or fixes a bug, he/she checks the code into the source code repository, currently located on our Yahoo! Groups website
o  A notification is sent to all team members by e-mail whenever new code is checked into the repository
·  Testing Process
o  The test engineer downloads a copy of the working code into his/her working environment
o  He/she then compiles the code using the software development environment available on his/her machine
o  He/she then runs the pre-defined test cases on the compiled code to verify that the software meets all normal, boundary, and illegal test specifications (a sketch of a command-line test driver follows this list)
o  Any new issues discovered during testing are reported to the bug repository, which is also located on our Yahoo! Groups website
·  Problem Resolution
o  Once a bug is reported to the repository, we will use the defect reporting mechanism described in Section 5 below to address and resolve the bugs reported by the test engineers
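For example, the pre-defined test cases could be collected into a single suite and run from the command line through a driver such as the following sketch (AllTests is an illustrative name; BSTAddTest is the sketch from Section 1.4, not an actual project class):

    import junit.framework.Test;
    import junit.framework.TestSuite;

    // Hypothetical driver a test engineer could run after compiling the
    // checked-out code; add one line per checked-in test class.
    public class AllTests {

        public static Test suite() {
            TestSuite suite = new TestSuite("Binary Search Tree tests");
            suite.addTestSuite(BSTAddTest.class);
            return suite;
        }

        public static void main(String[] args) {
            junit.textui.TestRunner.run(suite());
        }
    }

With junit.jar on the classpath, the whole suite then runs via a single java AllTests command.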

2.4 Repository

The repositories for the source code, the bug database, and all documents are located on our Yahoo! Groups website at:
http://groups.yahoo.com/group/cmpe287_sp2003

3. System Resources

3.1 Hardware

Following will be the Hardware resources:
·  Intel Pentium based PCs

3.2 Software

Following will be the Software resources:
·  Browser: IE (5.x, 6.x)/ Netscape (4.7, 7.0)
·  JDK 1.4.x/JRE 1.4.x
·  JUnit 3.8.1
·  Textpad

4. Test Strategy & Approach

4.1 Test Approach

This test plan defines the overall approach that we will adopt to develop an error-free Binary Tree Applet. This document shall be reviewed by all functional groups of our team and will be updated regularly before the beginning of the testing phase.
·  General Test Approach
o  Unit Testing
§  White-Box Testing – Basis Path Testing, Branch Testing, Conditional Testing
§  Black-Box Testing – Boundary Value Analysis, Equivalence Partitioning
o  System Testing
§  Black-Box Testing – Boundary Value Analysis, Equivalence Partitioning
o  Bug Reporting
o  Defect Categorization
o  Test case Maintenance
o  Problem resolution
Please refer to the Appendix for the test case and test result templates we plan to adopt during our testing.

4.2 Testing Tools & Techniques

The team will use JUnit (http://www.junit.org) as its testing framework. JUnit is open-source software, released under IBM's Common Public License Version 1.0 and hosted on SourceForge. It is a regression-testing framework used by developers who implement unit tests in Java, and it has been chosen because it is a well-established testing resource for extreme programming.
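A minimal JUnit 3.8 skeleton is sketched below. The framework calls setUp() before each testXxx() method, so every test starts from a fresh fixture; class and method names are our own illustration, and BinarySearchTree is the sketch class from Section 1.1:

    import junit.framework.TestCase;

    // Skeleton showing JUnit's fixture mechanism.
    public class BSTTraversalTest extends TestCase {

        private BinarySearchTree tree;

        // Runs before every test method, giving each test a known tree.
        protected void setUp() {
            tree = new BinarySearchTree();
            tree.insert(2);
            tree.insert(1);
            tree.insert(3);
        }

        public void testInOrderIsSorted() {
            assertEquals("1 2 3", tree.inOrder());
        }
    }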
4.3 Testing Environment
Priority / Browser/Environment / Version / Comments
1 / Internet Explorer / 6.0
2 / Internet Explorer / 5.0
3 / Netscape / 7.0
4 / Netscape / 4.7
5 / JRE Appletviewer / 1.4.0 / The applet will be tested with the Appletviewer only on the Windows platform

5. Defect Reporting

5.1 Defect Categorization
This section of the document outlines the defect tracking process being used. It establishes a standard framework for the identification and resolution of bugs in the program. Defect tracking allows for early detection and subsequent monitoring of problems that can affect a project's success. Defect tracking incorporates the following:
·  Identifying project issues
·  Logging and tracking individual defects
·  Reporting defect status
·  Ongoing resolution of defects
5.2 Severity of Defect
Each defect is assigned a severity code. The severity of a defect determines the assignment of resources for resolution and the actions to be taken if conflicts prevent resolution of the defect prior to its target date. The severity code is based on the defect's impact on the system/interface.
Defect severity is classified as "Critical", "High", "Medium", or "Low". The characteristics of a defect in each severity category are as follows:
Severity / Characteristic
Critical – 1 / Stops the test team from being able to move forward with any other test cases.
High – 2 / One or more of the requirements for the test case cannot be tested due to functional defects.
Medium – 3 / A workaround can be used to continue the test case, but specific functionality of the test case cannot be verified.
Low – 4 / Cosmetic in nature; all functionality of the test case can be verified.
5.3 Type of Defect
Each defect is assigned to one or more categories; a category is a collection of events tracked to identify the trend and impact of defects on existing and future work. The tester who discovered the defect assigns it to the appropriate category or categories and then informs the test manager, who assigns the defect to the system developer.
5.4 Communication of Defects
The process of logging and resolving defects is a multi-step process. The process starts with executing a test case and finding that the actual results do not match the expected results. The tester will then bring the defects to the attention of the team for evaluation of the risk and the priority of the defect.
The individual tester, with the aid of the above definitions, will then log the defect into the online defect tracking database, documenting the steps needed to reproduce the error. The defect will be assigned to the tester and to a developer to find a solution for the error; subsequently, the same tester will retest the issue during the formal system-testing event.
Once a tester and a developer have been assigned to the defect, the severity level determines the timeline for resolution. Any defect with a severity rating of 1 is dealt with immediately. Defects with a severity rating of 2 have a same-day turnaround. Severity 3 and 4 defects are handled at a weekly status meeting between the test team and the developers.
To ensure that all defects are being worked on and none slip through the cracks, a weekly meeting will be conducted between the developers and the test team.

6. Test Plans

6.1 Unit Testing Plan
The unit test cases shall be designed to verify the program's correctness. Both white-box and black-box testing will be used to test the modules and the procedures that support them.
White-box testing will be achieved using techniques such as:
·  Branch Testing
·  Basis-Path Testing
·  Conditional Testing
Black-box testing will be achieved using:
·  Boundary Value Analysis
·  Equivalence Partitioning
Test case designers shall generate cases that force execution through different paths and sequences (illustrated in the sketch after this list):
  1. Each decision statement in the program shall take on a true value and a false value at least once during testing.
  2. Each condition shall take on each possible outcome at least once during testing.
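To illustrate these rules, consider the following search routine (illustrative code, not the applet's actual source). Branch testing requires a test set in which each of the three decisions evaluates to both true and false at least once, e.g., searching an empty tree, searching for the root key, and searching for absent keys smaller and larger than the root:

    // Illustrative only -- not the applet's actual code.
    public class BSTSearch {

        static class Node {
            int key;
            Node left, right;
        }

        // Returns true if key occurs in the subtree rooted at n.
        static boolean contains(Node n, int key) {
            if (n == null)        // decision 1: true when the (sub)tree is empty
                return false;
            if (key == n.key)     // decision 2: true when the key is found here
                return true;
            if (key < n.key)      // decision 3: selects the left or right subtree
                return contains(n.left, key);
            return contains(n.right, key);
        }
    }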

6.2 System Testing Plan