CIS224: Software Projects: Software Engineering and Research Methods

Verification and Validation

1. Software Testing Strategies

All software testing strategies share a number of generic characteristics:

- testing begins at the most detailed component level and works outward toward the integration of the entire software system;

- different testing techniques are appropriate at different testing levels;

- testing is conducted by the testing engineer, sometimes in cooperation with the developer;

- testing is different from debugging.

The software testing process may be organized according to the spiral model, and the individual testing levels can be viewed in that context:

- Unit testing: focuses verification effort on the smallest unit of software design, that is, on the individual component functions and procedures (a minimal example follows this list);

- Integration testing: techniques for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing;

- Validation testing: a series of tests performed after the software has been assembled and interfacing errors have been uncovered and corrected;

- System testing: when the software is to be incorporated into a larger system, a series of additional integration tests is performed.
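
As an illustration of unit testing, here is a minimal sketch in the pytest style; the function apply_discount and both test cases are hypothetical, chosen only to show a single component being exercised in isolation.

    import pytest

    def apply_discount(price: float, rate: float) -> float:
        # Hypothetical unit under test: reduce price by rate (0.0 to 1.0).
        if not 0.0 <= rate <= 1.0:
            raise ValueError("rate must be between 0 and 1")
        return price * (1.0 - rate)

    def test_apply_discount_normal_case():
        # Exercise the expected behaviour of the unit in isolation.
        assert apply_discount(100.0, 0.25) == 75.0

    def test_apply_discount_rejects_invalid_rate():
        # Exercise the error-handling path of the unit.
        with pytest.raises(ValueError):
            apply_discount(100.0, 1.5)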

The final question that arises in software testing is:

"When are we done testing- how do we know that we have tested enough?"

The answer to this question can be found using statistical modelling and software reliability theory, which provide models of software failures as a function of execution time.

One such example is the logarithmic Poisson execution-time model:

f(t) = ( 1 / p ) ln[ i0 p t + 1 ]

where: f(t) is the cumulative number of failures expected to occur once the software has been tested for an execution time t;

i0 is the initial software failure intensity (failures per unit time);

p is the exponential reduction in failure intensity as errors are uncovered and repaired.

The derivative of the cumulative number of failures gives the instantaneous failure intensity:

i(t) = i0 / ( i0 p t + 1 )
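
The two equations are easy to experiment with numerically. The following sketch uses the symbols from the notes; the parameter values are illustrative only, not taken from any real project.

    import math

    def cumulative_failures(t, i0, p):
        # f(t) = (1/p) ln(i0*p*t + 1): expected number of failures by time t.
        return (1.0 / p) * math.log(i0 * p * t + 1.0)

    def failure_intensity(t, i0, p):
        # i(t) = i0 / (i0*p*t + 1): the derivative of f(t), failures per unit time.
        return i0 / (i0 * p * t + 1.0)

    i0, p = 10.0, 0.05  # illustrative: 10 failures/hour initially
    for t in (0.0, 10.0, 50.0, 100.0):
        print(f"t={t:5.1f}  f(t)={cumulative_failures(t, i0, p):6.2f}  "
              f"i(t)={failure_intensity(t, i0, p):5.3f}")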

This equation for the instantaneous failure intensity makes it possible to predict how the failure intensity decreases as testing progresses.

The actual failure intensity measured during testing can be plotted against the predicted curve.

If the actual data gathered during testing and the logarithmic Poisson execution-time model agree reasonably well over a number of data points, then the model can be used to predict the total testing time required to achieve an acceptably low failure intensity, as in the sketch below.
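
A minimal sketch of that prediction step, assuming hypothetical measurements and using scipy's standard curve_fit routine; the observed intensities and the target value are invented for illustration.

    import numpy as np
    from scipy.optimize import curve_fit

    def intensity(t, i0, p):
        # Model curve i(t) = i0 / (i0*p*t + 1) to be fitted to the data.
        return i0 / (i0 * p * t + 1.0)

    # Hypothetical observations: execution time (hours) vs. failures per hour.
    t_obs = np.array([1.0, 5.0, 10.0, 20.0, 40.0, 80.0])
    i_obs = np.array([8.5, 4.9, 3.4, 1.9, 1.2, 0.6])

    (i0_hat, p_hat), _ = curve_fit(intensity, t_obs, i_obs, p0=(10.0, 0.05))

    # Invert i(t) = target for t: t = (i0/target - 1) / (i0 * p).
    target = 0.2  # acceptably low failure intensity, failures per hour
    t_needed = (i0_hat / target - 1.0) / (i0_hat * p_hat)
    print(f"estimated i0={i0_hat:.2f}, p={p_hat:.4f}; "
          f"testing time for {target}/hour: {t_needed:.1f} hours")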

2. Debugging

Debugging occurs as a consequence of successful testing.

When a test case uncovers an error, debugging is the process of locating and removing that error.

Debugging can be difficult for several reasons:

- the symptom may temporarily disappear when another error is corrected

- the symptom may be caused by non-errors, such as round-off inaccuracies (illustrated after this list)

- the symptom may be caused by human error that is not easily traced

- the symptom may be a result of timing problems

- the symptom may be due to causes that are distributed across a number of tasks

- it may be difficult to reproduce the initial conditions
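
As a small illustration of the round-off case above, binary floating point makes 0.1 + 0.2 differ from 0.3, which can look like a defect during debugging even though no code is wrong:

    a = 0.1 + 0.2
    print(a)                     # 0.30000000000000004, not 0.3
    print(a == 0.3)              # False: the "symptom" is not an error
    print(abs(a - 0.3) < 1e-9)   # True: compare floats with a tolerance instead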


3. The Software Testing Specification

I. Scope of Testing

II. Test Plan

A. Test Phases

B. Schedule

C. Overhead Software

III. Testing Procedures

A. Order of integration

B. Unit Tests

- description of tests for modules

- overhead software

- expected results

C. Test Environment

D. Test Case Data

IV. Test Results

V. Appendices

Software testing is part of a broader topic that is often referred to as verification and validation:

Verification refers to the set of activities that ensure that the software correctly implements a particular function:

Are we building the product right?

Validation refers to a different set of activities that ensure that the software that has been built is traceable to the customer requirements:

Are we building the right product?