ESE Module 6-1

Software Testing Concepts

Think back to the days when you learned how to create computer programs. You probably spent weeks or even months learning one or more programming languages. If you had good training, you spent additional time learning the mechanics of problem analysis and program design. But if your background is typical, we'll bet that you spent very little time learning about software testing.

For that reason, you might still be asking questions like these: What is software testing? What are our objectives as we conduct testing? Why can't we guarantee that we'll find all errors before we deliver software to an end user? And since we can't, just how effective is software testing at finding errors?

The answers to these and many other questions will enable us to introduce a series of basic testing concepts that are pivotal to a more detailed understanding of the testing strategies and tactics presented in later ESE modules.

Software Testing: The Big Picture

Although software testing is the most time- and resource-consuming of all software engineering activities, many software developers really don't understand it. Certainly, everyone recognizes the need to test, but managers and practitioners often miss the big picture. That's what we'll look at in the following reading.

Readings

The following excerpt has been adapted from A Manager's Guide to Software Engineering and answers a number of fundamental questions about software testing.

What are the objectives of testing?

Glen Myers [1] states a number of rules that can serve well as testing objectives:

1. Testing is a process of executing a program with the intent of finding an error.

2. A good test case is one that has a high probability of finding an as-yet undiscovered error.

3. A successful test is one that uncovers an as-yet undiscovered error.

The above objectives imply a dramatic change in viewpoint. They move counter to the commonly held view that a successful test is one in which no errors are found. Our objective is to design tests that systematically uncover different classes of errors, and to do so with a minimum amount of time and effort.
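
To make rule 2 concrete, here is a minimal Python sketch; the days_in_month function and its leap-year bug are hypothetical examples, not part of the excerpt. It contrasts a test that merely confirms the obvious case with one designed to probe a boundary where an undiscovered error is likely to hide.

    def days_in_month(month: int, year: int) -> int:
        # Hypothetical function under test; the leap-year logic is buggy.
        lengths = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
        days = lengths[month - 1]
        if month == 2 and year % 4 == 0:  # bug: ignores the century rule
            days = 29
        return days

    def test_typical_case():
        # Low probability of finding a new error: the obvious, comfortable case.
        assert days_in_month(1, 2024) == 31

    def test_century_boundary():
        # High probability of finding an error: 1900 was not a leap year.
        # This assertion fails, so by rule 3 it is a *successful* test.
        assert days_in_month(2, 1900) == 28

    test_typical_case()      # passes quietly
    test_century_boundary()  # raises AssertionError, exposing the bug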

If testing is conducted successfully (according to the objectives stated above), it will uncover errors in the software. As a secondary benefit, testing demonstrates that software functions appear to be working according to specification and that performance requirements appear to have been met. In addition, data collected as testing is conducted provide a good indication of software reliability and some indication of software quality as a whole. But there is one thing that testing cannot do: testing cannot show the absence of defects; it can only show that software defects are present. It is important to keep this (rather gloomy) statement in mind as testing is being conducted.

I often hear the phrase "verification and validation" used in conjunction with testing. What does this phrase mean?

Software testing is one element of a broader topic that is often referred to as verification and validation (V&V). Verification refers to the set of activities that ensures that software correctly implements a specific function. Validation refers to a different set of activities that ensures that the software that has been built is traceable to customer requirements. Boehm [2] states this another way:

Verification: "Are we building the product right?"

Validation: "Are we building the right product?"

The definition of V&V encompasses many of the activities that we have referred to as software quality assurance (SQA): formal technical reviews, quality and configuration audits, performance monitoring, simulation, feasibility study, documentation review, database review, algorithm analysis, development testing, qualification testing, and installation testing. Although testing plays an extremely important role in V&V, many other activities are also necessary.

How do we achieve the testing objectives that you've described?

Testing presents an interesting anomaly for the software engineer. During earlier software engineering tasks, the engineer attempts to build software from an abstract concept to a tangible implementation. Now comes testing. The engineer develops a strategy for conducting tests, and tactics for designing test data, that result in a series of test cases intended to demolish the software that has been built. In fact, testing is the one step in the software engineering process that could be viewed (psychologically, at least) as destructive rather than constructive.

Software developers are by their nature constructive people. Testing requires that the developer discard preconceived notions of the correctness of the software just developed and overcome a conflict of interest that occurs when errors are uncovered.

Testing Principles

The basic concepts and principles discussed in the video segments of this ESE module lead to one important conclusion: if we apply a set of basic principles, it is possible to plan testing systematically and to execute a series of test cases that have a high likelihood of finding errors.

Readings

In the reading that follows (adapted with permission from a posting by James Bach on the Internet newsgroup comp.software.eng), we'll explore a few more testing principles.

Principles of Software Testability

Software testability is simply how easily [a computer program] can be tested. Since testing is so profoundly difficult, it pays to know what can be done to streamline it. Sometimes programmers are willing to do things that will help the testing process. I'd like to have a checklist of possible design points, features, etc., to use in negotiating with them.

There are certainly metrics that could be used to measure testability in most of its aspects. Sometimes, testability is used to mean how adequately a particular set of tests will cover the product. It's also used by the military to mean how easily a tool can be checked and repaired in the field. Those two meanings are not the same as "software testability."

[The checklist that follows provides a set of characteristics that lead to testable software.]

Operability

“The better it works, the more efficiently it can be tested.”

  • The system has few bugs (bugs add analysis and reporting overhead to the process).
  • No bugs block the execution of tests.
  • The product evolves in functional stages (allows simultaneous development and testing).

Observability

“What you see is what you test.”

  • Distinct output is generated for each input.
  • System states and variables are visible or queryable during execution.
  • Past system states and variables are visible or queryable (e.g., transaction logs).
  • All factors affecting the output are visible.
  • Incorrect output is easily identified.
  • Internal errors are automatically detected through self-testing mechanisms (see the sketch after this list).
  • Internal errors are automatically reported.
  • Source code is accessible.
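
To illustrate the self-testing and automatic-reporting items above, here is a minimal Python sketch; the Account class and its balance invariant are hypothetical examples, not part of Bach's checklist.

    import logging

    logging.basicConfig(level=logging.ERROR)
    log = logging.getLogger("selfcheck")

    class Account:
        def __init__(self, balance: int = 0):
            self.balance = balance

        def withdraw(self, amount: int) -> None:
            self.balance -= amount
            self._self_check()  # internal errors are detected automatically...

        def _self_check(self) -> None:
            if self.balance < 0:
                # ...and automatically reported, so the tester sees the
                # failure at the point of corruption, not several calls later.
                log.error("invariant violated: balance=%d", self.balance)
                raise AssertionError(f"negative balance: {self.balance}")

    acct = Account(10)
    acct.withdraw(25)  # logs the violation and raises immediately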


[1] Myers, G., The Art of Software Testing, Wiley, 1979.

[2] Boehm, B., Software Engineering Economics, Prentice-Hall, 1981.

Exercise 6-1, Selective Testing

Review the flowchart.

1. How many independent program paths are there for the flowchart? Note: each new independent path will introduce program statements that have not been executed before.

2. Using colored markers or different types of lines, show each independent path on the flowchart.

3. Can you come up with a general rule that defines the number of independent paths in a flowchart? (Once you have a rule, compare it with the sketch below.)
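
After attempting question 3, you can compare your rule with the following minimal Python sketch, which encodes one well-known answer, the cyclomatic complexity number V(G) = E - N + 2; the flow graph in the code is a hypothetical illustration, not the flowchart from this exercise.

    # A flow graph as an edge list: an if/else decision followed by a loop.
    # This graph is hypothetical, NOT the exercise's flowchart.
    edges = [
        (1, 2), (1, 3),  # decision: two branches leave node 1
        (2, 4), (3, 4),  # the branches rejoin
        (4, 5), (5, 4),  # loop back-edge
        (5, 6),          # exit
    ]
    nodes = {n for edge in edges for n in edge}

    # Cyclomatic complexity: E - N + 2 independent paths.
    independent_paths = len(edges) - len(nodes) + 2
    print(independent_paths)  # 7 edges - 6 nodes + 2 = 3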

Controllability

"The better we can control the software, the more the testing can be automated and optimized."

  • All possible outputs can be generated through some combination of input.
  • All code is executable through some combination of input.
  • Software and hardware states and variables can be controlled directly by the test engineer (see the sketch after this list).
  • Input and output formats are consistent and structured.
  • Tests can be conveniently specified, automated, and reproduced.
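
As one illustration of direct control over states and variables, the following minimal Python sketch (issue_ticket and its inputs are hypothetical examples) injects the sources of nondeterminism, a clock and a random generator, so that a test can pin them and reproduce the output exactly.

    import random
    from datetime import datetime, timezone

    def issue_ticket(rng=None, now=None):
        # Nondeterminism is injected rather than hard-wired, so the test
        # engineer can control it directly.
        rng = rng or random.Random()             # production default
        now = now or datetime.now(timezone.utc)  # production default
        return f"{now:%Y%m%d}-{rng.randint(1000, 9999)}"

    # In a test, both "uncontrollable" inputs are pinned, making the
    # output fully specified and the test reproducible.
    fixed_now = datetime(2024, 1, 2, tzinfo=timezone.utc)
    assert issue_ticket(random.Random(42), fixed_now) == \
           issue_ticket(random.Random(42), fixed_now)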

Decomposability

"By controlling the scope of testing, we can more quickly

isolate problems and perform smarter retesting.

  • The software system is built from independent

modules.

  • Software modules can be tested independently.
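
Here is a minimal Python sketch of independent module testing; the convert function, RateSource interface, and StubRates stand-in are hypothetical examples. Because the module under test depends on an interface rather than a concrete collaborator, it can be exercised in isolation with a stub.

    from typing import Protocol

    class RateSource(Protocol):
        def rate(self, currency: str) -> float: ...

    def convert(amount: float, currency: str, source: RateSource) -> float:
        # Module under test: depends only on the RateSource interface.
        return amount * source.rate(currency)

    class StubRates:
        # Stands in for the real rate-fetching module during testing.
        def rate(self, currency: str) -> float:
            return {"EUR": 2.0}[currency]

    # convert is tested with no network, no database, and no dependency
    # on the real rates module being finished or bug-free.
    assert convert(10.0, "EUR", StubRates()) == 20.0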

Simplicity

"The less there is to test, the more quickly we can test it."

  • Functional simplicity (e.g., the feature set is the minimum necessary to meet requirements).
  • Structural simplicity (e.g., architecture is modularized to limit the propagation of faults).
  • Code simplicity (e.g., a coding standard is adopted for ease of inspection and maintenance).

Stability

"The fewer the changes, the fewer the disruptions to test-

ing.

  • Changes to the software are infrequent.
  • Changes to the software are controlled.
  • Changes to the software do not invalidate existing

tests.

  • The software recovers well from failures.

Understandability

"The more information we have, the smarter we will test."

  • The design is well understood.
  • Dependencies between internal, external, and shared components are well understood.

  • Changes to the design are communicated.
  • Technical documentation is instantly accessible.
  • Technical documentation is well organized.
  • Technical documentation is specific and detailed.
  • Technical documentation is accurate.


The preceding article is copyright © 1994 by James Bach.

Exercise 6-2, Testing Costs

A variation on the quality metric that we call defect removal efficiency is the metric called yield. Yield is a measure of error removal efficiency as the software engineering process moves from activity to activity. For example, assume that formal technical reviews (see ESE Module 7-3) are used to remove errors during the analysis activity. The yield achieved during analysis would be:

yield_analy = errors_analy / (errors_analy + errors_other)

where

errors_analy = errors found during analysis
errors_other = errors found during each step that follows (i.e., design, code, and each testing step)

Assume that there are 100 errors to be found throughout the entire software process through delivery, and that every error found is corrected without introducing new errors. (A tenuous assumption, but we'll accept it!)

The cost of finding and correcting an error during analysis and design is one monetary unit. Finding and correcting an error during coding is three times more expensive, and finding and correcting an error during testing is 10 times more expensive, than doing so during analysis and design. Finding and correcting a defect (i.e., an error that is released to the end user) is 100 times as expensive as finding and correcting the same error during analysis and design.

Assume that the yield for analysis is 40%; for design, 60%; for coding, 50%; and for testing, 85%. What is the overall cost of error removal, and how many errors become defects? (You can check your arithmetic with the sketch below.)
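
Once you have worked the exercise by hand, this minimal Python sketch of the same cost model can check your arithmetic; the phase table is read straight from the exercise, but the encoding itself is ours, not the module's.

    # Errors cascade from phase to phase; each phase removes its yield
    # fraction at its relative cost. Survivors become defects at 100x cost.
    phases = [  # (name, yield, cost multiplier vs. analysis/design)
        ("analysis", 0.40, 1),
        ("design",   0.60, 1),
        ("coding",   0.50, 3),
        ("testing",  0.85, 10),
    ]

    remaining = 100.0  # total errors introduced, per the exercise
    total_cost = 0.0
    for name, y, cost in phases:
        found = remaining * y
        total_cost += found * cost
        remaining -= found

    total_cost += remaining * 100  # surviving errors become defects
    print(f"defects: {remaining:.1f}, total cost: {total_cost:.0f} units")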


Post-Test, Module 6-1

This post-test has been designed to help you assess the degree to which you've absorbed the information presented in this ESE module.

Software Testing Concepts

1. Why can't we simply delay error removal until testing?
a. the error removal efficiency of testing is not high enough
b. testing methods aren't good enough
c. the testing strategy demands earlier removal
d. there's nothing wrong with waiting until testing

2. The primary objective of testing is:
a. to show that the program works
b. to provide a detailed indication of quality
c. to find errors
d. to protect the end user

3. A software engineer should:
a. perform all tests because he/she knows the program best
b. perform some tests, but share responsibility with an independent tester
c. not be involved in testing
d. none of the above

4. Exhaustive testing is difficult or impossible because:
a. it would take too long
b. there are too many path permutations in most programs
c. it would be difficult to develop test cases to test every path permutation
d. all of the above

5. Selective testing is used to test:
a. every program path
b. selected independent paths
c. selected paths, data structures, and variables
d. selected inputs and outputs

6. Tests that are conducted must be traceable to:
a. software requirements
b. performance constraints
c. hardware characteristics
d. all of the above

7. The test plan is developed:
a. just before testing commences
b. as soon as design is complete
c. as soon as requirements have been documented
d. during project planning

8. In general, a good test plan will assign testing effort in the following way:
a. evenly across all program modules
b. unevenly, focusing on the 40% that account for the majority of the errors
c. unevenly, focusing on the 20% that are likely to contain the majority of the errors
d. unevenly, with slightly more effort focused on those modules that are likely to be error-prone

9. The best way to test a large program is:
a. test major chunks of the program to uncover errors
b. test incrementally
c. test sequentially
d. none of the above

10. Software testability is:
a. how easily [a computer program] can be tested
b. how adequately a particular set of tests will cover the product
c. how easily a program can be checked and repaired in the field
d. all of the above
e. none of the above

11. Testing should:
a. never be conducted outside normal operating conditions of the software
b. always be conducted outside normal operating conditions of the software
c. sometimes be conducted outside normal operating conditions of the software
d. testing outside normal operating conditions is impossible

12. Testing and debugging are:
a. different words that mean essentially the same thing
b. related in that one uncovers errors and the other finds the cause
c. different activities that should always be performed by different people
d. both focused on symptoms