Highlights: Foundations of Software Testing (ISTQB)

1. Fundamentals of testing

risk impact + likelihood

-error (mistake): a human action that produces an incorrect result

-defect (bug, fault): a flaw in a component or system that can cause it to fail

-failure: a deviation of the component or system from its expected result

Figure 1.1 Types of error and defect

Figure 1.2 Cost of defects (Boehm)

Quality: the degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations

-validation: is this the right specification?

-verification: is the system correct to specification?

root cause analysis: tracing failures back to their root cause

Testing principles:

-testing is context dependent: testing is done differently in different contexts

-Exhaustive testing (complete testing): a test approach in which the test suite comprises all combinations of input values and preconditions

→ impossible in practice

use risk and priorities to focus testing efforts

-Start testing as early as possible

example testing goals:

  • finding defects
  • providing information about the level of quality
  • preventing defects

not always fixing defects → sometimes just measuring the quality of the software

regression testing: testing to ensure nothing has changed that should not have changed

-defect clustering: most of the defects are found in a small number of modules

-pesticide paradox: the same tests repeated over and over stop finding new defects → review and revise test cases regularly

-testing shows the presence of defects (it cannot prove that there are no defects)

-absence-of-errors fallacy: finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations

Fundamental test process:

-Test planning and control

  • planning:
  • determine the scope and risks and identify the objectives of testing
  • determine the test approach
  • implement the test policy and/or test strategy
  • determine the required test resources (people, environment, PCs)
  • schedule all tasks and activities
  • determine the exit criteria (coverage criteria → when is testing finished)
  • monitoring and control:
  • measure and analyze the results of reviews and testing
  • monitor and document progress, test coverage and exit criteria
  • provide information on testing
  • initiate corrective actions
  • make decisions (stop/continue test, release/retain software)

-Test analysis and design

general testing objectives are transformed into tangible test conditions and test designs

  • review the test basis
  • identify test conditions
  • design the tests
  • evaluate testability of the requirements and system
  • design the test environment set-up and identify any required infrastructure and tools

-Test implementation and execution

turn test conditions into test cases → built from the designs

  • implementation
  • develop and prioritize test cases and create test data
  • create test suites for test execution
  • implement and verify environment
  • execution
  • execute test suites and individual test cases
  • log outcome, record identities and versions of software
  • compare actual results with expected results
  • report discrepancies as incidents
  • repeat test activities as a result of action taken for each discrepancy: confirmation testing / re-testing

-Evaluating exit criteria and reporting

  • check test logs against exit criteria
  • assess if more tests are needed or if the exit criteria specified should be changed
  • write a test summary report

-Test closure activities

  • check deliverables
  • finalize and archive testware (test scripts, environments…)
  • hand over testware to maintenance organization
  • evaluate the testing process

A tester should be independent: you can't test your own work.

Levels of independence:

-tests by person who wrote the item under test

-tests by another person within the same team

-tests by person from a different organizational group (independent test team)

-tests by a person from a different organization or company

as a tester:

-communicate findings in a neutral way without criticizing

-explain that by knowing about this now we can work around or fix it so the delivered system is better

-start with collaboration rather than battles

2. Testing throughout the software life cycle

Waterfall model → V-model

4 test levels:

-Component testing

-Integration testing (interfaces)

-System testing

-Acceptance testing

V-model

iterative life cycles/ incremental development

iterative development proceeds in phases (phase 1, phase 2, phase 3, …); each phase runs through define, develop, build, test, implement

→ prototyping, RAD, RUP, agile development

-RAD: parallel development of functions and subsequent integration

-early validation of risks and rapid response to changing customer requirements → update your plans

-encourage active customer feedback

-Agile development

→ extreme programming (XP)

on-site customer for feedback, test script before coding, automated tests

Test levels:

Component testing

unit, module, program testing

→ stubs + drivers to replace missing software

test-driven development: prepare and automate test cases before coding

→ iterative
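
A minimal sketch of a component test in this style (Python with the built-in unittest; all names are invented): the test is written first, TDD-style, and a stub replaces a dependency that is not built yet.

import unittest

def vat_rate_stub(country):
    # stub: stands in for the real, not-yet-available rate component
    return 0.21

def gross_price(net, country, vat_rate=vat_rate_stub):
    # component under test: price including VAT, via the (stubbed) dependency
    return round(net * (1 + vat_rate(country)), 2)

class GrossPriceTest(unittest.TestCase):
    # in TDD this test is prepared and automated before the code it tests
    def test_adds_vat(self):
        self.assertEqual(gross_price(100.0, "NL"), 121.0)

if __name__ == "__main__":
    unittest.main()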

Integration testing

tests interfaces between components

-component IT: between components

-system IT: between systems

incremental integration testing:

-top-down: start at main menu (use stubs)

-bottom-up (use drivers)

-functional incremental
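
A minimal sketch of bottom-up integration (Python, invented components): the low-level component is real, and a driver (test code) calls it because the higher-level caller does not exist yet.

def parse_amount(text):
    # low-level component, already built and component-tested
    return int(text.strip().replace(",", ""))

def driver():
    # driver: replaces the missing higher-level caller and
    # exercises the low-level component through its interface
    assert parse_amount(" 1,200 ") == 1200
    assert parse_amount("80") == 80

driver()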

System testing

functional and non-functional (performance, reliability)

Acceptance testing

can the system be released? what are the risks?

as-if production environment

establish confidence in the system

-contract acceptance testing: contract's requirements for custom-developed software

-compliance acceptance testing/ regulation acceptance testing: performed against regulations: governmental, legal, safety

+ Alpha testing: developer's site, potential users observed

+ Beta testing: user's site

Test types:

Functional testing

→ from requirement specifications, functional specifications, use cases

→ black box testing

-requirements-based

-business-process-based

Non-functional testing

quality characteristics

performance, load, stress, usability, maintainability, reliability, portability

-functionality: suitability, accuracy, security, interoperability, compliance

-reliability: maturity (robustness), fault-tolerance, recoverability, compliance

-usability: understandability, learnability, operability, attractiveness, compliance

-efficiency: performance, resource utilization, compliance

-maintainability: analyzability, changeability, co-existence, replaceability, compliance

Structural testing

system architecture, white box

code coverage (% of the code tested)

Confirmation (Re-testing) and regression testing

Maintenance testing: testing during life cycle of system

-testing the changes

-regression tests

Impact analysis: what parts of the system need regression testing

-planned modifications

  • perfective modifications (wishes)
  • adaptive modifications (to environmental changes (hardware))
  • corrective modifications (defects)

-ad-hoc corrective modifications

3. Static techniques

-informal review (not based on procedure)

-formal review (documented procedures and requirements)

phases of a formal review:

1. planning

request for review by author to moderator (or inspection leader)

moderator performs entry check and defines exit criteria

possible entry criteria:

-short check, not many defects

-references are available

roles within a review (focuses):

-focus on higher-level documents: requirements

-focus on standards: internal consistency, clarity,…

-focus on related documents at the same level

-focus on usage: testability or maintainability

2. kick-off

get everybody on the same wavelength

3. preparation

participants work individually

4. review meeting

logging phase/ discussion phase/ decision phase

severity classes: critical/ major/ minor

5. rework

author will improve the document

6. follow-up

roles:

-the moderator: leads the review process

-the author

-the scribe: records each defect

-the reviewers

-the manager

Types of review

-Walkthrough: author guides participants through the document

-Technical review: discussion meeting focused on achieving consensus about the technical content of a document

-Inspection: most formal: document prepared and checked thoroughly by reviewers before meeting

static analysis: examination of requirements, design, code

→ tools: depth of nesting, cyclomatic number, control flow graph

code metrics:

-cyclomatic complexity: the number of decisions in a program: the number of binary decision statements + 1 (worked example below)

-control flow: count the nodes and edges of the control flow graph
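
A small worked example (Python, hypothetical function): two binary decision statements, so cyclomatic complexity = 2 + 1 = 3.

def classify(temperature, humidity):
    label = "normal"
    if temperature > 30:        # binary decision 1
        label = "hot"
    if humidity > 80:           # binary decision 2
        label = label + " and humid"
    return label

# cyclomatic complexity = number of binary decision statements + 1 = 2 + 1 = 3
# (equivalently, from the control flow graph: edges - nodes + 2)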

code structure:

-control flow structure

-data flow structure

-data structure

4. Test design techniques

Categories of test design techniques:

-static testing techniques

-specification-based (black-box) testing techniques

-structure-based (white-box) testing techniques

-experience-based testing techniques

Specification based or black-box techniques

-Equivalence partitioning and boundary value analysis

partition the inputs into groups that are expected to behave the same, and test the boundaries between the partitions
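
A minimal sketch (Python; the validator and its 18..65 range are invented): one test per equivalence partition, plus boundary value tests on each side of every boundary.

def is_working_age(age):
    # hypothetical function under test: the valid partition is 18..65
    return 18 <= age <= 65

# equivalence partitioning: one representative value per partition
assert is_working_age(40)          # valid partition 18..65
assert not is_working_age(10)      # invalid partition below 18
assert not is_working_age(80)      # invalid partition above 65

# boundary value analysis: values on each side of every boundary
assert not is_working_age(17)
assert is_working_age(18)
assert is_working_age(65)
assert not is_working_age(66)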

-Decision table testing

deals with combinations of conditions
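
A sketch of decision table testing (Python, invented discount rules): each column (rule) of the table is one combination of conditions and becomes one test case.

def discount(is_member, big_order):
    # hypothetical implementation under test
    if is_member and big_order:
        return 15
    if is_member:
        return 10
    if big_order:
        return 5
    return 0

# decision table: conditions (is_member, big_order) -> expected discount %
rules = {
    (True, True): 15,     # rule 1
    (True, False): 10,    # rule 2
    (False, True): 5,     # rule 3
    (False, False): 0,    # rule 4
}

# exercising every rule once = 100% decision table coverage
for (is_member, big_order), expected in rules.items():
    assert discount(is_member, big_order) == expected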

-State transition testing

→ state diagram
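
A minimal sketch (Python, invented two-state switch): the valid transitions come from the state diagram; the tests exercise every single valid transition (0-switch coverage) and one invalid transition.

# valid transitions of a hypothetical state diagram: (state, event) -> new state
TRANSITIONS = {
    ("off", "push"): "on",
    ("on", "push"): "off",
}

def step(state, event):
    if (state, event) not in TRANSITIONS:
        raise ValueError("invalid transition")   # not allowed by the diagram
    return TRANSITIONS[(state, event)]

# 0-switch coverage: every single valid transition exercised once
assert step("off", "push") == "on"
assert step("on", "push") == "off"

# invalid transition: no "reset" event is defined in state "off"
try:
    step("off", "reset")
    raise AssertionError("the invalid transition should have been rejected")
except ValueError:
    pass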

-Use case testing

Structure based or white-box techniques

→ measure coverage: the amount of testing performed

coverage = (number of coverage items exercised / total number of coverage items) × 100%

100% coverage is not 100% tested

-EP: percentage of equivalence partitions exercised

-BVA: percentage of boundaries exercised

-Decision tables: percentage of business rules or columns tested

-State transition testing

  • Percentage of states visited
  • Percentage of (valid) transitions exercised (Chow's 0-switch coverage)
  • Percentage of pairs of valid transitions exercised ('transition pairs' or Chow's 1-switch coverage)
  • Percentage of invalid transitions exercised

statement coverage = (number of statements exercised / total number of statements) × 100%

READ A
READ B
IF A > B THEN C = 0
ENDIF

→ 100% statement coverage: 1 test case with A greater than B

1. READ A (6 statements)
2. READ B
3. C = A + 2 x B
4. IF C > 50 THEN
5. PRINT 'Large C'
6. ENDIF

Test 1_1: A=2, B=3 → C=8, exercises statements 1 to 4 and 6
Test 1_2: A=0, B=25 → C=50, exercises statements 1 to 4 and 6
Test 1_3: A=47, B=1 → C=49, exercises statements 1 to 4 and 6

→ 5 of the 6 statements exercised = 83% statement coverage

Test 1_4: A=20, B=25 → C=70, also exercises statement 5

→ makes 100% statement coverage

decision coverage = (number of decision outcomes exercised / total number of decision outcomes) × 100% (a decision is e.g. an IF statement)

stronger than statement coverage

100% decision coverage guarantees 100% statement coverage

first example above: one extra test where 'IF A > B' is false (A <= B) gives 100% decision coverage

Other:

-Branch coverage: measures coverage of both conditional and unconditional branches (100% decision coverage = 100% branch coverage)

-Linear code sequence and jump (LCSAJ) coverage

-Condition Coverage

-Multiple condition coverage (condition combination coverage)

-Condition determination coverage (multiple/modified condition decision coverage, MC/DC): coverage of all conditions that can affect or determine the decision outcome (worked example after this list)

-Path coverage (difficult with loops) → independent path segment coverage
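
A small worked example of condition determination coverage (Python, invented decision 'badge_ok AND pin_ok'): each condition is shown to determine the outcome on its own.

def grant_access(badge_ok, pin_ok):
    # hypothetical decision with two conditions
    return badge_ok and pin_ok

assert grant_access(True, True) is True      # baseline: decision outcome True
assert grant_access(False, True) is False    # only badge_ok flipped -> outcome flipped
assert grant_access(True, False) is False    # only pin_ok flipped -> outcome flipped
# 3 tests suffice, versus the 4 (2^2) needed for multiple condition coverage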

Experience based techniques:

-error guessing: experienced tester

-exploratory testing: gain information while testing for more tests

Choosing a test technique:

Internal factors:

-models used

-tester knowledge/ experience

-likely defects

-test objective

-documentation

-life cycle model (sequential model: formal technique, iterative model: exploratory testing)

External factors:

-risk

-customer/ contractual requirements

-type of system

-regulatory requirements

-time and budget

5. Test management

test plan: project plan for testing work to be done

test approaches or strategies:

-analytical (risk based, requirements based)

→ analytical techniques during the requirements and design stages

-model based

-methodical

-process- or standard- compliant (IEEE…)

-dynamic (exploratory testing)

-consultative or directed: ask users or developers what to test

-regression-averse: automate tests

Factors in choosing a strategy:

-risks

-skills of the testers

-objectives: satisfy needs of stakeholders

-regulations

-product

-business

incident management:

defect detection percentage (DDP) = defects found by testers / (defects found by testers + defects found in the field)
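
For example (invented numbers): if testers find 90 defects before release and users find another 10 in the field, DDP = 90 / (90 + 10) = 90%.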

6. Tool support for testing

Test management tools:

-management of tests

-scheduling of tests to be executed

-management of testing activities

-interface to other tools (test execution, incident management, requirement management, configuration management)

-traceability of tests, test results, defects

-logging test results

-preparing progress reports based on metrics

Requirements management tools:

storing requirements, checking consistency

Configuration management tools:

version management

tool support for static testing:

-review process support tools

review planning and tracking, communication support, collaborative reviews

-static analysis tools (developers)

understand code structure

calculate metrics (cyclomatic complexity)

-modeling tools (developers)

validate models of system or software

tool support for test specification:

-test design tools

generate test input values from requirements, code, GUI

generating expected results

-test data preparation tools

select data from database, make anonymous, sort
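
A minimal sketch of test data preparation (Python, invented records): identifying fields are made anonymous with a stable hash, the test-relevant values are kept, and the rows are sorted.

import hashlib

production_rows = [("alice@example.com", 1200), ("bob@example.com", 80)]

def prepare_test_data(rows):
    # anonymize: replace the e-mail with a stable short hash; keep the amount; sort
    return sorted(
        (hashlib.sha256(email.encode()).hexdigest()[:8], amount)
        for email, amount in rows
    )

print(prepare_test_data(production_rows))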

tool support for test execution and logging:

-test execution tools

run tests, capture/playback tools

-test harness/ unit test framework tools (developers)

test harness: stubs and drivers

-test comparators

-coverage measurement tools (developer)

-security tools: trying to break into a system

tool support for performance and monitoring:

-dynamic analysis tools (developer)

analyze what happens behind the scenes while software runs

-performance- load- and stress-testing tools

-monitoring tools: continuously keep track of the status of the system

potential benefits of using tools:

-reduction of repetitive work

-greater consistency and repeatability

-objective assessment

-ease of access to information about tests or testing

risks of using tools:

-unrealistic expectations for the tool

-underestimating time, cost, effort for initial introduction of a tool

-underestimating time, cost, effort needed to achieve significant and continuing benefits from the tool

-underestimating effort required to maintain test assets generated by the tool

-over-reliance on the tool

by Esther Martijn