Test Status Report
Test Status Reporting is formalized reporting on the status of the project from a testing point of view. It is part of the project communication plan, and the report presents quantitative information about the project.
Test Status Reporting is the component that keeps key project stakeholders informed of the critical aspects of the project's status. Good status reporting prevents surprises for project sponsors and stakeholders. Formal status reports should be provided as part of project steering committee meetings.
Purpose
The purpose of a Test Status Report is to provide an ongoing history of the project, which becomes very useful for tracking progress, evaluation, and review. Test Status Reports form a part of the Project Review Process both during and after completion of the project.
A Test Status Report should identify the key areas of importance that will assist the project stakeholders in determining the “state” of the software development and test efforts. It helps answer one of the questions most frequently asked of system testers: “Will the software be ready for release on the agreed-upon date?”
The Test Lead should maintain a careful balance between timeliness, accuracy, and consistency when preparing these reports, as the status of defects keeps changing.
Owner(s)
The person responsible for managing all test activities (the Test Manager or Test Lead) should be the owner of the test status report.
Who should use it?
The Test Status Report is a document used by the Project Manager to understand the ongoing history of the project for tracking progress, evaluation, and review.
The target audience for a Test Status Report can vary: anyone interested in project health can be part of the audience. This may be the project management team, the project steering committee, the customer, or other key stakeholders of the project.
(Figure-1: Test Status Report)
A brief description of the different attributes of the test status report (Figure-1) is presented below.
1. Project Name – The project name that you are reporting metrics on.
2. Duration – The reporting period
3. Report by – Owner / Author of the Report.
4. Report to – Target Audience.
5. Previous week's accomplishments – Activities performed since the last reporting date.
6. Missed accomplishments – Planned activities from the previous report that could not be completed.
7. Plans for this week – Tasks planned for the current reporting period.
8. Issues – Current risks and issues affecting the test effort should be listed here.
The role of a Test Lead is to give approximate, if not exact, reports on the status of the project. At the start of test execution, the Test Lead presumes that all the risks to be addressed by this phase of testing still exist. As testing progresses through the test plan, risks are cleared one by one as all the tests that address each risk pass. Halfway through the test plan, the tester can say, “we have run some tests, these risks have been addressed, here are the outstanding risks of release.”
Suppose testing continues, but the testers run out of time before the test plan is completed. The go live date approaches, and management wants to judge whether the system is acceptable. Although the testing has not finished, the tester can say, “we have run some tests, these risks have been addressed, here are the outstanding risks of release.” The tester can present exactly the same message throughout the test phase, except the proportion of risks addressed to those outstanding increases over time. How does this help?
Throughout the test execution phase, management always has enough information to make the release decision: either release with the known risks, or hold the release until the unacceptable outstanding risks are addressed. Most of the time, coding gets the bulk of the effort and testing is squeezed yet again; to avoid this, code and test activities should be defined as separate tasks in the project plan.
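To make this concrete, here is a minimal sketch in Python of how the “risks addressed vs. outstanding” view could be derived from a mapping of risks to the tests that cover them. The risk descriptions, test IDs, and the rule that a risk is addressed only when all of its tests pass are illustrative assumptions, not something prescribed by the article.

```python
# Hypothetical risk-to-test mapping; a risk counts as addressed only
# when every test that covers it has passed (a simplifying assumption).
risk_coverage = {
    "R1: payment fails under load": ["T101", "T102"],
    "R2: invalid login not rejected": ["T201"],
    "R3: report totals incorrect": ["T301", "T302", "T303"],
}

# Results so far; T303 has not been run yet.
test_results = {"T101": "pass", "T102": "pass", "T201": "pass",
                "T301": "pass", "T302": "fail"}

addressed, outstanding = [], []
for risk, tests in risk_coverage.items():
    if all(test_results.get(t) == "pass" for t in tests):
        addressed.append(risk)
    else:
        outstanding.append(risk)

print(f"Risks addressed ({len(addressed)}): {addressed}")
print(f"Outstanding risks of release ({len(outstanding)}): {outstanding}")
```

Run at any point in the test phase, this produces exactly the message described above, with the proportion of addressed risks growing over time.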
Most managers like the risk-based approach to testing because it gives them more visibility into the test process. Risk-based testing also gives management a strong incentive to let testers take a few days early in the project to conduct the risk assessment. Testers have long argued that they should be involved in projects earlier; if managers want risk-based test reporting, they must let testers get involved earlier. This must be a good thing for testers and our projects.
9. Cumulative Issues – Issues from previous reports that have not yet been addressed should be listed here.
10. Test Execution Details for previous week – Test details for the reporting period:

1.  Total effort spent on test execution

2.  Total number of functionalities tested during the reporting period.

11. Test Summary

1.  Total number of test cycles, number of defects found, and defect tracking details should be listed in this section.

2.  Test Coverage details in terms of functionalities tested and test cases executed should be detailed in this section.

12. Project Milestones – Important project schedule items from a testing point of view should be listed here.

The Test Status Report should also present the status through charts and graphs for easier and better understanding.
The following are a few charts that can be included in the test status report.
Functionality Coverage vs. Duration
Functionalities tested during different releases or different reporting periods can be shown in a graph to indicate the progress of test coverage.

(Figure-2: Functionality Coverage chart)
Defects reported vs. closed
This chart shows the find and fix counts for bugs reported during the reporting period, giving management an idea of the defect trends. A flattening cumulative open curve indicates stability, or at least the inability of the test team to find many more bugs with the current test system. A cumulative closed curve that converges with the open curve indicates quality: a resolution of the problems found by testing. Overall, this chart gives a snapshot of product quality as seen by Testing, as well as telling management about the bug find-and-fix processes.

(Figure-3: Defect status)
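As a minimal sketch of how the data behind such a chart could be derived, the Python below computes cumulative reported and closed counts from a defect log. The defect dates and weekly reporting periods are hypothetical; plotting the two series with any charting tool yields the open/closed curves described above.

```python
from datetime import date

# Hypothetical defect log: (date reported, date closed or None if still open).
defects = [
    (date(2023, 1, 2), date(2023, 1, 5)),
    (date(2023, 1, 3), None),
    (date(2023, 1, 4), date(2023, 1, 9)),
    (date(2023, 1, 8), None),
    (date(2023, 1, 10), date(2023, 1, 12)),
]

# Weekly reporting dates for the cumulative curves.
reporting_dates = [date(2023, 1, 1), date(2023, 1, 8), date(2023, 1, 15)]

for day in reporting_dates:
    cum_reported = sum(1 for opened, _ in defects if opened <= day)
    cum_closed = sum(1 for _, closed in defects if closed and closed <= day)
    print(f"{day}: cumulative reported={cum_reported}, cumulative closed={cum_closed}")
```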
Defect Location
This graph displays defect occurrence across different modules. It helps management assess the most problematic areas of the project. Based on this, a root-cause analysis can be conducted and appropriate corrective measures can be taken to reduce the risk.

(Figure-4: Defect Location)
Defect Classification
This chart groups defects by their classification. These details help in conducting a root-cause analysis and taking corrective measures to improve product quality.

(Figure-5: Defect Classification)
Test Case Progression Chart
This gives a clear indication of test progress against the test plan over time.

(Figure-6: Test Progression)
Severity wise Defect Distribution
This shows the distribution of defects by severity.

(Figure-7: Defect Distribution)
What should be the ideal frequency?
The frequency of the test status report can be decided based upon the project test plan.
For a small project where releases to the testing team happen every other day, the status report should be prepared daily to keep management informed about test progress.
For large projects where the application is released for testing once a month, the status report should be prepared once a week.
Periodic meetings should be held to discuss the project status, either verbally or based on the Test Status Report. The meetings should be frequent enough that progress can be reported against a number of milestones since the last meeting.

Part 2

Guide to Metrics Collection in Software Testing:
Metrics are defined as ‘standards of measurement’ used to gauge the effectiveness and efficiency of a particular activity within a project.
Base metrics constitute the raw data gathered by a Test Analyst throughout the testing effort. These metrics are used to provide project status reports to the Test Lead, Project Manager, Senior Management, and Client. Every project should track the following test metrics:
• Number of Test Cases Passed
• Number of Test Cases Failed
• Number of Test Cases Not Implemented during the particular test cycle
• Number of Test Cases Added / Deleted
• Number of Test Cases Re-executed
• Time taken for execution of the Test Cases
Calculated metrics:
Calculated metrics convert the base metrics data into more useful information. These metrics are generally the responsibility of the Test Lead and can be tracked at many different levels: by module lead, Technical Lead, Project Manager, and testers. The following calculated metrics are recommended for implementation in all test efforts; a short sketch after the list illustrates a few of the calculations:
• % Complete Test Cases
• % Defects Corrected in particular test cycle
• % Test Coverage against Use Cases
• % Re-Opened Defects
• % Test Cases Passed
• % Test Cases Failed
• % Test Effectiveness
• % Test Cases Blocked
• % Test Efficiency
• % Failures
• Defect Discovery Rate
• Defect Removal Rate and Cost
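Here is a minimal sketch of a few of these calculations in Python. The counts are hypothetical, and the percentage formulas are common definitions assumed here, since the article does not spell each one out:

```python
# Hypothetical base metrics for one test cycle.
passed, failed, blocked, total_planned = 180, 15, 5, 220
defects_found, defects_fixed, defects_reopened = 40, 32, 3

executed = passed + failed

pct_passed   = 100 * passed / executed           # % Test Cases Passed
pct_failed   = 100 * failed / executed           # % Test Cases Failed
pct_blocked  = 100 * blocked / total_planned     # % Test Cases Blocked
pct_complete = 100 * executed / total_planned    # % Complete Test Cases
pct_reopened = 100 * defects_reopened / defects_fixed  # % Re-Opened Defects
removal_rate = 100 * defects_fixed / defects_found     # Defect Removal Rate

print(f"% passed={pct_passed:.1f}  % failed={pct_failed:.1f}  "
      f"% blocked={pct_blocked:.1f}  % complete={pct_complete:.1f}")
print(f"% re-opened={pct_reopened:.1f}  defect removal rate={removal_rate:.1f}%")
```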
Measurements for Software testing
Following the V-model of software development, for every development process there needs to be a corresponding software testing process. Every step of this testing process needs to be measured so as to guarantee a quality product to the customer. At the same time, the measurements need to be easy to understand and implement.
1. Size of Software: Software size means the amount of functionality in an application. The complete project estimation for testing depends on determining the size of the software. Many methods are available to compute the size of the software; the following are a couple of them (a small sketch after the list illustrates function point counting):
• Function Point Analysis
• Use Cases Estimation Methodology
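As an illustration of the first method, here is a minimal Function Point Analysis sketch in Python. It computes unadjusted function points using the standard IFPUG average-complexity weights; treating every component as average complexity, and the component counts themselves, are simplifying assumptions:

```python
# Standard IFPUG average-complexity weights per component type.
AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

# Hypothetical component counts for an application.
counts = {
    "external_inputs": 12,
    "external_outputs": 8,
    "external_inquiries": 5,
    "internal_logical_files": 4,
    "external_interface_files": 2,
}

unadjusted_fp = sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())
print(f"Unadjusted Function Points: {unadjusted_fp}")  # 162 for these counts
```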
2. Requirements review: First, an SRS (Software Requirements Specification) should be obtained from the client. This SRS should be:
a) Complete: A complete requirements specification must precisely define all the real world situations that will be encountered and the capability's responses to them. It must not include situations that will not be encountered or unnecessary capability features.
b) Consistent: A consistent specification is one where there is no conflict between individual requirement statements that define the behaviour of essential capabilities and specified behavioural properties and constraints do not have an adverse impact on that behaviour.
c) Correct: For a requirements specification to be correct it must accurately and precisely identify the individual conditions and limitations of all situations that the desired capability will encounter and it must also define the capability's proper response to those situations.
d) Structured: Related requirements should be grouped together and the document should be logically structured.
e) Ranked: The requirements document should be ranked by the importance of each requirement.
f) Testable: In order for a requirement specification to be testable, it must be stated in such a manner that pass/fail or quantitative assessment criteria can be derived from the specification itself and/or referenced information.
g) Traceable: Each requirement stated within the SRS document must be uniquely identified to achieve traceability. Uniqueness is facilitated by the use of a consistent and logical scheme for assigning identification to each specification statement within the requirements document.
h) Unambiguous: A statement of a requirement is unambiguous if it can only be interpreted one way. This is perhaps the most difficult attribute to achieve using natural language. The use of weak phrases or poor sentence structure will open the specification statement to misunderstandings.
i) Validatable: To validate a requirements specification, all the project participants (managers, engineers, and customer representatives) must be able to understand, analyze, and accept or approve it.
j) Verifiable: A requirements specification requires review and analysis by technical and operational experts in the domain addressed by the requirements. In order to be verifiable, requirement specifications at one level of abstraction must be consistent with those at another level of abstraction.
Review efficiency can then be computed. Review efficiency is a metric that provides insight into the quality of the reviews and testing conducted.
Review efficiency = 100 * Total number of defects found by reviews / Total number of project defects
High values indicate an effective review process is implemented and defects are detected as soon as they are introduced.
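A minimal sketch of this calculation in Python, with hypothetical defect counts:

```python
def review_efficiency(defects_found_by_reviews: int, total_project_defects: int) -> float:
    """Review efficiency, as defined above, expressed as a percentage."""
    return 100 * defects_found_by_reviews / total_project_defects

# Hypothetical counts: 45 of 60 total project defects were caught in reviews.
print(review_efficiency(45, 60))  # 75.0 -> an effective review process
```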
3. Effectiveness of testing requirements: Measuring the effectiveness of testing requirements involves:
a) Specification of requirements and maintenance of a Requirements Traceability Matrix (RTM)
Specification of requirements must include:
a) SRS Objective
b) SRS Purpose
c) Interfaces
d) Functional Capabilities
e) Performance Levels
f) Data Structures/Elements
g) Safety
h) Reliability
i) Security/Privacy
j) Quality
k) Constraints & limitations
Once the requirements have been specified and reviewed, the next step is to update the Requirements Traceability Matrix (RTM). The RTM is an extremely important document. In its simplest form, it provides a way to determine whether all requirements are tested. However, the RTM can do much more. For example:

Requirement    | Estimated Tests Required | Type of Tests                                    | Automated/Manual        | Re-usable Test Cases
User Interface | 20                       | 5 Functional, 3 Positive, 3 Negative, 4 Boundary | 8 Automated, 12 Manual  | TC101
Command line   | 18                       | 10 Functional, 3 Positive, 5 Negative            | 10 Automated, 8 Manual  | TC201
Total          | 110                      |                                                  | 20 Automated, 40 Manual | 50

In this example, the RTM is used as a test planning tool to help determine how many tests are required, what types of tests are required, whether tests should be automated or manual, and whether any existing tests can be re-used. Using the RTM in this way helps ensure that the resulting tests are as effective as possible.
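A minimal sketch of this idea as a data structure, with a check that every requirement has at least one test mapped to it; the requirement names and test-case IDs are hypothetical:

```python
# The RTM in its simplest form: requirements mapped to their test cases.
rtm = {
    "User Interface": ["TC101", "TC102", "TC103"],
    "Command line":   ["TC201", "TC202"],
    "Reporting":      [],  # a requirement with no tests mapped yet
}

untested = [req for req, tests in rtm.items() if not tests]
if untested:
    print("Requirements with no test coverage:", untested)
else:
    print("All requirements are covered by at least one test.")
```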