Wondering whether your QA process is on the right track? Curious about measuring the efficiency of your testing efforts? Eager to calculate the economic impact of your QA practices? We have compiled a concise summary of vital test metrics that will help you answer these questions. By leveraging these formulas, you can evaluate the success of your QA process and make informed decisions to improve its effectiveness. So, let's jump right in and explore the testing metrics that hold the key to unlocking your QA potential!
We have categorized all QA testing metrics into two main groups: fundamental testing metrics (also known as base testing metrics) and derived metrics.
Base Metrics, also referred to as fundamental QA metrics, are quantitative measurements collected by analysts throughout the development and execution process. These metrics provide absolute numbers that serve as indicators of various aspects of quality assurance. The list of key QA base metrics includes:
1. Number of test cases. The formula here is simple: just count the total number of test cases that have been created for testing a software application.
2. Number of passed, failed, and blocked test cases. This metric tracks the number of test cases that have passed, failed, or are currently blocked due to certain issues or dependencies.
3. Total number of defects and critical issues reported, accepted, rejected, and deferred. This metric provides an overview of the defect status in the software. It includes the total count of reported defects and how many have been accepted, rejected, or deferred for future resolution. It also highlights critical issues that require immediate attention.
4. Number of planned and actual test hours. This metric compares the estimated or planned test hours with the actual time spent on testing activities. It helps in assessing the accuracy of time estimation and resource allocation for testing.
5. Number of bugs discovered after shipping. This metric tracks the number of bugs or defects that are identified in the software after it has been released or deployed to the end users.
6. Number of critical issues reported. This metric focuses specifically on the count of critical issues or defects that have been reported during testing. Critical issues are those that have a severe impact on the functionality, performance, or security of the software.
7. Number of defects accepted. This metric indicates the count of defects that have been accepted by the development or quality assurance team as valid issues requiring resolution.
8. Number of defects rejected. This metric represents the count of reported defects that have been reviewed and rejected as either invalid or not meeting the criteria for fixing.
9. Number of defects deferred. This metric tracks the count of defects that have been postponed or deferred for resolution to a later stage or future releases of the software.
Derived software testing metrics are metrics that are derived or calculated based on other fundamental testing metrics. These metrics provide additional insights and analysis beyond the basic measurements, offering a more comprehensive understanding of the testing process. Derived metrics often involve calculations, formulas, or comparisons to assess specific aspects such as test efficiency, defect density, or test coverage. They help in evaluating the effectiveness, efficiency, and overall quality of the software testing efforts. Rest assured, you won't have to spend excessive time on calculations. With Zebrunner Automation Reporting, you can leverage various widgets that automatically calculate and display your key metrics.
10. Code coverage. Percentage of code covered by tests.
11. Test execution coverage percentage. This percentage value gives insight into the overall progress of your testing efforts, showing the completed tests in relation to those that are yet to be conducted.
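A commonly used formula for this metric:
Test execution coverage (%) = (number of test cases executed / total number of test cases) × 100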
12. Requirement coverage. Percentage of requirements covered by tests.
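It is typically calculated as:
Requirement coverage (%) = (number of requirements covered by at least one test case / total number of requirements) × 100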
13. Requirement defect density. This metric helps identify the relative risk associated with different requirements. While the test cases may appear satisfactory, the source of issues may lie within the requirement itself.
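One common way to calculate it:
Requirement defect density = number of defects traced back to requirements / total number of requirements
It can also be calculated for each individual requirement to compare their relative risk.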
14. Test case coverage. Percentage of test cases executed or passed.
15. Functional coverage. This metric quantifies the functions invoked during software testing, providing valuable insights into the overall coverage achieved.
16. Risk coverage. While there isn't a single standard formula for this metric, it can be calculated using the following steps:
- Identify the potential risks or constraints associated with the software being tested.
- Determine the number of identified risks or constraints that have been specifically addressed or accounted for in the testing process.
- Calculate the Risk Coverage metric using the formula:
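Based on the steps above, the calculation typically looks like this:
Risk coverage (%) = (number of identified risks addressed by tests / total number of identified risks) × 100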
17. Test case execution percentage. This metric provides insights into the progress and coverage of test cases by determining the percentage of test cases that have been executed.
18. Test case execution pending percentage. It is utilized to determine the percentage of test cases that are pending execution.
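A typical formula:
Test case execution pending (%) = (number of test cases not yet executed / total number of test cases) × 100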
19. Test case pass percentage. The test case pass percentage formula provides insights into the success rate of test case execution.
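It is usually calculated as:
Test case pass percentage = (number of test cases passed / total number of test cases executed) × 100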
20. Test case failure percentage. It provides insights into the failure rate of test case execution.
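The corresponding calculation:
Test case failure percentage = (number of test cases failed / total number of test cases executed) × 100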
21. Test execution efficiency. Note that this metric is different from the test execution rate, which measures the number of test cases executed within a specified period, usually per day or per week.
22. Test execution time. Time taken to execute test cases.
23. Defect density. Number of defects per size or unit of code.
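A common formula:
Defect density = total number of defects / size of the code or module (often expressed per 1,000 lines of code)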
24. Defect rejection rate. Percentage of defects rejected during review or testing.
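It is typically calculated as:
Defect rejection rate (%) = (number of defects rejected / total number of defects reported) × 100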
25. Defect leakage / Defects detected after release. Number of defects found in production.
26. Defect removal efficiency (DRE). The metric is used to assess the effectiveness of testing in identifying and removing defects from the system.
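A widely used formula:
DRE (%) = (defects found and fixed before release / (defects found before release + defects found after release)) × 100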
27. Test case efficiency. It measures how effective the test cases are in identifying defects.
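One common way to express it:
Test case efficiency (%) = (number of defects detected / number of test cases executed) × 100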
28. Test case productivity. Number of test cases executed per tester or per unit of time.
29. Test automation coverage. Percentage of test cases automated.
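It is typically calculated as:
Test automation coverage (%) = (number of automated test cases / total number of test cases) × 100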
30. Test execution effort. Effort required to execute test cases.
31. Defect detection percentage (DDP). The percentage of defects detected by the testing team compared to the total defects present in the software.
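A typical formula:
DDP (%) = (defects found by the testing team / (defects found by the testing team + defects found after release)) × 100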
32. Test case execution time. The average time taken to execute a test case.
33. Test case success rate. It indicates the reliability of the testing process.
34. Test execution productivity. The number of test cases executed per unit of time.
35. Defect fix cycle time. This metric helps measure the efficiency and speed of the defect resolution process. It provides insights into the turnaround time for fixing defects, allowing you to assess the effectiveness and timeliness of defect management in the testing process.
36. Test environment setup time. It measures the efficiency of environment preparation.
37. Test data preparation time. It helps assess the efficiency of data preparation.
38. Test case review efficiency. It measures the efficiency of the review process.
39. Test case reusability. The percentage of test cases that can be reused across different testing cycles or projects. It indicates the efficiency of test case design and maintenance.
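One common way to calculate it:
Test case reusability (%) = (number of reusable test cases / total number of test cases) × 100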
Metrics related to user satisfaction are typically measured through surveys, feedback ratings, or other feedback mechanisms, which are subjective and qualitative in nature. As a result, there are no specific formulas associated with most of them. However, here are a few approaches commonly used to gauge user satisfaction in software testing:
40. Customer-reported defects. This metric reflects the impact of defects on end-users and their perception of the software quality.
41. Customer feedback rating. Rating or satisfaction score provided by customers based on their experience with the software. It can be collected through surveys, feedback forms, or online reviews. This metric helps gauge overall customer satisfaction and provides insights into areas for improvement.
42. Customer satisfaction index (CSI). The percentage of satisfied customers, calculated by dividing the number of customers who indicate satisfaction by the total number of respondents and multiplying the result by 100.
43. Likert scale analysis. Assign numeric values to the ratings provided by users (e.g., 1 to 5), sum the scores across all responses, and divide the total by the maximum possible score to obtain the percentage of the total possible score achieved.
44. Net promoter score (NPS). Classifies customers into promoters, passives, or detractors based on their likelihood to recommend the product or service.
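The standard calculation:
NPS = percentage of promoters − percentage of detractors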
45. Usability success rate. The percentage of successful user interactions with the software.
46. Task completion rate. The percentage of tasks that users are able to successfully complete.
47. Time on task. The average time spent by users to complete a task.
48. Response time. The average response time of the software.
49. Throughput. The rate at which the software processes requests.
50. Error rate. The percentage of errors encountered during the processing of requests.
51. Latency. The latency of the software, calculated by subtracting the time spent on network-related activities from the total time taken.
52. Scalability. The scalability of the software, measured by dividing the total number of requests by the number of concurrent users.
53. Cost of quality (CoQ). The total cost incurred to ensure and maintain the quality of the software, including prevention, appraisal, and failure costs.
54. Return on investment (ROI) of testing. The financial return achieved by investing in testing activities, calculated by comparing the benefits gained from testing (such as defect prevention and customer satisfaction) to the costs incurred.
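A commonly used formula:
Testing ROI (%) = ((benefits gained from testing − cost of testing) / cost of testing) × 100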
55. Test efficiency ratio. The ratio of the time spent on testing activities to the total project time, providing insights into the efficiency of testing efforts within the project timeline.
56. Cost per defect. The average cost associated with identifying, reporting, and resolving each defect, calculated by dividing the total testing costs by the number of defects identified.
57. Test automation return on investment (ROI). The financial return achieved by investing in test automation efforts, calculated by comparing the benefits gained from test automation (such as time savings and improved accuracy) to the costs incurred.
58. Test case optimization savings. The cost savings achieved by optimizing test cases, reducing redundant or unnecessary test cases, and maximizing test coverage while minimizing effort and resources.
59. Defect escape rate. The percentage of defects that are discovered by end-users or customers after the software has been released, indicating the effectiveness of testing in capturing defects before deployment.
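It is usually calculated as:
Defect escape rate (%) = (number of defects found after release / total number of defects found) × 100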
60. Test case execution productivity. The number of test cases executed per unit of time, indicating the efficiency and productivity of the test team.
61. Test case execution efficiency. The ratio of the number of test cases executed to the number of defects found, measuring how effective the test cases are in identifying defects.
62. Defect detection rate. The rate at which defects are identified by the test team during the testing process.
63. Test case review effectiveness. The percentage of test cases reviewed that result in improvements or changes, indicating the effectiveness of the review process.
64. Test environment setup time. The time taken to set up the test environment for executing test cases, measuring the efficiency of environment preparation.
65. Test data preparation time. The time taken to prepare test data required for executing test cases, assessing the efficiency of data preparation.