Many companies are moving towards test automation to stay competitive in their market segment. Indeed, test automation allows you to identify and fix issues quickly, speed up the release cycle significantly, and reduce the routine workload on the QA team. However, when it comes to measuring the impact of test automation on a project and determining its value, many managers get confused. In this article, we will explain how to measure the impact of test automation effectively, share the best metrics for doing so, and explain why some classic testing metrics are not suitable for evaluating test automation.
The key problem with measuring the impact of test automation is that many people simply don't know which metrics can help them. Almost every popular quantitative testing metric has a downside: it can reward quantity at the expense of quality. If you want to see how automation affects your organization, your test automation metrics should measure value at a granular level.
First, before choosing the right metrics, you need to look at the big picture and understand exactly how and where automation can positively affect your project. To determine this, answer the following questions:
Second, select and adapt metrics for your project that will turn the answers to those questions into numbers.
As an example, consider the following case. ICDC, the IS subsidiary of the French group Caisse des Dépôts responsible for its test repositories, made the transition to test automation a few years ago and evaluated its success. The first results:
You can also measure the success of your own company's transition to test automation. We have compiled a selection of good metrics for evaluating its impact. These metrics reduce the ambiguity of measurement and provide more information to support decision making. Remember that each metric needs to be adapted to your specific project.
The base metrics count the time spent and the total number of tests completed, along with their results: passed, failed, inconclusive. These metrics are fundamental and should be tracked at all times, because they show how the automated testing process is going. If an automated project leads, for example, to a decrease in the number of tests performed per day or week, then most likely something has gone wrong.
Basic metrics such as time saved and percentage of defects found are easy to calculate, and organizations can easily use them as the basis for setting KPIs. Charts of completed-test counts by category over time are less immediately readable, but they let you dive deeper into the results of automation.
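As a minimal sketch of what tracking these base metrics can look like (the run log format here is an assumption, not any specific tool's output), daily counts per outcome can be aggregated from raw results and then plotted as a trend:

```python
from collections import Counter
from datetime import date

# Hypothetical run log: (run date, outcome) pairs exported from your test runner.
run_log = [
    (date(2023, 5, 1), "passed"), (date(2023, 5, 1), "failed"),
    (date(2023, 5, 1), "inconclusive"), (date(2023, 5, 2), "passed"),
    (date(2023, 5, 2), "passed"), (date(2023, 5, 2), "failed"),
]

# Count outcomes per day: the raw material for a trend chart.
daily = {}
for day, outcome in run_log:
    daily.setdefault(day, Counter())[outcome] += 1

for day in sorted(daily):
    counts = daily[day]
    total = sum(counts.values())
    print(f"{day}: total={total}, passed={counts['passed']}, "
          f"failed={counts['failed']}, inconclusive={counts['inconclusive']}, "
          f"pass rate={counts['passed'] / total:.0%}")
```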
One of the main advantages of automated tests is that they save manual testing effort. Automated tests take over routine testing tasks, while manual testers focus on higher-priority tasks, test modules that are difficult to automate, and explore the application using critical thinking.
In general, testing time saved is a good metric. For example, if automated tests reduce manual testing effort from two days to two hours in a two-week sprint, that's a big achievement for the organization, and one that translates directly into money saved.
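To make that concrete, here is a back-of-the-envelope calculation using the numbers above; the hourly rate and the number of sprints per year are illustrative assumptions:

```python
# Illustrative numbers: manual regression effort before and after automation.
hours_before = 2 * 8      # two working days of manual testing per sprint
hours_after = 2           # two hours per sprint after automation
sprints_per_year = 26     # two-week sprints
hourly_rate = 50          # assumed fully loaded cost of a tester, in USD

saved_hours_per_sprint = hours_before - hours_after
saved_hours_per_year = saved_hours_per_sprint * sprints_per_year
saved_cost_per_year = saved_hours_per_year * hourly_rate

print(f"Saved per sprint: {saved_hours_per_sprint} h")
print(f"Saved per year:   {saved_hours_per_year} h (~${saved_cost_per_year:,})")
```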
Flaky tests can fail for many reasons, and they are a common problem in many teams. All the effort invested in an automation environment is wasted if the team spends more time maintaining automated tests than running them to find defects in the software. As a result, teams lose trust in automated tests and eventually decide to return to manual testing of that functionality.
To avoid this problem, it's better to start with a small number of automated tests, run them constantly, and separate the stable tests from the weak ones, as in the sketch below.
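One simple way to do that separation is to track each test's outcomes across recent runs of the same code revision and quarantine tests whose results flip too often. A minimal sketch, where the history format and the 20% threshold are assumptions to adapt to your project:

```python
# Outcome history per test across recent runs of the same code revision.
history = {
    "test_login":    ["pass"] * 20,
    "test_checkout": ["pass", "fail", "pass", "pass", "fail", "pass"] * 3,
    "test_search":   ["pass"] * 18 + ["fail"] * 2,
}

FLAKY_THRESHOLD = 0.20  # assumed: more than 20% result flips marks a test as flaky

def flip_rate(outcomes):
    """Fraction of consecutive runs where the result changed."""
    flips = sum(a != b for a, b in zip(outcomes, outcomes[1:]))
    return flips / max(len(outcomes) - 1, 1)

for name, outcomes in history.items():
    rate = flip_rate(outcomes)
    status = "QUARANTINE" if rate > FLAKY_THRESHOLD else "stable"
    print(f"{name}: flip rate {rate:.0%} -> {status}")
```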
You need to prioritize testing based on risk. Risk includes unexpected events that may affect the business, defect-prone application areas, and any past or future events that may influence the project.
A good approach is to measure the success of automation in terms of risk reduction: rank risks from high to low priority, automate test cases in order of risk priority, and track how much risk has been reduced thanks to automated tests.
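A minimal sketch of this ranking, assuming risk is scored as likelihood times impact (the application areas and scores are illustrative):

```python
# Illustrative risk register: (area, likelihood 1-5, impact 1-5, automated?)
risks = [
    ("payments",      5, 5, True),
    ("user profile",  2, 2, False),
    ("checkout flow", 4, 5, True),
    ("search",        3, 2, False),
]

# Rank risks from high to low priority by likelihood * impact.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

covered = sum(l * i for _, l, i, automated in ranked if automated)
total = sum(l * i for _, l, i, _ in ranked)

for area, l, i, automated in ranked:
    print(f"{area}: risk score {l * i}, automated={automated}")
print(f"Share of risk covered by automated tests: {covered / total:.0%}")
```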
Equivalent Manual Test Effort (EMTE). This is a classic concept in software quality assurance: EMTE is the effort it would have taken to execute your automated tests manually. Expressing automated runs as equivalent manual effort shows how much testing capacity automation adds to the project.
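A minimal sketch of the calculation, with illustrative suite names, manual execution times, and run counts:

```python
# Illustrative suite: (name, manual execution time in minutes, automated runs this month)
suite = [
    ("smoke suite",      45, 60),   # would take 45 min by hand; ran 60 times
    ("regression suite", 480, 8),
]

# EMTE: the manual effort it would have taken to execute these runs by hand.
emte_minutes = sum(manual_min * runs for _, manual_min, runs in suite)
print(f"EMTE this month: {emte_minutes / 60:.0f} hours of equivalent manual testing")
```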
Another useful metric answers the question: how likely is it that your tests will find a bug?
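One pragmatic way to estimate this probability, which is our assumption rather than a standard formula, is to take the defects that appeared in areas your automated suite covers and compute what fraction the suite actually caught (numbers illustrative):

```python
# Historical data (illustrative): defects that existed in code the automated
# suite ran against, and how many of them the suite actually flagged.
defects_in_scope = 40      # defects later confirmed in areas the suite covers
defects_caught = 31        # of those, defects the automated tests detected

detection_probability = defects_caught / defects_in_scope
print(f"Estimated probability of detecting a defect: {detection_probability:.0%}")
```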
This is a subjective metric, but a valid one. If automation is implemented successfully, automated tests don't need special support, anyone can run them easily, the test code is simple and understandable, and you can easily view the test results, logs, dashboards, and screenshots.
Let's look at classic testing metrics and see why they are not suitable for assessing the impact of test automation.
The number of tests. A large number of tests doesn't mean the automation team works effectively. This classic metric shows how many tests a team writes, and it implies that more tests guarantee the best possible product quality. They don't. Sometimes it is better to combine several tests into one: the work gets done better, but the test count drops. A purely quantitative metric is primitive, and it can motivate people to create tests solely to improve the numbers.
The number of test runs also neglects the quality aspect. For example, a team may run smoke tests continuously while a large part of the application is not tested at all.
Defects found is a metric that assumes the more defects you find, the more effective your testing team. That's not quite right: if the developers are working well, the number of defects should decrease as the code matures. Fewer and fewer defects means the code is becoming stable.
Code coverage is a metric that project managers like: it measures how much of the code is exercised by tests. However, chasing the coveted 100% coverage is often impractical and tempts engineers to write tests designed only to execute code rather than to catch defects.
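To illustrate why, here is a hypothetical test that yields 100% line coverage of a function while asserting nothing, so it can never fail on a real defect:

```python
def apply_discount(price: float, percent: float) -> float:
    return price * (1 - percent / 100)

def test_apply_discount():
    # Executes every line, so coverage reports 100%,
    # but with no assertion it passes even if the math is wrong.
    apply_discount(100.0, 20)
```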
Automated test coverage. Many companies want to automate as many tests as possible, but this is not as good as it might seem at first glance. Some tests are simply not suitable for automation; exploratory and usability testing, for example, rely on human judgment.
Now it's time to decide where to collect, calculate, and visualize the selected metrics.
Zebrunner Testing Platform is a test automation platform that allows you to combine all processes in one place and get high-quality reports and analytics.
You get the full picture of test automation. Numerous dashboards and widget templates let you track any metric, from the simplest (for example, pass rate over time) to complex ones like automation growth forecasts. Measuring team productivity and ROI is also possible, and you can easily add custom widgets if your project has special needs.