How to measure test automation impact on your project: review of good and ambiguous metrics

Nadezhda Yushkevich
Updated on Jun 21, 2024

Many companies are moving towards test automation to stay competitive in their market segment. Indeed, test automation allows you to identify and fix issues quickly, speed up the release cycle significantly, and reduce the routine workload on the QA team. However, when it comes to measuring the impact of test automation on a project and determining its value, many managers get confused. In this article, we will explain how to measure the impact of test automation effectively, share the best metrics for doing so, and explain why some classic testing metrics are not suitable for evaluating test automation.

Test automation impact: how to start calculating

The key problem with measuring the impact of test automation is that many teams simply do not know which metrics can help them. Almost any popular quantitative testing metric has a downside: it can be gamed by trading quality for quantity. If you want to see how automation affects your organization, test automation metrics should measure value at a granular level.

At the first stage, before choosing the right metrics for your project, you need to look at the big picture and understand exactly how and where automation can positively affect it. To determine this, answer the following questions:

  • How fast can a release happen? Release acceleration is a basic goal for most product companies. Compare pre-automation release cycles with current ones to get an idea of how automation affects timelines.
  • Is the quality of the software improving? Quality is a subjective concept. An engineer may think that an app has outstanding features, while an end user may receive those same features without enthusiasm. To capture the user perspective, you will need bug reports, surveys, and other feedback.
  • What are the automation costs? The initial costs of automation are expected to be high, much higher than the costs of testing before the transition begins. The benefit is that, with a successful automation process, long-term costs should fall. The key is to be able to compare the cost of automating the project with the savings it generates. After all, if automation does not deliver a high ROI, it makes no sense.
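To make the cost question concrete, here is a minimal ROI sketch in Python. All figures and parameter names are illustrative assumptions, not data from any real project:

```python
def automation_roi(tooling_cost, development_hours, maintenance_hours,
                   hourly_rate, manual_hours_saved_per_cycle, cycles):
    """Rough ROI of test automation over a number of release cycles."""
    investment = tooling_cost + (development_hours + maintenance_hours) * hourly_rate
    savings = manual_hours_saved_per_cycle * hourly_rate * cycles
    return (savings - investment) / investment

# Hypothetical figures: $5,000 for tooling, 200 h of test development and
# 50 h of maintenance at $40/h, saving 60 manual hours per release cycle.
roi = automation_roi(5000, 200, 50, 40, 60, 12)
print(f"ROI after 12 cycles: {roi:.0%}")  # ROI after 12 cycles: 92%
```

A negative result means the automation has not yet paid for itself; tracking this number over time shows when the break-even point arrives.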

At the second stage, you need to select and adapt metrics for your project that will allow you to get answers in numbers to the above questions.

Good automation testing metrics

As an example, consider the following case. ICDC, the IS subsidiary responsible for the test repositories of the French group Caisse des Dépôts, transitioned to test automation a few years ago and evaluated its success. Here are the first results:

  • 70% of the test steps in the resulting scripts were generated directly from preexisting keywords. 
  • Productivity increased fourfold after the transition to automation. 
  • The development cycle was shortened.

You can also measure the success of your company's transition to test automation. We have compiled a selection of good metrics for evaluating the impact of test automation. These metrics reduce ambiguity in measuring the value of test automation and provide more information to assist in decision-making. Remember, each of the metrics needs to be adapted to a specific project.

Number of completed tests

The base metrics count the time taken and the total number of tests completed, along with their results: passed, failed, or inconclusive. These metrics are fundamental and should be tracked at all times. They let you evaluate how the automated testing process is going: if, after automation, the number of tests performed per day or week drops, then most likely something has gone wrong.

Basic metrics such as time saved and the percentage of defects found are easy to calculate, and organizations can readily use them as the basis for KPIs. Time-series charts of completed tests broken down by result category are less immediately readable, but they allow you to dive deeper into the results of automation.
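The counting itself can be sketched as follows, assuming test results arrive as (date, outcome) pairs pulled from your reporting tool (the field names are illustrative):

```python
from collections import Counter

def summarize_runs(results):
    """Tally test outcomes per day: passed, failed, inconclusive."""
    per_day = {}
    for date, outcome in results:
        per_day.setdefault(date, Counter())[outcome] += 1
    return per_day

runs = [("2024-06-01", "passed"), ("2024-06-01", "failed"),
        ("2024-06-02", "passed"), ("2024-06-02", "passed"),
        ("2024-06-02", "inconclusive")]
for day, counts in summarize_runs(runs).items():
    print(day, dict(counts))
# 2024-06-01 {'passed': 1, 'failed': 1}
# 2024-06-02 {'passed': 2, 'inconclusive': 1}
```

Feeding these daily tallies into a chart is what turns the raw counts into a trend you can act on.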

Testing time saved

One of the main advantages of automated tests is that they save manual testing effort. Automated tests handle routine checks, while manual testers focus on higher-priority tasks, test modules that are difficult to automate, and explore the application using critical thinking. 

In general, testing time saved is a good metric. For example, if automated tests reduce manual testing effort from two days to two hours in a two-week sprint, that is a significant achievement that translates directly into money saved.
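The arithmetic for the sprint above is trivial but worth writing down, since it scales into annual figures (the 26-sprints-per-year assumption is illustrative):

```python
def manual_hours_saved(hours_before, hours_after):
    """Manual testing hours freed up per sprint by automation."""
    return hours_before - hours_after

# Two 8-hour days of manual regression reduced to two hours per sprint:
per_sprint = manual_hours_saved(16, 2)
per_year = per_sprint * 26   # roughly 26 two-week sprints in a year
print(per_sprint, per_year)  # 14 364
```

Multiplying the yearly figure by a loaded hourly rate converts the metric straight into the money-saved number management usually asks for.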

Number of stable and flaky tests

Flaky tests can fail for many reasons, and they are a common problem in many teams. All automation effort is wasted if the team spends more time maintaining automated tests than running them to find defects in the software. As a result, teams become distrustful of automated tests and eventually decide to return to manual testing of the functionality.

To avoid this problem, it is better to start with a small number of automated tests and run them constantly, separating stable tests from flaky ones.
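One simple heuristic for that separation, assuming you keep a per-test history of pass/fail outcomes (the threshold value is an assumption to tune for your project):

```python
def classify_tests(history, flaky_threshold=0.05):
    """Split tests into stable and flaky by their run history.

    `history` maps a test name to a list of outcomes (True = passed).
    A test with mixed outcomes and a failure rate above the threshold
    is treated as flaky; a test that always fails is not flaky, just
    broken, and stays out of the flaky bucket.
    """
    stable, flaky = [], []
    for name, outcomes in history.items():
        failure_rate = outcomes.count(False) / len(outcomes)
        if 0 < failure_rate < 1 and failure_rate >= flaky_threshold:
            flaky.append(name)
        else:
            stable.append(name)
    return stable, flaky

history = {
    "login_test": [True] * 20,                        # always passes
    "checkout_test": [True, False, True, True, False,
                      True, True, True, True, True],  # fails sporadically
}
stable, flaky = classify_tests(history)
print(stable, flaky)  # ['login_test'] ['checkout_test']
```

Tracking the ratio of stable to flaky tests over time shows whether your suite is earning or losing the team's trust.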

Amount of risks mitigated

You need to prioritize testing based on risk. Risks include unexpected events that may affect the business, defect-prone application areas, and any past or future events that may influence the project.

A good approach is to measure the success of automation in terms of risk reduction. To do this, you need to rank risks from high to low priority, then automate test cases based on risk priority and track the number of risks that have been reduced due to automated tests.
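A minimal way to track this, assuming each risk carries a priority and a flag that flips once automated tests cover it reliably (the schema and the sample risks are illustrative):

```python
from collections import Counter

def mitigated_risks(risks):
    """Count risks mitigated by automated tests, grouped by priority."""
    return Counter(r["priority"] for r in risks if r["covered"])

risks = [
    {"name": "payment failure",     "priority": "high",   "covered": True},
    {"name": "slow search results", "priority": "medium", "covered": False},
    {"name": "broken footer link",  "priority": "low",    "covered": True},
]
counts = mitigated_risks(risks)
print(counts["high"], counts["medium"], counts["low"])  # 1 0 1
```

Reporting "2 of 3 high-priority risks covered by automation" is far more persuasive to stakeholders than a raw test count.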

Equivalent Manual Test Effort (EMTE)

Equivalent Manual Test Effort is a classic concept in software quality assurance: it estimates how much manual testing effort it would take to execute your automated tests by hand. Tracking EMTE over time shows how much testing capacity automation is effectively adding to the team.
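EMTE is usually computed by estimating, for each automated suite, how long a manual run would take and summing those times. A minimal sketch with made-up suite names and figures:

```python
def emte_hours(manual_estimates):
    """Equivalent Manual Test Effort: total manual hours the automated
    suite replaces each time it runs."""
    return sum(manual_estimates.values())

# Estimated manual execution time (hours) per automated suite; the
# suite names and figures are illustrative assumptions.
suites = {"smoke": 2.0, "regression": 12.5, "api_contract": 4.0}
runs_per_sprint = 10  # automation can run far more often than manual testing
print(emte_hours(suites))                    # 18.5
print(emte_hours(suites) * runs_per_sprint)  # 185.0
```

The second number is the interesting one: because automated suites run many times per sprint, the equivalent manual effort quickly exceeds what any manual team could deliver.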

Defect containment efficiency

A metric designed to answer the question of how likely your tests are to find a bug before release: it compares the defects caught during testing with those that escape to production.
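One common formulation divides defects caught during testing by all known defects; a sketch, assuming both counts are available from your bug tracker:

```python
def defect_containment_efficiency(found_in_testing, escaped_to_production):
    """Share of all known defects that testing caught before release."""
    total = found_in_testing + escaped_to_production
    return found_in_testing / total if total else 0.0

# 45 defects caught by tests, 5 reported by users after release:
print(f"{defect_containment_efficiency(45, 5):.0%}")  # 90%
```

A rising containment figure after the move to automation is strong evidence that the automated suite is doing real defect-finding work.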

Ease of use of automated tests

This is a subjective metric, but it still deserves a place. If automation is implemented successfully, automated tests do not need special support: anyone can run them easily, the test code is simple and understandable, and the results, logs, dashboards, and screenshots are easy to inspect.

Ambiguous metrics for test automation

Let's look at classic testing metrics and see why they are not suitable for assessing the impact of test automation.

The number of tests. A large number of tests does not mean that the automation team works effectively. This classic metric shows how many tests a team has written and implies that more tests guarantee better product quality. But it doesn't. Sometimes it is better to combine several tests into one: the work gets done better while the test count decreases. A purely quantitative metric is primitive, and it can motivate engineers to create tests solely to improve the numbers.

The number of test runs also neglects the quality aspect. For example, a team may continuously perform smoke tests, but a large part of the application may not be tested at all.

Defects found is a metric that implies that the more defects you find, the more effective your testing team is. However, this is not quite the right approach. If the developers are working well, the number of defects should decrease as the code matures; fewer and fewer defects means the code is stable.

Code coverage is a metric that project managers like. It measures how much of the code is exercised by tests. However, achieving the coveted 100% code coverage is often impractical and encourages engineers to write tests designed merely to pass.

Automated test coverage. Many companies want to automate as many tests as possible, but this is not as good as it might seem at first glance. Some tests are not suitable for automation, for example, exploratory testing or one-off usability checks.

Tools for measuring test automation metrics

It is time to decide where best to collect, calculate, and visualize the selected metrics.

  • Spreadsheets and dashboards. The most accessible option is Google Sheets: define the metrics and visualize them with charts. However, on a large project this tool's capabilities may not be enough.
  • Test management tools. Some automation testing metrics can be counted using task management tools such as Jira. However, this option is not universal either, since such tools cannot always handle highly specialized tasks like measuring the impact of test automation.
  • Platforms for test automation. If you've moved into test automation, chances are you've been thinking about moving to an all-in-one automation platform, or you're already using one of those tools. In this case, it will be easy to calculate and beautifully present the necessary metrics for the effectiveness of test automation. For example, consider the Zebrunner Testing Platform.

Zebrunner Testing Platform is a test automation platform that allows you to combine all processes in one place and get high-quality reports and analytics.

You gain the whole picture of test automation. Numerous dashboards and widget templates allow you to track any metrics, from the simplest ones (for example, pass rate over time) to complex ones like automation growth forecasts. Measuring team productivity and ROI is also possible. Moreover, you can easily add custom widgets if your project has special needs.

About the author

Nadezhda Yushkevich

Content Writer and Tech Journalist

With 11 years of work experience in journalism, media management, PR, and content marketing, she has specialized in IT and startup areas for the last five years. Interested in practices, trends, and approaches in the field of quality assurance.