Tests#
A test is an execution of a particular test case implementation at a particular time. Optionally, and if applicable, a test can be associated with one or multiple test sessions, and vice versa.
A test session is a group of test interactions with an application under test that take place within a given time frame. A Selenium WebDriver session is one example of a test session; however, in Zebrunner you are not limited to WebDriver sessions only: you can define your own sessions according to your testing needs (e.g. Smart TV app testing, gaming console apps, etc.).
Test statuses#
A test can have one of the following statuses:
| Status | Definition |
|---|---|
| IN PROGRESS | Indicates that a test is currently being executed |
| PASSED | Indicates that a test execution was successful and no assertions failed |
| FAILED | Indicates that a test execution failed (due to a runtime error or failed assertions) |
| SKIPPED | Indicates that a test was scheduled for execution but was not executed for some reason (e.g. a dependent test failed) |
| ABORTED | Indicates that a test was interrupted by a user or the system (e.g. on timeout) |
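For teams that consume these statuses programmatically (for example, in a custom reporting integration), the lifecycle can be modeled as a simple enum. The type below is a hypothetical sketch, not part of any Zebrunner SDK:

```java
/** Hypothetical model of the Zebrunner test statuses described above. */
public enum TestStatus {
    IN_PROGRESS, // the test is currently being executed
    PASSED,      // execution succeeded, no failed assertions
    FAILED,      // runtime error or failed assertion
    SKIPPED,     // scheduled but not executed (e.g. a dependent test failed)
    ABORTED;     // interrupted by a user or the system (e.g. on timeout)

    /** Every status except IN_PROGRESS is terminal. */
    public boolean isTerminal() {
        return this != IN_PROGRESS;
    }
}
```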
Test attributes#
Additionally, a test may have the following optional attributes and associations:
- Test sessions (see the definition above)
- Test case references
- Linked issue reference
- Logs
- Screenshots
- Labels
- Stability score
This guide provides an overview of actions on tests and sessions, as well as key Zebrunner capabilities for results analysis.
Working with tests#
Most of the actions on tests and test sessions can be performed from Launch (accessed from Launches by opening any launch) and Test details (accessed from Launch by opening any test).
Browsing sessions#
Each test session contains information such as the browser version, platform name, device name, logs (which can be downloaded as a separate file), and video recordings. Moreover, Zebrunner maps tests and test sessions to each other as a many-to-many relationship:
- One test may use multiple sessions (e.g. if 2-3 browsers were started during a test; see the sketch after this list). In this case, all the test sessions with their attributes will be displayed for the test both on Launch and Test details.
- Multiple tests can be executed within one session. In this case, all the tests linked to this test session will be displayed under its attributes.
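To illustrate the first case, here is a minimal Selenium sketch in which a single test deliberately starts two WebDriver sessions. The class, test name and URLs are hypothetical; with a Zebrunner agent configured for your framework, both sessions would be reported and linked to this one test:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class TwoBrowserTest {

    // A single test that starts two WebDriver sessions, e.g. to verify
    // that a change made in one browser is visible from another.
    public void changesArePropagatedBetweenUsers() {
        WebDriver admin = new ChromeDriver();    // session #1
        WebDriver visitor = new FirefoxDriver(); // session #2
        try {
            admin.get("https://example.com/admin");
            visitor.get("https://example.com");
            // ... perform actions as admin, assert the results as visitor ...
        } finally {
            admin.quit();
            visitor.quit();
        }
    }
}
```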
To access the artifacts of a test session (if any are related to a test), click the X session(s) link on the needed test.
A half-page view opens, displaying all the available information for the test session(s) and the related tests, if any.
The same information is also available on Test details right after opening the test.
Execution history#
It can be useful to check the history of test results and compare them by different criteria, making it easier to identify flaky tests and track changes in their behavior.
The Execution history panel in Zebrunner allows users to see and analyze the results of the last 10 executions of the same test (matched by its test function) and to compare durations and attached artifacts for a more thorough analysis.
This way, for both the current test and the previous ones, it's possible to check extended details such as test artifacts (logs, screenshots, videos, etc.) and attached labels, link failures to Jira issues, and mark tests as passed or failed for a deeper understanding of the results.
Execution time deviation in percent#
Some tests on the Execution history panel show an icon with a negative or positive time value in percent. If a test's execution time deviates from the average across its history by more than a factor of two, the icon displays the deviation in percent (- or +, depending on whether the run was faster or slower than the average).
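As a rough illustration of how such a deviation could be computed (the exact formula and threshold Zebrunner uses are internal, so treat this as an assumption):

```java
/** Illustrative only: signed percent deviation of one run from the history average. */
public final class ExecutionTimeDeviation {

    /**
     * @param durationMillis    duration of the run in question
     * @param avgDurationMillis average duration across the execution history
     * @return deviation in percent (negative = faster than average)
     */
    public static double percentDeviation(long durationMillis, double avgDurationMillis) {
        return (durationMillis - avgDurationMillis) / avgDurationMillis * 100.0;
    }

    public static void main(String[] args) {
        // A 30 s run against a 10 s average deviates by +200%, i.e. more than
        // a factor of two, so the icon would be shown for this execution.
        System.out.println(percentDeviation(30_000, 10_000.0)); // 200.0
    }
}
```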
Stability indication#
In addition to Execution history, Zebrunner provides the Test stability rate feature. It allows you to understand whether your tests can be considered stable or unstable on the basis of the latest 10 execution results.
With this functionality, it's easier to decide when to refactor or fix the test code to get more stable test results.
The stability rate can be checked for every test on Launch; the percentage is interpreted as follows (a small computation sketch follows the list):
- From 0 to 69% — the test fails often (marked red)
- From 70 to 89% — the test is unstable and possibly flaky (marked yellow)
- From 90 to 100% — the test passes stably (marked green)
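A minimal sketch of how these bands could be applied, assuming the rate is simply the share of passing runs among the last 10 executions (the exact formula is an assumption):

```java
/** Illustrative only: bucket a stability rate into the colors described above. */
public final class StabilityRate {

    public enum Band { RED, YELLOW, GREEN }

    /** @param passed number of PASSED results among the last 10 executions */
    public static int rate(int passed) {
        return passed * 100 / 10; // assumed formula: share of passing runs
    }

    public static Band band(int ratePercent) {
        if (ratePercent < 70) return Band.RED;    // 0-69%: often failing
        if (ratePercent < 90) return Band.YELLOW; // 70-89%: possibly flaky
        return Band.GREEN;                        // 90-100%: stable
    }
}
```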
AI/ML results classification#
Zebrunner’s AI/Machine Learning-based mechanism automatically categorizes failure reasons by stack traces, helping QA engineers save time on test analysis and prioritize daily activities.
By default, there are 4 main failure reasons detected by Zebrunner’s AI/ML:
- BUSINESS ISSUE – a potential bug/defect in your application under test.
- LOCATOR ISSUE – one of the most common failures, related to UI locator or layout changes.
- INFRA ISSUE – a problem with 3rd party infrastructure components like network, computer, framework (Selenium, Appium, etc.), application, database, etc.
- UNCATEGORIZED – the default tag for an unknown failure that is not yet marked by the system or human.
After a test fails or is skipped, Zebrunner's AI/ML automatically detects the failure reason with a certain level of accuracy (up to 100%).
If the failure reason cannot be associated with any of the above-mentioned categories, or the AI/ML requires additional training, the UNCATEGORIZED tag is added. It means the failure reason needs additional investigation and human action.
The AI classification is performed when a test finishes, possibly with a small delay. If the user assigns a failure tag first during this delay, their assignment always takes precedence and the system assignment is ignored.
If the AI/ML accuracy ratio is below 90%, a special exclamation mark appears beside the failure label. This means the AI/ML needs the user's attention to check the correctness of the classification. The more rounds of human training the system has undergone, the more accurate the results will be.
To change the category assigned by AI/ML, simply select the most suitable one in the modal and click Save.
Assigning a failure tag to multiple tests at once
If you want to assign a failure tag to tests in bulk, select them using checkboxes. Once you've made your selection, you'll find the bulk action Assign failure tag. Click it and follow the prompts.
The new failure tag will override any tags previously assigned to the tests. If the new tag matches the tag that was assigned by the AI/ML, an upvote will be given to the AI/ML. If it matches a user-assigned tag, no new assignment will occur and the action will be ignored.
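A simplified model of the precedence rules described above; the types are hypothetical and the real logic lives inside Zebrunner:

```java
/** Hypothetical model of the failure-tag assignment rules described above. */
public final class FailureTagAssignment {

    public enum Source { AI, USER }

    public record Tag(String name, Source source) {}

    /**
     * @param current  tag currently on the test, or null if none
     * @param assigned tag name the user assigns in bulk
     * @return the tag that ends up on the test
     */
    public static Tag assign(Tag current, String assigned) {
        if (current == null) {
            return new Tag(assigned, Source.USER);
        }
        if (current.name().equals(assigned)) {
            // Same tag: an AI match earns the model an upvote; a user match
            // is simply ignored. Either way, the tag itself does not change.
            return current;
        }
        return new Tag(assigned, Source.USER); // different tag: override
    }
}
```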
Updating result status#
If, after reviewing the test artifacts, the user realizes that the test result is a false positive (or false negative), they can manually change the test status accordingly.
To do this, perform the following steps:
- Pick a test and go to More Options on the right
- Press Mark as Passed (or Mark as Failed) and confirm the action
Info
Availability of the Mark as Passed/Mark as Failed option depends solely on the current test status.
Alternatively, the test status can be updated on Test details by selecting it in the dropdown in the upper right corner of the screen:
Test cases linking#
One of the most useful Zebrunner features is the ability to keep a correlation between manual test cases and automated tests. This is possible thanks to integrations with popular test case management (TCM) tools such as TestRail, XRay and Zephyr.
After integrating Zebrunner with TestRail, XRay or Zephyr, users can:
- Link automated tests to test cases in the chosen TCM system (see the sketch after this list)
- Preview linked test cases in the Zebrunner UI
- Push automated test execution results to the corresponding test cases in the chosen TCM system (more on this here)
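The link between an automated test and a TCM case is typically declared in test code via the reporting agent. As a minimal sketch, the Zebrunner Java agent exposes registrar helpers along the following lines; the exact class and method names may differ between agent versions, so verify them against the agent documentation (the case ID is a placeholder):

```java
import com.zebrunner.agent.core.registrar.TestRail;

public class CheckoutTest {

    public void userCanCompleteCheckout() {
        // Associate this automated test with a TestRail case.
        // "C123" is a placeholder case ID used for illustration.
        TestRail.setCaseId("C123");
        // ... test logic ...
    }
}
```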
Once the integration with TestRail, XRay or Zephyr is configured, test cases from these systems that are linked to tests in Zebrunner are displayed on Launch as special labels.
Clicking a label opens a test case preview with a brief description of the test case (including its title, priority, preconditions, steps and expected result), so there is no need to look up the corresponding test case in the TCM system.
From the preview, users can open the linked test case in the TCM tool in one click.
Issue linking#
When a test failure is caused by a bug/issue/defect in the application, you may want to file it in a bug tracking system and keep a correlation between the bug and the failed test for the sake of traceability.
Zebrunner can help you keep track of those links and even create issues in one of the integrated systems directly from the Zebrunner UI.
How it works: if, after a failure is linked to a Jira issue, Zebrunner identifies a similar stack trace for the same test, and the issue is not yet in a closed status in Jira, the failure will be automatically linked to the issue and counted separately in the launch statistics.
As soon as the Jira issue is closed or unlinked manually, the automatic linking stops, but the existing links are still displayed for the previous tests in order to keep the history.
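A simplified model of this auto-linking rule (hypothetical types; the actual stack-trace matching is internal to Zebrunner):

```java
/** Hypothetical sketch of the issue auto-linking rule described above. */
public final class IssueAutoLinker {

    public record Failure(String testName, String stackTrace) {}
    public record LinkedIssue(String key, boolean closed, Failure linkedFailure) {}

    /** Returns true if a new failure should inherit the existing issue link. */
    public static boolean shouldAutoLink(Failure newFailure, LinkedIssue issue) {
        return !issue.closed()                                                 // issue still open in Jira
            && issue.linkedFailure().testName().equals(newFailure.testName()) // same test
            && similar(issue.linkedFailure().stackTrace(), newFailure.stackTrace());
    }

    // Placeholder for Zebrunner's internal stack-trace similarity check.
    private static boolean similar(String a, String b) {
        return a.equals(b); // real matching is fuzzier; exact match for illustration
    }
}
```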
Info
In order for this feature to work properly, make sure that you have configured the Jira integration.
There are two ways to link a failure to a Jira issue: link an existing Jira issue, or create a new Jira issue directly from Zebrunner.
Link an existing Jira issue#
To link a Jira issue to a test, perform the following steps:
- Click the Link issue button
- Start typing the issue ID or summary in the search field
- Choose the needed issue
- Click the Link button
The linked issue will be added to the linking history.
A red issue label will be added to the test, and the number of known issues in test statistics at the top will be updated as well. You can quickly access the linked issue in Jira by clicking the corresponding Jira label.
Create and link a new Jira issue from Zebrunner#
Besides linking an already existing Jira issue to a test, it's possible to create new Jira issues of various types right from Zebrunner.
To do this, perform the following steps:
- Click the Link issue button
- Click the Create a new issue button
- Define the Jira project and issue type, and fill in the required fields
- Click the Create and link button
The issue will be automatically added to Jira with the provided info and linked to the chosen tests. You can access the newly created issue in Jira by clicking the corresponding Jira label.
This functionality is also available from the Test details.
Info
Some Jira projects may have required fields that are not supported by Zebrunner. In this case, the issue will not be created in Jira. Please contact us at support@zebrunner.com to have your required field added to the system.
Labeling tests#
Zebrunner allows you to label tests and use those labels for further grouping of tests and results analysis in the system. Labels can be seen both on the Test view and the Launch view (collapsed by default).
At the time of writing, labeling is not possible via the UI and can only be done directly from test code, as in the sketch below. For more information on the reporting APIs that allow this, see the reporting section of the documentation.
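For example, with the Zebrunner Java agent the attachment may look like the sketch below; the registrar class and method names depend on your agent and its version, so treat them as an assumption and check the reporting docs:

```java
import com.zebrunner.agent.core.registrar.Label;

public class PaymentTest {

    public void refundIsProcessed() {
        // Attach labels to the currently running test; they later appear
        // on the Launch and Test views and can be used for grouping.
        Label.attachToTest("feature", "payments");
        Label.attachToTest("priority", "P1");
        // ... test logic ...
    }
}
```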
Tip
In order to expand (or collapse) labels for all tests in the launch, go to More Options in the upper right corner and select the Show labels (or Hide labels) option.
Filtering, sorting and grouping#
Launch contains all the results of the tests executed within a particular launch. The more tests performed inside one launch, the more difficult they can be to manage and analyze. To simplify this process, Launch has filters allowing users to search, group and sort tests according to different criteria.
You can search by test name and stack trace via the Search field.
You can filter tests by status (failed, failed (no linked issues), skipped, passed, aborted, in progress).
You can group your tests by:
- File
- Directory
- Label (e.g. test case ID in TCM, feature, priority, etc. set in your code)
- Failure (stack trace)
- Failure tag (AI/ML-set categories)
- Maintainer
To group your tests by any of these criteria, choose the necessary option from the Group by dropdown.
You can also sort tests by:
- Execution order (by default)
- Status (failed and skipped tests at the top, followed by passed)
- Failure tag (AI/ML-set categories)
- Fastest first
- Slowest first
- Name
To sort your tests by any of these criteria, choose the necessary option from the Execution order dropdown.
Info
You can apply search, grouping and sorting simultaneously. Press Reset to return to the default view.
Navigation between tests#
Zebrunner offers seamless navigation between test executions within an automation launch.
The navigation can be performed via designated controls on the UI or using keyboard shortcuts:
- 'h': navigate to the previous non-successful test
- 'j': navigate to the previous test
- 'k': navigate to the next test
- 'l': navigate to the next non-successful test
A non-successful test means a test execution with any status other than PASSED.
The tests you can navigate to constitute a navigable set of tests. This set is built from the tests visible on the Launch page. That means if you use filters, sorting, or grouping criteria on the Launch page, these criteria shape the tests in the navigable set.
If you share a direct link to a test after applying filters, sorting, or grouping, the link will encompass all applied criteria (not the specific tests included in the navigable set). In cases where a test is no longer part of the navigable set, or if no filters, sorting, or grouping were applied, the navigable set will include all launch tests, sorted in the default order.
Any updates made to tests on the Test details do not affect the navigable set.
Example: You are on Test details viewing a failed test, and the navigable set includes only tests that have failed. You change the status of the test to PASSED, navigate to the next test, and then navigate to the previous non-successful test. The last navigation takes you to the test that was just marked as PASSED, because the test was failed when the navigable set was created, and the navigable set remains unchanged.
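A minimal sketch of this snapshot behavior (hypothetical types, illustrative only):

```java
import java.util.List;

/** Hypothetical sketch of the navigable-set snapshot behavior described above. */
public final class NavigableTests {

    // Snapshot of the test names visible on the Launch page when it was opened
    // (e.g. with a "failed" status filter applied). Later status changes made
    // on Test details do NOT rebuild this list.
    private final List<String> snapshot;
    private int position = 0;

    public NavigableTests(List<String> visibleOnLaunchPage) {
        this.snapshot = List.copyOf(visibleOnLaunchPage);
    }

    /** 'k': move to the next test in the snapshot. */
    public String next() {
        position = Math.min(position + 1, snapshot.size() - 1);
        return snapshot.get(position);
    }

    /** 'j': move to the previous test in the snapshot. A test re-marked as
     *  PASSED after the snapshot was taken is still reachable here, exactly
     *  as in the example above. */
    public String previous() {
        position = Math.max(position - 1, 0);
        return snapshot.get(position);
    }
}
```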