
Test runs#

A test run represents a collection of tests executed together at a particular time (in a way, a test run can be treated as a test suite execution).

Test runs grid

Test run statuses#

A test run can have one of the following statuses:

Status Definition
QUEUED Indicates that a run was scheduled via Launcher, but has not actually started yet
IN PROGRESS Indicates that a run contains tests that are still being executed
PASSED Indicates that all tests passed or were classified as known issues
FAILED Indicates that a run contains at least one failure that is not classified as a known issue or skipped test
SKIPPED Indicates that there were no tests executed
ABORTED Indicates that a run was interrupted by a user or system (e.g. on timeout)

Test run attributes#

Additionally, a test run may have the following optional attributes:

Environment

Indicates the environment where the application under test is deployed (e.g. stage, dev, production). The environment can only be set before execution via reporting agent parameters and cannot be updated once set.

Platform

Indicates the actual platform used for application testing. In the case of web application UI tests, it can be a combination of a browser name and version. By default, the platform is set to API. Can be set via reporting agent parameters and cannot be updated once set.

Labels

Labels are a set of arbitrary key-value pairs that can be attached to a test run via reporting agent APIs, allowing you to link custom metadata.
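
For illustration, below is a minimal sketch of attaching labels with the Java reporting agent. The Label registrar class and its attachToTestRun method are modeled on the Java agent's conventions; treat the exact names as assumptions and verify them against the documentation of the agent you use.

```java
import com.zebrunner.agent.core.registrar.Label;

public class RunLabelsExample {

    // Attach arbitrary key-value metadata to the current test run.
    // Class and method names are assumptions based on the Java agent's API.
    public static void attachRunLabels() {
        Label.attachToTestRun("feature", "checkout");
        Label.attachToTestRun("team", "payments");
    }
}
```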

Build

Indicates a revision/version/build number of the application under test. Can be set via reporting agent parameters and cannot be updated once set.

Notification channels

Notification channels specify how users should be notified about important events happening during a test run (e.g. a failed test or run completion). Notification channels can be defined via reporting agent configuration or set via Launcher.
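
As a hedged example, the sketch below shows how the configuration-driven attributes (environment, build, notification channels) might be supplied to the Java reporting agent through system properties before the run starts. The property keys shown here are assumptions based on the Java agent's conventions; check your agent's documentation for the exact names.

```java
public class RunConfigurationExample {

    // Set run-level attributes before the reporting agent initializes the run.
    // These values cannot be updated once the run has started.
    // Property keys are assumptions based on the Java agent's conventions.
    public static void configureRun() {
        System.setProperty("reporting.run.environment", "stage");                 // environment
        System.setProperty("reporting.run.build", "1.42.0");                      // build number
        System.setProperty("reporting.notification.slack-channels", "qa-alerts"); // notification channel
    }
}
```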

Working with test runs#

Most operations on a test run can be performed from either the Test runs grid or a detailed Test run view (test results). Below is a summary of the key operations.

Rerun#

Rerun allows you to repeat the execution of some or all tests using the exact same configuration as the original run. Tests executed in the scope of a rerun will update the corresponding executions in the original run.

Info

The Rerun action is only available for runs executed via Launcher or an integrated CI server

To perform a rerun, complete the following steps:

  1. On the Test runs grid, pick a run and go to More Options on the right
  2. Press Rerun. The Rebuild test run dialogue will appear
  3. Choose what tests you want to rerun: all tests or failures only
  4. Confirm the selection by pressing Rerun

Rerun test run

Alternatively, a rerun may be triggered from the Test run view (test results) by clicking the Rerun / Build Now icon:

Rerun from test run view

Bulk rerun

You can rerun several test runs at a time. Select multiple test runs using checkboxes on the Test runs grid and click Rerun in the bulk actions panel (it will appear above the grid)

Build Now#

Build Now allows you to repeat the execution of some or all tests with the ability to override the configuration of the original run. Tests executed as a result of Build Now will create a new test run.

Info

The Build Now action is only available for runs executed via Launcher or an integrated CI server

To run a test run with updated configurations (Build Now), complete the following steps:

  1. On the Test runs grid, pick a run and go to More Options on the right
  2. Press Build Now. You’ll be redirected to the wizard with the configurations provided during the previous execution
  3. Change the test run configurations if needed (learn more about Launcher)
  4. Click Launch

Build Now test run

Alternatively, Build Now may be triggered from the Test run view (test results) by clicking the Rerun / Build Now icon:

Build Now from test run view

Abort#

To abort a test run that is in progress, perform the following steps:

  1. On the Test runs grid, pick a run and go to More Options on the right
  2. Press Abort

The test run will be aborted.

Abort test run

Bulk abortion

You can abort several test runs at a time. Select multiple test runs using checkboxes on the Test runs grid and click Abort in the bulk actions panel (it will appear above the grid). Note: this action will only be available (active) when the selection contains runs that are eligible to be aborted

Delete#

To delete a test run, perform the following steps:

  1. On the Test runs grid, pick a run and go to More Options on the right
  2. Press Delete and confirm the action

The test run will be removed.

Delete test run

Bulk deletion

You can delete several test runs at a time. Select multiple test runs using checkboxes on the Test runs grid and click Delete in the bulk actions panel (it will appear above the grid)

View run summary#

Run summary is a panel available on the detailed Test run view.

It provides quick access to core test run attributes (such as platform, environment (if set), milestone (if set), labels and attached artifacts). Additionally, it contains a test results donut chart and a pass rate trend for this particular suite.

Run statistics

Review#

Zebrunner helps maintain a high level of collaboration inside the automation team: when someone has already reviewed the results of a test run and performed the required actions (e.g. reported to the team or filed an issue), they can mark the run as reviewed and leave a comment for other users.

To mark a test run as reviewed, perform the following steps:

  1. On the Test runs grid, pick a run and go to More Options on the right
  2. Click Mark as reviewed. The Comments dialogue will appear
  3. Provide a comment if necessary
  4. Click Mark as reviewed

A reviewed icon will appear beside the test run name, together with a comment icon (if a comment was provided; you can open the comment by clicking on the icon).

Mark as reviewed

Assign to milestone#

Milestones provide an easy and convenient way to organize your test runs, track the testing progress and plan release timelines.

By linking runs to milestones, you can organize test strategy according to the upcoming product update or feature release.

Info

A test run can only be assigned to an existing open milestone, meaning that the milestone should be created beforehand. Please refer to the Milestones docs to learn how to do that

To assign a test run to a milestone, perform the following steps:

  1. On the Test runs grid, pick a run and go to More Options on the right
  2. Choose Assign to milestone. The assignment dialogue will appear on the screen
  3. Choose the milestone to assign and click Assign.

Assign to milestone

Alternatively, a milestone can be assigned from the Test run view (test results) by clicking More Options > Assign to milestone.

Bulk assignment

You can assign several test runs to a milestone at a time. Select multiple test runs using checkboxes on the Test runs grid and click Assign to milestone in the bulk actions panel (it will appear above the grid)

Sync with TCM#

Zebrunner provides an ability to push test executions to external test case management systems (TCMs) on test run finish. For some TCMs, it is possible to upload results in real-time during the test run execution.

Test cases and run mapping

In order for this functionality to work, tests should be mapped to the corresponding test cases, and a test run should be mapped to the corresponding run in a TCM system. To learn more about how to create such mappings, please refer to the documentation of the reporting agent you are using
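
As an illustration, a mapping with the Java reporting agent and TestRail might look like the sketch below. The TestRail registrar class and its methods are assumptions modeled on the Java agent's API, and the ids and names are placeholders; refer to your agent's documentation for the exact calls.

```java
import com.zebrunner.agent.core.registrar.TestRail;
import org.testng.annotations.BeforeSuite;
import org.testng.annotations.Test;

public class TcmMappingExample {

    @BeforeSuite
    public void mapTestRun() {
        // Run-level mapping: point the Zebrunner run at a TestRail suite.
        // Method names are assumptions based on the Java agent's registrar API.
        TestRail.setSuiteId("512");
        TestRail.setRunName("Regression - build 1.42.0");
        TestRail.enableRealTimeSync(); // push results while the run is still in progress
    }

    @Test
    public void checkoutSmokeTest() {
        // Test-level mapping: link this test to one or more TestRail cases.
        TestRail.setCaseId("C1001", "C1002");
        // ... actual test logic ...
    }
}
```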

This section describes how to sync test executions of a run that was not initially configured for sync. It may be useful in cases when:

  1. Users reviewing results need to manually update test statuses first
  2. Executions were synced to one TCM run, but for some reason need to be synced to another
  3. Sync was simply not enabled for the run, even though it was intended

To sync test executions, perform the following steps:

  1. On the Test runs grid, pick a run and go to More Options on the right
  2. Choose Sync with TCM. The syncing dialogue will appear
  3. Enter the id of the target test run in the TCM and click Sync. Note: if this run already has a TCM run id associated with it, this id will be pre-set in the input box. You can override this value to sync executions to another TCM run of your choice.

Sync with TCM

Alternatively, sync can be performed from the Test run view (test results) by clicking More Options > Sync with TCM.

Compare#

It can be useful to compare the results of several test runs in one place.

To do this from the Test runs grid, select the needed test runs with the same name using the checkboxes (bulk action), click More at the top and choose Compare.

Comparing test runs

You’ll be redirected to the Comparison page with the stack trace and execution time comparison for every test within the test run.

You can hide identical tests for better analysis.

Comparison page

Filtering and sorting#

With a growing number of test runs on the Test runs grid, it can be useful to filter them by different attributes. It is also important to have quick access to the most relevant filters. To fulfill this need, Zebrunner users can construct filters based on multiple predicates and work with data in a more efficient and convenient way.

Test runs filters

Below are some of the attributes that can be used for filtering:

  • Test run name
  • Browser
  • Platform
  • Status
  • Environment
  • Locale
  • Date
  • Reviewed
  • Milestone

The most-used filters are shown in the UI next to the search input box, while others are not displayed by default and can be accessed by clicking Add filter. Depending on the attribute type, it is possible to apply exact match predicates (multiple predicates for the same attribute are combined with a logical OR) or ranges (in the case of dates). All the applied predicates are combined using a logical AND.

Saved filters#

It is possible to save a filtering preset for quick and convenient future access. To do this, complete the following steps:

  1. Apply all the predicates you need in the filter panel
  2. Click Save (it will become active once at least one predicate is applied)
  3. Give this filter a name and click Save

The new filter will be saved to the list along with brief info (available by clicking on the info icon) and the filter owner.

Filters access control and favorites

  • By default, after you save a new filter to the list, it is private, meaning that only you can see and use it. You can make it public, allowing all members of your project to use it, by going to More Options next to the filter name and clicking Make public
  • It is possible to define a set of favorite filters that will always be displayed at the top of the list. To do this, hit the star icon. Click it once again to remove the filter from favorites

Saved filters

Sharing test run report#

Sometimes it may be useful to bring someone's attention to a test run by sharing it. Zebrunner supports sharing options that work both for users inside and outside of your workspace.

via link#

Sharing results via link allows you to send a test run to another workspace user for further inspection in Zebrunner.

To quickly obtain the link to a test run, perform the following steps:

  1. On the Test runs grid, pick a run and go to More Options on the right
  2. Click Copy link

A link to the test run will be copied to your clipboard.

Share via link

Alternatively, you can obtain the link from the Test run view (test results) by clicking Share in the top right corner and hitting the Copy link button:

Share via link

via email#

Sharing results via email allows you to send a compiled test run results report to anyone, not only users of your workspace.

To send test results via email, perform the following steps:

  1. On the Test runs grid, pick a run and go to More Options on the right
  2. Click Send as email. The sharing dialogue will appear
  3. Enter recipients' emails (you can enter multiple at once)
  4. Click Send

Share via email

Alternatively, sharing can be performed from the Test run view (test results) by clicking Share in the top right corner, entering recipients' emails and providing an optional message:

Share via email

via export to HTML#

Sharing results via export allows you to send a compiled HTML test run results report to anyone, not only users of your workspace.

To export an HTML report, perform the following steps:

  1. On the Test runs grid, pick a run and go to More Options on the right
  2. Click Export to HTML

The report will be generated, and the download to your machine will start automatically.

Export to HTML

Alternatively, export can be performed from the Test run view (test results) by clicking More Options > Export to HTML.