Reporting API guide#

The REST APIs are provided for developers who want to programmatically integrate the test framework of their choice with Zebrunner.

The REST API is based on open standards and can be used by any programming language or HTTP client library.

Core concepts#

The Reporting APIs are built around the following concepts:

  • test runs, representing one or more test executions. Both test runs and test executions can have labels attached and can produce certain artifacts (e.g. logs) that can also be tracked and attached to the test run report.

  • test sessions (remote driver sessions), representing a variety of interactions with a certain target environment (e.g. a browser or device) within a given timeframe. Unlike test executions, sessions created in the scope of a test run form a flat structure: sessions can co-exist independently of each other and even independently of test executions (and vice versa).

Difference between Launches and Test Runs

You may encounter a naming conflict between the concepts of a Launch and a Test Run. What is called a "Test Run" in this documentation actually refers to a Launch in Zebrunner - a collection of automated test executions. At the same time, what Zebrunner calls a "Test Run" is a planned execution of certain test cases (not automated tests) and belongs to the Test Case Management product of Zebrunner (included in the Testing Platform).

This collision exists for historical reasons. Initially, Zebrunner was an automation reporting tool only, and the API described in this documentation was created at that time. Over time, Zebrunner was enriched with test case management capabilities, and Test Runs were renamed to Launches. Since the API must remain backwards compatible, the original naming was retained, even though the actual meaning has changed.

The test run and session tracking (reporting) process is event-driven: events representing a typical test run lifecycle (e.g. test execution start, test execution finish, session start and session finish) are tracked independently of each other.

Preconditions#

In order to use the REST API, you must have:

  • an active Zebrunner user, on whose behalf the access token will be issued and the API will be accessed. This token has a long validity period. It must be exchanged for a short-lived authentication token in order to report test run lifecycle events (more on this below).

  • a valid Zebrunner API hostname

This information can be found on the “Account & profile” page.

Authentication#

Each API request needs to be authenticated using a Bearer token, as there is no session concept in REST.

The first step you need to perform is exchanging the Access Token for a short-lived authentication token. This token is valid for 12 hours, which should be enough to report even the longest test runs.

POST /api/iam/v1/auth/refresh

{
    "refreshToken": "<access-token>"
}

The response to this request contains an authToken field. This is the token you need to send in the Authorization HTTP header with every Reporting API call. The header format is Bearer <authToken>.
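
For illustration, here is a minimal Python sketch of this exchange using the requests library; the hostname is a placeholder, and the response field name follows the description above:

import requests

# Placeholder values - substitute your own API hostname and access token.
ZEBRUNNER_HOST = "https://mycompany.zebrunner.com"
ACCESS_TOKEN = "<access-token>"

# Exchange the long-lived access token for a short-lived auth token (valid for 12 hours).
response = requests.post(
    f"{ZEBRUNNER_HOST}/api/iam/v1/auth/refresh",
    json={"refreshToken": ACCESS_TOKEN},
)
response.raise_for_status()
auth_token = response.json()["authToken"]

# Every subsequent Reporting API call must carry this header.
AUTH_HEADERS = {"Authorization": f"Bearer {auth_token}"}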

Content negotiation#

At the moment of writing, the Reporting APIs support only JSON as the resource representation format.

Test run execution reporting#

Below is a step-by-step guide describing which APIs should be used in the process of tracking test run execution events. The Authorization header is omitted from the examples, but it must be provided with every API request.

The APIs are listed in “chronological order” - in the order they are meant to be called in the scope of a test run execution.

Test run start#

In order to report the test run execution start event, the following API endpoint should be used:

POST /api/reporting/v1/test-runs?projectKey={projectKey}

{
    "uuid": "c36233c4-d281-11eb-b8bc-0242ac130003",
    "name": "TestNG run",
    "startedAt": "2021-06-06T10:12:14.333+01:00",
    "status": "IN_PROGRESS",
    "framework": "testng",
    "config": {
        "environment": "PROD",
        "build": "1.2.13"
    },
    "milestone": {
        "id": 42,
        "name": "1.20.1"
    },
    "notifications": {
        "notifyOnEachFailure": true,
        "targets": [
            {
                "type": "SLACK_CHANNELS",
                "value": "qa-team"
            }
        ]
    }
}

Request params

  • projectKey [mandatory] - the project this test run belongs to. The possible values can be found in Zebrunner on the Projects tab.

Request payload

  • uuid [optional] - a preliminarily generated test run UUID. This UUID can later be used by the rerun functionality or by the execution environment to perform some actions after the run is completed. If the uuid value is not provided, it will be generated. If a test run with the given uuid already exists, the request will be treated as a rerun. If no test run with the given uuid is found, a new test run will be tracked.

  • name [mandatory] - test run display name. Even if there is no concept of a run name in the test framework you are using, it still has to be provided. It is recommended to supply some meaningful value that would make sense for users working with test run reports via the UI.

  • startedAt [mandatory] - an ISO8601 timestamp with an offset from UTC. Represents the test run execution start. The supplied value must be a timestamp in the past.

  • status [optional] - current status of the test run. Can be either QUEUED or IN_PROGRESS (default). QUEUED means that the test run has been initiated on the execution environment, but the underlying test framework has not started to execute tests yet. The QUEUED status might be useful for test runs initiated on a CI server.

  • framework [mandatory] - the test framework that is used for the test run.

  • config.environment [optional] - environment that is associated with the test run.

  • config.build [optional] - build that is associated with the test run.

  • milestone [optional] - a JSON object containing a reference to the milestone this run belongs to. The JSON object should contain either id or name. If both values are supplied, only id will be used. If there is no milestone with the given id or name in Zebrunner, no error will be thrown, but the response will contain warningMessages in its metadata.

  • notifications.enabled [optional] - specifies whether Zebrunner should send notifications with the test run results. The default value is true.

  • notifications.notifyOnEachFailure [optional] - specifies whether Zebrunner should send a notification to Slack/Teams on each test failure. Notifications will be sent even if the suite is still running. The default value is false.

  • notifications.targets [optional] - a JSON array containing targets to send notifications about the suite execution to. The type field must be either EMAIL_RECIPIENTS, MS_TEAMS_CHANNELS or SLACK_CHANNELS. The value field is a list of comma-separated channels or emails (depending on the type value). In the case of Teams or Slack, a notification will only be sent if the integration is properly configured in Zebrunner with valid credentials for the project the tests are reported to. Under the hood, Zebrunner splits the target values by comma, groups them by target type, and then sends the notifications.

The server response contains the generated test run ID, which should be used when reporting subsequent events.
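
As an illustrative (unofficial) Python sketch, starting a run and capturing its ID could look like this; the project key is hypothetical, and the snippet assumes the generated ID is returned in an id field of the response body:

from datetime import datetime, timezone
import requests

ZEBRUNNER_HOST = "https://mycompany.zebrunner.com"      # placeholder hostname
AUTH_HEADERS = {"Authorization": "Bearer <authToken>"}  # see the Authentication section

response = requests.post(
    f"{ZEBRUNNER_HOST}/api/reporting/v1/test-runs",
    params={"projectKey": "DEMO"},  # hypothetical project key
    headers=AUTH_HEADERS,
    json={
        "name": "TestNG run",
        "startedAt": datetime.now(timezone.utc).isoformat(),  # ISO8601 with offset, in the past
        "framework": "testng",
    },
)
response.raise_for_status()
test_run_id = response.json()["id"]  # assumption: the generated ID is exposed as "id"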

Test execution start#

In order to report the test execution start event, the following API endpoint should be used. Note that you have to provide the test run ID obtained earlier.

POST /api/reporting/v1/test-runs/{testRunId}/tests

{
    "name": "Test ABC",
    "className": "com.test.MyTests",
    "methodName": "featureTest()",
    "argumentsIndex": 0,
    "startedAt": "2021-06-06T10:12:15.444+01:00",
    "maintainer": "johndoe",
    "labels": [
        {
            "key": "feature",
            "value": "List Users"
        },
        {
            "key": "priority",
            "value": "High"
        }
    ],
    "testGroups": [
        "Regression",
        "Smoke"
    ]
}

Request payload

  • name [mandatory] - test display name. Some test frameworks allow adding meaningful display names for test methods.

  • className [mandatory] - test class/file name. Not every programming language requires functions to be placed in classes. Since this value is used for grouping tests, a meaningful alternative to a class name can be provided (e.g. some sort of location grouping the tests).

  • methodName [mandatory] - test method name.

  • argumentsIndex [optional] - index of the arguments passed into a parametrized test. The very first index can be 0, 1 or any other number - Zebrunner does not impose any restriction.

  • startedAt [mandatory] - an ISO8601 timestamp with an offset from UTC. Represents the test execution start. The supplied value must be a timestamp in the past.

  • maintainer [optional] - username of the Zebrunner user responsible for the maintenance of this test. If provided, additional features will be available via the Zebrunner UI (see the Zebrunner docs). The default value is anonymous. If there is no user with the given username in Zebrunner, anonymous will also be set as the maintainer.

  • labels [optional] - an array of arbitrary key-value pairs that should be attached to the test execution. A label may contain any data associated with the test, such as its priority, feature, group of tests, etc. Each supplied label must contain a non-blank key and value.

  • testGroups [optional] - an array of test groups the test belongs to.

The server response contains the generated test ID, which should be used when reporting a subsequent update or finish.
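
A corresponding sketch for reporting a test start, under the same assumptions (placeholder hostname, ID exposed as id in the response):

from datetime import datetime, timezone
import requests

ZEBRUNNER_HOST = "https://mycompany.zebrunner.com"      # placeholder hostname
AUTH_HEADERS = {"Authorization": "Bearer <authToken>"}
test_run_id = 1                                         # from the test run start response

response = requests.post(
    f"{ZEBRUNNER_HOST}/api/reporting/v1/test-runs/{test_run_id}/tests",
    headers=AUTH_HEADERS,
    json={
        "name": "Test ABC",
        "className": "com.test.MyTests",
        "methodName": "featureTest()",
        "startedAt": datetime.now(timezone.utc).isoformat(),
    },
)
response.raise_for_status()
test_id = response.json()["id"]  # assumption: the generated test ID is exposed as "id"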

Test execution start - headless option#

Note: this endpoint was designed for a very specific use case. We recommend that you use the “normal” test execution start reporting API wherever possible.

Sometimes you might need a test ID to be generated by Zebrunner before the actual test execution starts. This can be useful when your test framework supports handlers (e.g. @BeforeMethod in TestNG) that allow you to perform preparation operations before the actual test starts. Since those handlers are usually executed before every test method, they can fairly be treated as belonging to the test execution scope - and it can be really important to track artifacts produced during their execution. It is not always possible to supply all the data Zebrunner would typically expect on test method start, so a special lightweight API was designed for this purpose:

POST /api/reporting/v1/test-runs/{testRunId}/tests?headless=true

{
    "name": "headless test",
    "startedAt": "2021-06-06T10:12:15.444+01:00"
}

  • name [optional] - preliminary test display name. May contain the name of the currently executing “before” method.

  • startedAt [mandatory] - an ISO8601 timestamp with an offset from UTC. Represents the test execution start. The supplied value must be a timestamp in the past.

Test execution update - headless option#

Note: this endpoint was designed for a very specific use case. We recommend that you use “normal” test execution start reporting API wherever possible.

In order to update a started headless test execution, the following API endpoint should be used (note that every headless test must be fulfilled using this endpoint before the test finishes):

PUT /api/reporting/v1/test-runs/{testRunId}/tests/{testId}?headless=true

{
    "name": "Test ABC",
    "className": "com.test.MyTests",
    "methodName": "featureTest()",
    "argumentsIndex": 0,
    "maintainer": "johndoe",
    "labels": [
        {
            "key": "feature",
            "value": "List Users"
        },
        {
            "key": "priority",
            "value": "High"
        }
    ]
}

Request payload

  • name [mandatory] - test display name. Some test frameworks allow adding meaningful display names for test methods.

  • className [mandatory] - test class/file name. Not every programming language requires functions to be placed in classes. Since this value is used for grouping tests, a meaningful alternative to a class name can be provided (e.g. some sort of location grouping the tests).

  • methodName [mandatory] - test method name.

  • argumentsIndex [optional] - index of the arguments passed into a parametrized test. The very first index can be 0, 1 or any other number - Zebrunner does not impose any restriction.

  • maintainer [optional] - username of the Zebrunner user responsible for the maintenance of this test. If provided, additional features will be available via the Zebrunner UI (see the Zebrunner docs). The default value is anonymous. If there is no user with the given username in Zebrunner, anonymous will also be set as the maintainer.

  • labels [optional] - an array of arbitrary key-value pairs that should be attached to the test execution. A label may contain any data associated with the test, such as its priority, feature, group of tests, etc. Each supplied label must contain a non-blank key and value.
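
To make the two-step headless flow concrete, here is a minimal sketch under the same assumptions as the earlier snippets: register a headless test from a "before" handler, then fulfill it once the real test method is known.

from datetime import datetime, timezone
import requests

ZEBRUNNER_HOST = "https://mycompany.zebrunner.com"      # placeholder hostname
AUTH_HEADERS = {"Authorization": "Bearer <authToken>"}
test_run_id = 1                                         # from the test run start response

# Step 1: register the headless test before the actual test method starts.
response = requests.post(
    f"{ZEBRUNNER_HOST}/api/reporting/v1/test-runs/{test_run_id}/tests",
    params={"headless": "true"},
    headers=AUTH_HEADERS,
    json={"name": "setUp()", "startedAt": datetime.now(timezone.utc).isoformat()},
)
response.raise_for_status()
test_id = response.json()["id"]  # assumption: the generated test ID is exposed as "id"

# Step 2: fulfill the headless test once the real test method is known.
requests.put(
    f"{ZEBRUNNER_HOST}/api/reporting/v1/test-runs/{test_run_id}/tests/{test_id}",
    params={"headless": "true"},
    headers=AUTH_HEADERS,
    json={
        "name": "Test ABC",
        "className": "com.test.MyTests",
        "methodName": "featureTest()",
    },
).raise_for_status()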

Revert test registration#

In some cases it might be handy to revert the registration of a test execution in Zebrunner. This may be required under very special circumstances of the test environment or execution conditions.

The following API endpoint should be used when an already started test execution should not be registered in Zebrunner (note that you have to provide the testRunId returned by the API that registered the test run execution start and the testId returned by the API that registered the test execution start):

DELETE /api/reporting/v1/test-runs/{testRunId}/tests/{testId}

Test execution finish#

In order to report the test execution finish event, the following API endpoint should be used (note that you have to provide the testRunId returned by the API that registered the test run execution start and the testId returned by the API that registered the test execution start):

PUT /api/reporting/v1/test-runs/{testRunId}/tests/{testId}

{
    "result": "PASSED",
    "reason": "NullPointerException: bla bla bla",
    "endedAt": "2021-06-06T10:12:15.555+01:00"
}

Request payload

  • result [mandatory] - result of the test execution. One of the following values will be accepted by the API (otherwise an error will be thrown): PASSED, FAILED, ABORTED, SKIPPED.

  • reason [optional] - a message explaining the result, e.g. an exception message, a stacktrace, or some other meaningful comment.

  • endedAt [mandatory] - an ISO8601 timestamp with an offset from UTC. Represents the test execution finish. The supplied value must be a timestamp in the past.
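
A minimal sketch of the finish call, reusing the placeholders from the earlier snippets:

from datetime import datetime, timezone
import requests

ZEBRUNNER_HOST = "https://mycompany.zebrunner.com"      # placeholder hostname
AUTH_HEADERS = {"Authorization": "Bearer <authToken>"}
test_run_id, test_id = 1, 2                             # from earlier responses

requests.put(
    f"{ZEBRUNNER_HOST}/api/reporting/v1/test-runs/{test_run_id}/tests/{test_id}",
    headers=AUTH_HEADERS,
    json={
        "result": "PASSED",  # one of PASSED, FAILED, ABORTED, SKIPPED
        "endedAt": datetime.now(timezone.utc).isoformat(),
    },
).raise_for_status()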

Test run execution finish#

In order to report the test run execution finish event, the following API endpoint should be used (note that you have to provide the ID returned by the API that registered the test run execution start):

PUT /api/reporting/v1/test-runs/{id}

{
    "endedAt": "2021-06-06T10:12:15.666+01:00"
}

Request payload

  • endedAt [mandatory] - an ISO8601 timestamp with an offset from UTC. Represents the test run execution finish. The supplied value must be a timestamp in the past.
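
And the corresponding sketch for closing out the run itself:

from datetime import datetime, timezone
import requests

ZEBRUNNER_HOST = "https://mycompany.zebrunner.com"      # placeholder hostname
AUTH_HEADERS = {"Authorization": "Bearer <authToken>"}
test_run_id = 1                                         # from the test run start response

requests.put(
    f"{ZEBRUNNER_HOST}/api/reporting/v1/test-runs/{test_run_id}",
    headers=AUTH_HEADERS,
    json={"endedAt": datetime.now(timezone.utc).isoformat()},
).raise_for_status()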

Collecting test run artifacts#

Currently it is possible to collect the following artifacts, which can later be accessed via the Zebrunner UI:

  • logs (applicable to test executions) - archived by Zebrunner.

  • screenshots (applicable to test executions) - archived by Zebrunner.

  • artifacts (applicable to test run executions and test executions) - files that should be stored by Zebrunner.

  • artifact references (applicable to test run executions and test executions) - any other kinds of artifacts that are referenced by hyperlinks. Zebrunner does not store those artifacts, only links to them. It is up to users to decide which artifacts should be referenced and to make sure that those are accessible.

Sending test execution logs#

It is possible to start sending test execution logs as soon as you have the testRunId and testId returned by the corresponding Zebrunner API, meaning that you can send this data in real time. Note that you can publish multiple logs belonging to different tests at once. It is recommended to implement some sort of client-side batching to reduce IO and avoid API RPS limits (see the sketch after the example below). The following API endpoint should be used for this purpose:

POST /api/reporting/v1/test-runs/{testRunId}/logs

[{
    "testId": "12345",
    "level": "INFO",
    "timestamp": "1588866687",
    "message": "log message 01"
}, {
    "testId": "12345",
    "level": "INFO",
    "timestamp": "1588866689",
    "message": "log message 02"
}, {
    "testId": "12345",
    "level": "INFO",
    "timestamp": "1588866704",
    "message": "log message 03"
}]
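
Since client-side batching is recommended, here is one possible (purely illustrative) buffering approach; the flush threshold of 50 entries is an arbitrary choice, and timestamps follow the epoch format shown in the sample payload above:

import time
import requests

ZEBRUNNER_HOST = "https://mycompany.zebrunner.com"      # placeholder hostname
AUTH_HEADERS = {"Authorization": "Bearer <authToken>"}
test_run_id, test_id = 1, "12345"                       # from earlier responses

log_buffer = []

def log(level, message):
    # Accumulate entries client-side instead of making one HTTP call per log line.
    log_buffer.append({
        "testId": test_id,
        "level": level,
        "timestamp": str(int(time.time())),  # epoch timestamp string, as in the sample above
        "message": message,
    })
    if len(log_buffer) >= 50:  # arbitrary flush threshold
        flush_logs()

def flush_logs():
    # Publish all buffered entries in a single batched request.
    if log_buffer:
        requests.post(
            f"{ZEBRUNNER_HOST}/api/reporting/v1/test-runs/{test_run_id}/logs",
            headers=AUTH_HEADERS,
            json=log_buffer,
        ).raise_for_status()
        log_buffer.clear()

Remember to flush any remaining buffered entries before reporting the test execution and test run finish.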

Collecting test screenshots#

When it comes to testing user interfaces, it is usually handy to capture steps with a sequence of screenshots. Zebrunner allows you to send those screenshots and access them in test reports via the UI. You have to supply the testRunId and testId returned by the corresponding Zebrunner API in order to save screenshots, meaning that, similarly to logs, you can start sending those immediately once the test execution start is reported. The following API endpoint should be used for this purpose:

POST /api/reporting/v1/test-runs/{testRunId}/tests/{testId}/screenshots

Content-Type: image/png
x-zbr-screenshot-captured-at: <epoch-millis>

raw bytes

The max screenshot size is 5MB.
The x-zbr-screenshot-captured-at header is an optional custom application header. If it is not provided, it will be generated by the Zebrunner back-end. However, its accuracy determines where in the test view the screenshot will be placed, so it is strongly recommended to supply this value from the client.
Screenshots can only be uploaded one at a time; there is no batch support at the moment.
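
A sketch of a single screenshot upload under the same placeholder assumptions; note the raw (non-multipart) body and the epoch-millis header:

import time
import requests

ZEBRUNNER_HOST = "https://mycompany.zebrunner.com"      # placeholder hostname
AUTH_HEADERS = {"Authorization": "Bearer <authToken>"}
test_run_id, test_id = 1, 2                             # from earlier responses

with open("step_1.png", "rb") as f:  # hypothetical screenshot file, max 5MB
    png_bytes = f.read()

requests.post(
    f"{ZEBRUNNER_HOST}/api/reporting/v1/test-runs/{test_run_id}/tests/{test_id}/screenshots",
    headers={
        **AUTH_HEADERS,
        "Content-Type": "image/png",
        # Capture time in epoch millis; best supplied from the client for accurate placement.
        "x-zbr-screenshot-captured-at": str(int(time.time() * 1000)),
    },
    data=png_bytes,  # raw bytes in the request body, one screenshot per request
).raise_for_status()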

Publishing test run artifacts#

It is possible to publish artifacts that were produced by a test run. The following API should be used for this purpose:

POST /api/reporting/v1/test-runs/{testRunId}/artifacts

The request body for this endpoint should be a multipart request with a file part containing the file to be published and attached to the specified test run.
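
With requests, the multipart upload can be sketched as follows (the file name is hypothetical); the same pattern applies to the test-level endpoint described in the next section:

import requests

ZEBRUNNER_HOST = "https://mycompany.zebrunner.com"      # placeholder hostname
AUTH_HEADERS = {"Authorization": "Bearer <authToken>"}
test_run_id = 1                                         # from the test run start response

with open("report.html", "rb") as f:  # hypothetical artifact file
    requests.post(
        f"{ZEBRUNNER_HOST}/api/reporting/v1/test-runs/{test_run_id}/artifacts",
        headers=AUTH_HEADERS,
        files={"file": ("report.html", f)},  # multipart request with a "file" part
    ).raise_for_status()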

Publishing test execution artifacts#

It is possible to publish artifacts that were produced by a test execution. The following API should be used for this purpose:

POST /api/reporting/v1/test-runs/{testRunId}/tests/{testId}/artifacts

The request body for this endpoint should be a multipart request with a file part containing the file to be published and attached to the specified test.

Attaching test run artifact references#

It is possible to attach artifacts referenced by hyperlinks to test runs. Zebrunner does not store those artifacts, only links to them. It is up to users to decide which artifacts should be referenced and to make sure that those are accessible.

The following API should be used for this purpose:

PUT /api/reporting/v1/test-runs/{testRunId}/artifact-references

{
  "items": [
    {
      "name": "google",
      "value": "https://www.google.com/"
    },
    {
      "name": "Zebrunner",
      "value": "https://zebrunner.com/"
    }
  ]
}

The request body contains an items array of name-value pairs, each describing an arbitrary link. Those references will be attached to the specified test run.

Attaching test execution artifact references#

It is possible to attach artifacts referenced by hyperlinks to test executions. Zebrunner does not store those artifacts, only links to them. It is up to users to decide which artifacts should be referenced and to make sure that those are accessible.

The following API should be used for this purpose:

PUT /api/reporting/v1/test-runs/{testRunId}/tests/{testId}/artifact-references

{
  "items": [
    {
      "name": "google",
      "value": "https://www.google.com/"
    },
    {
      "name": "Zebrunner",
      "value": "https://zebrunner.com/"
    }
  ]
}

The request body contains an items array of name-value pairs, each describing an arbitrary link. Those references will be attached to the specified test execution.

Attaching test run labels#

It is possible to attach test run labels during the test run execution. The following API should be used for this purpose:

PUT /api/reporting/v1/test-runs/{testRunId}/labels

{
  "items": [
    {
      "key": "feature",
      "value": "List Users"
    },
    {
      "key": "priority",
      "value": "High"
    }
  ]
}

The request body contains an items array of key-value pairs that should be attached to the test run. A label may contain arbitrary data, such as the run's priority, the feature it relates to, a group of tests, etc.

Attaching test execution labels#

It is possible to attach test execution labels separately from reporting the test execution start. This can be useful when the actual label value is available or calculated only at runtime. The following API should be used for this purpose:

PUT /api/reporting/v1/test-runs/{testRunId}/tests/{testId}/labels

{
  "items": [
    {
      "key": "feature",
      "value": "List Users"
    },
    {
      "key": "priority",
      "value": "High"
    }
  ]
}

The request body contains an items array of key-value pairs that should be attached to the test execution. A label may contain arbitrary data, such as the test's priority, the feature it relates to, a group of tests, etc.

Test sessions reporting#

In order to get the maximum out of reporting, it should be possible to track test sessions and their artifacts as well. Since it cannot be guaranteed for every test framework or Selenium Hub that such data can be collected, there is no hard link between tests and test sessions.

Test session start#

In order to report session start, the following API should be used:

POST /api/reporting/v1/test-runs/{testRunId}/test-sessions

{
  "sessionId": "4d23af61-f666-4758-b3ec-1b8ba129c0822",
  "initiatedAt": "2021-06-06T10:12:15Z",
  "startedAt": "2021-06-06T10:12:16Z",
  "status": "RUNNING",
  "desiredCapabilities": {
    "browserName": "firefox",
    "platformName":"macos"
  },
  "capabilities": {
    "browserName": "firefox",
    "platformName":"macos"
  },
  "testIds": [
    123215,
    123216,
    123217,
    123218
  ]
}

Request payload

  • sessionId [optional] - test session UUID, typically generated by the execution platform on session creation. A session in RUNNING status must have this value.

  • initiatedAt [mandatory] - an ISO8601 timestamp with an offset from UTC. Represents the test session initialization time. The supplied value must be a timestamp in the past. Each session, regardless of status, must have this value.

  • startedAt [optional] - an ISO8601 timestamp with an offset from UTC. Represents the test session start. The supplied value must be a timestamp in the past. A session in RUNNING status must have this value.

  • desiredCapabilities [mandatory] - a JSON object with the capabilities that were passed to the driver session execution environment.

  • status [optional] - status of the test session. Can be either RUNNING or FAILED. The default value is RUNNING.

  • capabilities [optional] - a JSON object with the capabilities that were returned by the driver session execution environment. A session in RUNNING status must have this value.

  • testIds [optional] - an array of test IDs that this session relates to.
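
A sketch of registering a running session and capturing its ID, assuming - as with the other resources - that the generated ID is returned in an id field:

from datetime import datetime, timezone
import requests

ZEBRUNNER_HOST = "https://mycompany.zebrunner.com"      # placeholder hostname
AUTH_HEADERS = {"Authorization": "Bearer <authToken>"}
test_run_id = 1                                         # from the test run start response

now = datetime.now(timezone.utc).isoformat()
response = requests.post(
    f"{ZEBRUNNER_HOST}/api/reporting/v1/test-runs/{test_run_id}/test-sessions",
    headers=AUTH_HEADERS,
    json={
        "sessionId": "4d23af61-f666-4758-b3ec-1b8ba129c082",  # UUID from your driver/grid
        "initiatedAt": now,
        "startedAt": now,
        "status": "RUNNING",
        "desiredCapabilities": {"browserName": "firefox", "platformName": "macos"},
        "capabilities": {"browserName": "firefox", "platformName": "macos"},
    },
)
response.raise_for_status()
test_session_id = response.json()["id"]  # assumption: the session ID is exposed as "id"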

Register failed to start sessions#

In some cases a test session could not be initiated, e.g. due to a network issue or expired credentials. Zebrunner allows tracking such test sessions as well. The API is a stripped-down version of the regular test session start API. The sessionId, startedAt and capabilities fields may be omitted, since a test session that has not started may not have such properties. Also, you must explicitly supply status == FAILED. Test sessions with status == FAILED cannot be finished using the API described in the next section.

POST /api/reporting/v1/test-runs/{testRunId}/test-sessions

{
  "initiatedAt": "2021-06-06T10:12:15Z",
  "status": "FAILED",
  "desiredCapabilities": {
    "browserName": "firefox",
    "platformName":"macos"
  },
  "testIds": [
    123215,
    123216,
    123217,
    123218
  ]
}

Request payload

  • initiatedAt [mandatory] - an ISO8601 timestamp with an offset from UTC. Represents the test session initialization time. The supplied value must be a timestamp in the past.

  • status [mandatory] - status of the test session. For failed test sessions, the value must be FAILED. For a successfully started session, the value is RUNNING.

  • desiredCapabilities [mandatory] - a JSON object with the capabilities that were passed to the driver session execution environment.

  • testIds [optional] - an array of test IDs that this session relates to.

Linking test executions to sessions#

It is possible to link a test session to test executions at any moment in time using the following API (note that you have to provide the testRunId and testSessionId returned by the Zebrunner API):

PUT /api/reporting/v1/test-runs/{testRunId}/test-sessions/{testSessionId}

{
  "testIds": [
    123215,
    123216,
    123217,
    123218
  ]
}

Test session finish#

In order to report the session finish, the following API should be used (note that you have to provide the testRunId and testSessionId returned by the Zebrunner API):

PUT /api/reporting/v1/test-runs/{testRunId}/test-sessions/{testSessionId}

{
  "endedAt": "2021-06-06T10:12:18Z",
  "testIds": [
    123215,
    123216,
    123217,
    123218
  ]
}

Request payload

  • endedAt [mandatory] - an ISO8601 timestamp with an offset from UTC. Represents the test session finish time. The supplied value must be a timestamp in the past.

  • testIds [optional] - an array of test IDs that this session relates to.
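
A final sketch that finishes the session and links the tests it served, under the same placeholder assumptions:

from datetime import datetime, timezone
import requests

ZEBRUNNER_HOST = "https://mycompany.zebrunner.com"      # placeholder hostname
AUTH_HEADERS = {"Authorization": "Bearer <authToken>"}
test_run_id, test_session_id = 1, 3                     # from earlier responses

requests.put(
    f"{ZEBRUNNER_HOST}/api/reporting/v1/test-runs/{test_run_id}/test-sessions/{test_session_id}",
    headers=AUTH_HEADERS,
    json={
        "endedAt": datetime.now(timezone.utc).isoformat(),
        "testIds": [123215, 123216],  # optional: tests served by this session
    },
).raise_for_status()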