Matrix42 Self-Service Help Center

Testing your Extension

This is a step-by-step guide on how to test your Digital Workspace Platform Extension.


Set up your Tests

To write tests, you first need to set up your local environment:

  1. Inside your Configuration Project, create a Tests folder that will hold your test files and their dependencies.

  2. Use npm to create a package.json inside the Tests folder.
    > npm init
  3. Install testcafe as a development dependency using npm.
    > npm i -D testcafe
  4. Optionally install the test runtime for common logic and utilities.
    > npm i @matrix42/extensions-test-runtime

Writing your first Test

Below you will find an example test file which includes one test to verify the existence of the "Home" navigation item in the Self Service Portal.

You can read more about how to write tests in the testcafe documentation.

import { Selector } from 'testcafe'
import * as runtime from '@matrix42/extensions-test-runtime'

fixture`Example Tests`.beforeEach(async (t) => {
  // this call ensures that there is a valid access token for the current test session
  await runtime.ensureSessionToken()
})

test('Home button exists in Self Service Portal', async (t) => {
  // navigate to the Self Service Portal
  await t.navigateTo(`https://${runtime.executionContext.dwpHost}/wm/app-SelfServicePortal`)
  // get the text value of the home navigation item
  // ('' is a placeholder -- insert the selector that matches the item)
  const result = await Selector('').innerText
  // expect the text value to be "Home"
  await t.expect(result).eql('Home')
})

Test files should always end with .test.js

Running your Tests locally

To run your tests locally and see if they succeed you can use the Matrix42 Command-Line Interface (CLI).

  1. Install the CLI using npm.
    > npm i -g @matrix42/cli
  2. Navigate to your Tests folder and run the CLI.
    > m42 run-extension-tests .\*.test.js -h <host> -t <token> [-b <browsers>] [-o <output>] [-s] [-l]

host: The hostname of the DWP Environment you want to run the tests against.

token: The API token of the DWP principal you want to run the tests with.

browsers: A comma-separated list of browsers you want to run the tests on.

output: The output path where the test reports are saved.

s: If specified, screenshots are recorded when a test fails and saved to the /screenshots folder in the output path.

l: If specified, the tests will not run in Docker containers but directly on the executing machine.

You can now see the results of your tests in the console or you can inspect the *_report.json files in the output path.

When running tests without Docker (by setting the -l flag), you need to make sure that the required browser version is installed on the executing machine.

Generally, we do not recommend running tests without Docker.

Defining the Browsers List for your Tests

There are different ways to define which browsers and versions your tests should run on.

The easiest way is to specify a comma separated list of browsers and versions as the -b (--browsers) parameter of the run-extension-tests CLI command:

> m42 run-extension-tests ... -b chrome:95,firefox,edge:96

This would run your tests in Google Chrome version 95.0, Mozilla Firefox at the latest available version, and Microsoft Edge version 96.0.
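The browser:version format above can be illustrated with a small parser (an illustrative helper, not part of the Matrix42 CLI):

```javascript
// Parse a comma-separated browsers list such as "chrome:95,firefox,edge:96"
// into { key, version } pairs; entries without a version default to "latest".
// Illustrative helper only -- not part of the Matrix42 CLI.
function parseBrowsersList(list) {
  return list.split(',').map((entry) => {
    const [key, version] = entry.trim().split(':')
    return { key, version: version || 'latest' }
  })
}
```

For the example above, chrome resolves to version 95 while firefox falls back to the latest available version.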

Currently we only support the following browsers:

Key       Browser            Supported Versions
chrome    Google Chrome
firefox   Mozilla Firefox
edge      Microsoft Edge

When setting the -l (--local) flag on the run-extension-tests command, the browser version is determined solely by the version installed on the executing machine.

A more sophisticated way to define the browsers you want to run your tests on is to define a browser list.
You can do this by creating a .browserslistrc file or by extending the package.json in your tests folder:

> 1%
last 2 versions
not dead

For example, this configuration runs your tests on all supported browsers that are still maintained and used by more than one percent of all users globally.
It also restricts the runs to the last two browser versions that match the other criteria.
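Extending the package.json in your Tests folder with a browserslist field (a standard browserslist convention) would look like this; the name value is just a placeholder:

```json
{
  "name": "tests",
  "browserslist": [
    "> 1%",
    "last 2 versions",
    "not dead"
  ]
}
```

A .browserslistrc file would instead contain the three query lines, one per line, without any JSON wrapping.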

You can read more about the options to configure your browsers list in the browserslist documentation.

Prepare the Test Studio for your Tests in the DWP

To run your tests on a DWP Server and see if they succeed you can install the Matrix42 Extension Test Studio via the Matrix42 Extension Gallery.
After the installation, the file system on the DWP server must be prepared for the Test Studio so that it can find and execute the Test Files.

  1. The previously created Tests folder, which contains the Test Files and the Node.js package.json, needs to be placed on the DWP Server at the following path.

    "Matrix42 Workplace Management\Extensions\<PackageId>\<Version>\"

    PackageId: The Id for an Extension which can be found inside the package.json of the Configuration Package.

    Version: The currently installed version of the Configuration Package.

    e.g. the Tests for the first TP of the Extension Test Studio are located at

    "Matrix42 Workplace Management\Extensions\0a85e0ac-922c-4536-9043-161dcc67a8a7\\Tests"
  2. The Test Studio needs a valid API Token for the environment to login for the Test Runs.
    Write the valid API Token to a new config.json file:
      { "ApiToken": "<token>" }

    and place this file on the DWP Server at

    "Matrix42 Workplace Management\ExtensionTestStudio\config.json"

The Extension is now ready to be tested on the DWP by the Matrix42 Extension Test Studio.

Running your Tests in the DWP

After finishing the previous preparation steps, you can start creating and executing Test Runs for your Extension.

  1. Head to the Administration Page on the DWP and navigate to "Extensions > Test Studio > Test Runs". There you can click on "Add Test Run".
  2. When creating the new Test Run, you need to specify the associated Configuration Package. The Test Studio will then load any Test Files it finds on the file system that belong to the specified Configuration Package. You can then select the Test Files that should be executed and confirm the selection by clicking on "Run Tests".
  3. The status and results of the Test Run will then be visible when selecting the newly created Test Run.