Subset

A subset is a set of tests dynamically selected from a larger test suite using Predictive Test Selection.

'Dynamically selected' means that the tests returned in a subset change based on the request parameters.

Properties

A subset is the output of a subset request made using the smart-tests subset CLI command. You make a subset request every time you want to run a subset of tests in your CI pipeline:

[Diagram: the subset request and its inputs/outputs]

A subset request takes various inputs:

  1. The Build being tested

  2. A subset optimization target

  3. The test runner you use

  4. The input test list: the complete list of tests that would typically run in a non-subset session (or "full run")

And outputs:

  1. A subset list of tests formatted for your test runner

  2. [Optional] The remainder list of tests formatted for your test runner
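The request/response shape above can be pictured as a pipeline. The following is a stand-in sketch, not the real CLI: the actual request is made with smart-tests subset, and here a plain head command fakes the selection step so the data flow is visible (all file and test names are invented):

```shell
# Stand-in sketch of the subset flow; `head`/`tail` fake the selection
# service, and the file names are made up for illustration.

# Input test list: everything a full (non-subset) run would execute
printf '%s\n' login_test cart_test search_test profile_test > full_list.txt

# "Subset request" -- the real command is `smart-tests subset ...`;
# here we pretend the service selected the first two tests
head -n 2 full_list.txt > subset.txt

# Optional remainder list: the tests not selected for the subset
tail -n +3 full_list.txt > remainder.txt

cat subset.txt      # this list is what you pass to your test runner
```

Running the subset list and then (optionally) the remainder list covers the same tests as a full run, just split into two sessions.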

Build being tested

When you request a subset of tests to run in your CI process, you pass in the name of the Build you’re testing:

smart-tests subset --build $BUILD_NAME --session $SESSION_NAME [other options…​]

This is important so that the Predictive Test Selection service can analyze the changes in the build and select tests appropriately.

Optimization target

When you request a subset of tests to run in your CI process, you include an optimization target:

smart-tests subset \
  --target [PERCENTAGE] \   # or: --confidence [PERCENTAGE], or: --time [STRING]
  [other options...]

Test runner

When you request a subset of tests, specify the test runner you will use to run them. This value should be the same in both the smart-tests subset and smart-tests record tests commands.

The CLI uses this parameter to adjust three things automatically:

  1. Input test list format

  2. Subset altitude

  3. Output test list format

Input test list format

The complete list of tests you would typically run is a crucial input to any subset request. CloudBees Smart Tests uses this list to create a subset of tests.

How this list is generated, formatted, and passed into smart-tests subset depends on the test runner. In general, you don’t have to worry about creating this list; the documentation for each test runner covers the specific flow for your tool.

However, for completeness, we’ll outline the various methods used across test runners:

  1. Some test runners can generate a list of tests via a particular command. The output of this command is then passed into smart-tests subset.

  2. Other test runners don’t provide that feature. In that case, you pass the directory/directories containing your tests into smart-tests subset . The CLI then creates the list of tests by scanning those directories and identifying tests using pattern-matching.

  3. Furthermore, some frameworks can list individual tests only after compiling test packages. In this case, generating a list of higher-level packages can be preferable to listing individual test cases. (This relates to the next section.)
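Method 2 above can be pictured with a plain find command. This is an invented sketch: the tests/ layout and the *_test.py filename pattern are made-up examples, and the real CLI's matching rules vary per test runner:

```shell
# Hypothetical sketch of building an input test list by scanning a
# directory; the layout and the *_test.py pattern are made up.
mkdir -p tests
touch tests/a_test.py tests/b_test.py tests/helper.py   # helper.py is not a test

# Identify tests by pattern-matching, the way the CLI does when a runner
# cannot enumerate its own tests
find tests -name '*_test.py' | sort > input_list.txt
cat input_list.txt
```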

Subset altitude and test items

To run a subset of tests, you pass the returned subset list into your test runner for execution.

Each test runner has an option for specifying a list of tests to run, and these options allow for different 'altitudes' of filtering. For example, some test runners only let you pass in a list of files to run, others support filtering by class, while some support filtering by test case or method.

Based on the test runner specified in smart-tests subset , the CLI automatically outputs a list of tests using the hierarchy level supported by that test runner.

Another factor that impacts subset altitude is whether the test runner and CLI can list tests at a low altitude (see the previous section).

For example, Maven supports filtering by class, so we say that Maven’s subset altitude is class . A test item for Maven is equivalent to a class. Test results captured using smart-tests record tests for Maven will include both class and test case identifiers, but the output of smart-tests subset will include a list of classes.
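The class-versus-test-case distinction can be sketched with made-up identifiers: recorded results carry Class#method names, but a class-altitude subset collapses them to unique classes. The mapping below is illustrative only; the real CLI performs it internally:

```shell
# Illustrative only; the identifiers are invented.
printf '%s\n' \
  'com.example.FooTest#testA' \
  'com.example.FooTest#testB' \
  'com.example.BarTest#testC' > cases.txt

# Strip the "#method" suffix and de-duplicate:
# test-case altitude -> class altitude
sed 's/#.*//' cases.txt | sort -u > classes.txt
cat classes.txt
```

Three recorded test cases collapse to two class-level test items, which is the granularity a class-altitude runner like Maven consumes.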

Here’s the mapping for all test runners:

| Test runner | Altitude |
| --- | --- |
| Android Compatibility Suite (CTS) | Class |
| Android Debug Bridge (adb) | Class |
| Ant | Class |
| Bazel | Target |
| Behave | File |
| Ctest | Test case |
| cucumber | File |
| Cypress | File |
| dotnet test | Test case |
| Go Test | Test case |
| GoogleTest | Test case |
| Gradle | Class |
| Jest | File |
| Maven | Class |
| minitest | File |
| Nunit Console Runner | Test case |
| pytest | Test case |
| Robot | Test case |
| RSpec | File |

Output test list format

To run a subset of tests, you pass the returned subset list into your test runner for execution.

Each test runner has a method or option for specifying a list of tests to run. For example, one test runner might expect a comma-delimited list of tests, whereas another might expect a list separated by spaces, etc.

The CLI adjusts the output format automatically based on the test runner used in the request. In general, you don’t need to worry about the output format because you’ll pass it directly into your test runner per the documentation for your tool. But this does mean that the contents of subset files/outputs change based on the test runner value.
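The formatting differences are mechanical. As an invented example, the same three-item subset could be rendered comma-delimited for one runner and space-delimited for another (the real CLI handles this for you based on the runner you name):

```shell
# Made-up subset list; only the delimiter changes per runner
printf '%s\n' FooTest BarTest BazTest > subset.txt

paste -sd, subset.txt            # a runner expecting "FooTest,BarTest,BazTest"
tr '\n' ' ' < subset.txt; echo   # a runner expecting space-separated items
```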

Input test list

As described above, the complete list of tests you typically run is a crucial input to any subset request. CloudBees Smart Tests uses this list to create a subset of tests.

This list is essential because it can change between requests due to:

  • new tests

  • sub-suites being tested (see [Sub-suites within larger test suites](/docs/concepts/workspace/#sub-suites-within-larger-test-suites))

  • multiple test runner invocations per test session (see [Static bins](/docs/concepts/test-session/#static-bins))

In general, you don’t have to worry about creating the input test list, but it’s important to understand this concept because it relates to your optimization target. See Choosing a subset optimization target for more on this.

With Zero Input Subsetting, CloudBees Smart Tests generates the input test list for you.