Two ways to start
- Observation mode (recommended) – run CloudBees Smart Tests alongside your usual CI runs for a short window to generate a Confidence Curve, which shows how much test time can be safely saved.
- Immediate use – go live with subsetting from day 1 when you need quick wins and are willing to iterate based on real results.
Before you start
As you set up your workspace, make sure you have completed the prerequisites – identify a use-case for PTS, review the technical requirements, and tune optional configurations to your needs – so you can begin smoothly.
NOTE: A CloudBees Smart Tests Sales Engineer will help you evaluate which use-case fits best based on your problem statement and setup. Our team will also assist you in implementing best practices and any configuration setup you may need.
Step 1: Identify a use-case
First, pick a problem you want to solve for your team with CloudBees Smart Tests. Choose a use-case (listed below) that resonates, and dig deeper to see if it fits your needs.
An existing test suite may be taking a long time to run. Some teams' capacity for executing tests is limited, yet the demand to run tests is high. Even where test execution capacity is scalable and elastic, the number of tests is so large that it costs too much money.
To that end, the team would want to shorten the execution time of that test suite. Read more about it here.
Feedback on changes may also be coming in too late because the tests sit towards the right of your delivery pipeline (e.g. UI tests, E2E tests, nightly tests) and run infrequently (for example, once every 3 days). In several teams, another common challenge is that the main/dev branch is too unstable, creating significant overhead for QA engineers who have to deal with failures.
Read in detail about the use-cases for Predictive Test Selection.
Step 2: Review technical requirements
Next, review the must-have tooling and environment requirements under which CloudBees Smart Tests is supported. CloudBees Smart Tests will not run without them.
Language support
- Python 3 and Pip3: the CloudBees Smart Tests CLI is a Python 3 package, and you need Pip3 to install it
- Java 8+
Version control system
- Git (we work with all popular Git systems like GitHub, Bitbucket, and GitLab)
- NOTE: Git optimization tools may add additional complexity
Internet access
- CloudBees Smart Tests is a SaaS service, and the CLI needs access to the internet.
- Team enabled to edit the CI script: CloudBees Smart Tests is integrated into the CI script. In some larger teams, access to edit CI scripts is restricted to another team. Ensure that, as you trial CloudBees Smart Tests, you can edit the script.
Supported test frameworks
- Here’s a list of supported test frameworks
Test results in binary (true/false)
- CloudBees Smart Tests doesn’t support tests that don’t report their results in binary form (usually performance tests).
Step 3: Requirements for Predictive Test Selection
Here, we will take you through a couple of must-have requirements for running PTS, along with best practices and a checklist to help your team build a mental map for PTS.
- No inter-test dependencies: PTS re-orders tests, so tests that depend on other tests may not work when run at a “higher” priority than the tests they depend on; such tests need to be redefined or re-arranged.
- Test framework supports test file mapping: when CloudBees Smart Tests returns a subset, it identifies the tests by test file (in most cases, one per line), so the underlying test framework needs to support linking a test back to a file and test mappings. The list of supported frameworks is given [here](#).
NOTE: Before you proceed, make sure you have read the best practices & checklist. There are several important considerations for your team here.
Step 4: Configuration options for your use-case
As you think about bringing in CloudBees Smart Tests, there are a few configuration topology options to think about.
NOTE: Each of these configuration options may come in handy for your team. We advise you to read through the configurations and understand their implementation as well.
Observation mode
When to consider? To quantify PTS' impact before changing your CI behaviour. It’s ideal for teams that want to build confidence, evaluate accuracy, and present measurable ROI before rollout.
Typical duration: 1–2 weeks’ worth of test run data
Step 1: Integrate with CI & verify data flow
Configure CloudBees Smart Tests CLI in your CI so it can start collecting build & test run data.
What to do?
- Set up your workspace & obtain an API key
- Edit your CI script to integrate the CloudBees Smart Tests CLI (refer to the CI integration blueprint)
- Install the CLI & verify the installation
- Record build information
- Record the test session
- Record test results after the test run job (a minimal sketch of these commands follows)
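Putting these steps together, here is a minimal sketch of what the CI script changes might look like, adapted from the CI integration blueprint at the end of this page. BUILD_NAME, SESSION_NAME, your_API_key, and <TEST_RUNNER> are placeholders for your own values, and the exact options depend on your test runner:

```
# install the CLI, authenticate with your workspace API key, and verify the setup
uv tool install smart-tests
export SMART_TESTS_TOKEN=your_API_key
smart-tests verify

# record build information for this CI run
smart-tests record build --build BUILD_NAME

# record a test session tied to that build
smart-tests record session --build BUILD_NAME --session SESSION_NAME

<< run your tests as usual >>

# record test results after the test run job
smart-tests record tests <TEST_RUNNER> --session SESSION_NAME
```

Note that the subset command is not introduced yet – that happens in Step 2.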
Expected outcome: You can view the test results in the web-app once the CI runs:
- Open your workspace in the web-app
- Navigate to the Test sessions tab
- Check your latest recorded run
Step 2: Turn on Observation mode
Enable CloudBees Smart Tests in passive mode to analyze your tests without changing which tests actually run.
What to do?
- Update your CI script to introduce the subset command (refer to the CI integration blueprint)
- Add the --observation option (see the sketch below)
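For reference, a minimal sketch of the updated subset command, assuming the form used in the CI integration blueprint; the 90% confidence value and the placeholders are illustrative only:

```
# request a subset in observation mode – results are analyzed, but the full test suite still runs
smart-tests subset --confidence 90% --observation --session SESSION_NAME <TEST_RUNNER> [OPTIONS]
```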
Expected outcome: You can view potential time savings in the web-app once the CI runs:
- Open your workspace in the web-app
- Navigate to the Predictive Test Selection → Observe tab
- Open your latest recorded run
- Note the Observation mode tag on the detail page
Step 3: Record at least 20 test runs
Now that the CI is configured correctly in observation mode, you just need to wait for CloudBees Smart Tests to collect enough data. With that data, we can quantitatively analyze the effectiveness of test selection – the result is called the Confidence Curve.
The Confidence Curve helps you evaluate how effective the model is at predicting failing tests, weighed against how much time can be saved on every test run.
Step 4: Review the Confidence Curve
Assess how much test time CloudBees Smart Tests could safely skip based on recorded execution history.
What to do?
NOTE: The Smart Tests team will schedule a walkthrough session to evaluate the Confidence Curve and interpret its results.
- Interpret the Confidence Curve
  - What is it? This curve shows how subset size (the number of tests selected) changes with TTFF (Time To First Failure). It helps visualize how quickly failures are found as you increase the number of tests executed.
  - X-axis → TTFF, measured in %/min, indicating how quickly a failure is detected relative to total test execution time
  - Y-axis → Subset size, the proportion or number of tests included in the subset
- Identify a suitable Optimization target for subsetting
Expected outcome: You’re able to choose a suitable Optimization target using Confidence, Time, or Target to add to your subset command (e.g. 90% confidence, 50% target)
Step 5: Remove observation flag & go live
Start using it in production with an optimization target that fits your use-case and the evaluation results from the previous step.
What to do?
- Update the CI to remove the --observation flag from the subset command
- Add an optimization target option to the subset command, such as --confidence 90%, --target 50%, or --time 30m (see the sketch below)
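For reference, a minimal sketch of the go-live change, again assuming the form used in the CI integration blueprint; the values are illustrative and should be replaced with the optimization target you chose from your Confidence Curve review:

```
# --observation removed: only the subset returned by CloudBees Smart Tests is executed
smart-tests subset --confidence 90% --session SESSION_NAME <TEST_RUNNER> [OPTIONS]

# alternative optimization targets mentioned above (use one per subset command):
#   --target 50%
#   --time 30m
```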
Expected outcome: Only the tests provided by the CloudBees Smart Tests subset are executed when the CI runs. You can view results in the web-app:
- Open your workspace in the web-app
- Navigate to the Predictive Test Selection → Analyze tab
- Open your latest recorded run
- Note the Time saved tag, along with the rest of the details about your subset run, on the detail page
- Additionally, you can view your optimization target, the tests included in the subset, and the ones that were left out
Immediate use
When to consider? You want immediate time savings and are comfortable iterating in production. Ideal for teams that are looking to rapidly onboard and see quick wins.
Typical duration: Immediate rollout with ongoing iteration over the first few sprints.
Step 1: Integrate with CI & verify data flow
Configure CloudBees Smart Tests CLI in your CI so it can start collecting build & test run data.
What to do?
- Set up your workspace & obtain an API key
- Edit your CI script to integrate the CloudBees Smart Tests CLI (refer to the CI integration blueprint)
- Install the CLI & verify the installation
- Record build information
- Record the test session
- Record test results after the test run job
Expected outcome: You can view the test results in the web-app once the CI runs:
- Open your workspace in the web-app
- Navigate to the Test sessions tab
- Check your latest recorded run
Step 2: Go live with a conservative optimization target & iterate
Start using it in production with a conservative optimization target and iterate as you go.
What to do?
- Update your CI script to introduce the subset command (refer to the CI integration blueprint)
- Add an optimization target option to the subset command (see the sketch below)
  - Choose a conservative target for your first few runs, such as --confidence 95% or --target 80%
- Modify the optimization target based on feedback from your first few subset runs
  - You may choose to decrease confidence in favour of more time savings
  - You may prefer to increase confidence in favour of more coverage
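For reference, a minimal sketch of a conservative first rollout, assuming the form used in the CI integration blueprint; the targets are illustrative and should be tuned based on your first few subset runs:

```
# start with a conservative optimization target (e.g. --confidence 95% or --target 80%)
smart-tests subset --confidence 95% --session SESSION_NAME <TEST_RUNNER> [OPTIONS]

# then iterate based on feedback from the first few runs:
#   decrease confidence for more time savings, or increase it for more coverage
```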
Expected outcome: Only the tests provided by the CloudBees Smart Tests subset are executed when the CI runs. You can view results in the web-app:
- Open your workspace in the web-app
- Navigate to the Predictive Test Selection → Analyze tab
- Open your latest recorded run
- Note the Time saved tag, along with the rest of the details about your subset run, on the detail page
- Additionally, you can view your optimization target, the tests included in the subset, and the ones that were left out
CI integration blueprint
```
# install, authenticate & verify
uv tool install smart-tests
export SMART_TESTS_TOKEN=your_API_key
smart-tests verify

# record build
smart-tests record build --build BUILD_NAME [OPTIONS]

# record session
smart-tests record session --build BUILD_NAME --session SESSION_NAME [OPTIONS]

# subset
smart-tests subset --confidence 90% --session SESSION_NAME <TEST_RUNNER> [OPTIONS]

<< run tests >>

# record tests
smart-tests record tests <TEST_RUNNER> --session SESSION_NAME [OPTIONS]
```