Value stream concepts

You can use a value stream to:

  • Track changes.

  • Detect stalled and delayed changes.

  • Identify failures and blockages.

  • Find components that are ready for testing.

  • View contributing components.

  • Document and understand how software flows through your system.

Value stream modeling lets you see a visual representation of your software delivery processes so you can analyze and improve them. When you model a value stream, you set up phases and gates.

A phase is a group of activities that constitute major milestones in the software development lifecycle, such as ideation, design, development, and delivery.

A gate is a visual representation of one of the activities, such as build or test, that enables work to flow continuously through a value stream. A gate reflects the location of value in a value stream. DevOptics defines value as commits, which are represented by Jira® tickets. Each gate in a value stream contains the runs and tickets that have not yet moved to the next gate. Additionally, each gate is clearly marked with the number of tickets and commits that are in the gate, as well as the amount of time since the last run.

You can model an unlimited number of value streams with both the DevOptics Free version and the DevOptics Premium version. However, the DevOptics Premium version is required to see the number of tickets and commits for a gate, and to see value flowing through a value stream.

How tickets and commits are tracked in a value stream

DevOptics tracks tickets and commits as they flow through a value stream.

Tickets

Jira® tickets move from one gate to another as the artifacts that are created in a gate are delivered to later gates. Tickets remain in a gate until the artifact that was created in that gate is used in a later gate.

Commits

Commits are the changes applied to the software and committed to your SCM. Commits are the basic unit of value tracked by DevOptics. Commits may reference one or more tickets by mentioning the ticket in the commit message. For example, a commit message referring to Jira® ticket EXAMPLE-12345 must include EXAMPLE-12345 in the text.

DevOptics uses the following regex to identify ticket IDs in commit messages:

\b[A-Z]+-[0-9]+\b

When a commit references a ticket, the ticket is assigned to the gate that processes the commit. DevOptics supports only Git commits. Other source code control systems (such as Mercurial or Subversion) are not supported.
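For example, a commit with the following message would be matched and associated with ticket EXAMPLE-12345 (the message text is illustrative):

EXAMPLE-12345 Add retry logic to the login service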

DevOptics can also track unticketed commits. Unticketed commits are the commits that are not associated with tickets.

Prerequisites for creating a value stream

Before you create a value stream, be sure that you have a DevOptics account.

You can model an unlimited number of value streams with the DevOptics Free version.

However, if you want to see metrics and value flowing through the value stream, you must have the following items:

  • Access to the DevOptics Premium edition.

  • The DevOptics plugin installed on all Jenkins masters whose jobs you want to include in your value streams.

The following additional items are not required, but are recommended to make initial setup and testing easier:

  • The ability to trigger Jenkins jobs.

  • The ability to commit code so that it can appear in a value stream.

  • The ability to promote code through various Jenkins jobs to ensure that everything is configured correctly.

Creating a value stream

When you create a value stream, DevOptics automatically sets up a template with three common phases and one unnamed gate in each phase. You can add or remove phases, add or remove gates, and connect gates to create your value stream.

You can also create a value stream by using the JSON editor.

  1. From the Value Streams screen, click Create New.

  2. (Optional) Add a new phase. DevOptics creates three default phases. If you do not require any additional phases, you can skip this step. Be sure to rename the default phases by clicking the phase name and typing a new name.

    1. Click the blue dot next to the phase where you want to add a new phase.

    2. Click +.

    3. Click the phase name.

    4. Type a new name.

    5. Press Enter.

  3. Add a gate. A gate must be in a phase, and each phase can contain more than one gate. Only a downstream gate, which is a gate to the right, can create a connected gate to the left. Only the last gate can create new gates downstream.

    1. To create a connected gate, hover over an existing gate, click + on the side where you want to add a gate, and then drag the line to the correct phase.

    2. Click + to add the gate.

    3. To create an unconnected gate, click + below the name of the phase where you want to add the gate.

  4. Configure each gate: NOTE: Configuring a gate is needed only if you want to see value flowing through the value stream. It is not needed if you only want to model a value stream.

    1. Click the gate.

    2. Click the gear icon.

    3. Add a name for the gate.

    4. Select the name of the Jenkins master that is associated with this gate. NOTE: If the name of the master is not in the list, make sure the plugin is installed on the master and that it is connected properly.

    5. Add the name of the Jenkins job that is associated with this gate.

    6. Verify the phase for this gate.

    7. Select the name of the downstream gate that this gate feeds. A downstream gate is the gate to the right of the current gate.

    8. If this is a deployment gate, check This is a deployment job.

    9. Click Save.

  5. Repeat step 4 for each gate in your value stream.

  6. Click Save.

Managing phases

A phase is a group of activities that constitute major milestones in the software development lifecycle, such as ideation, design, development, and delivery.

Adding a phase

You can add as many phases as you need. Be sure to use meaningful names that represent the major milestones in the software development lifecycle, such as ideation, design, development, and delivery.

  1. From the Value Streams screen, select the value stream where you want to add a phase.

  2. Click Edit.

  3. Click the blue dot next to the phase where you want to add a new phase.

  4. Click +.

  5. Click the phase name.

  6. Type a new name.

  7. Press Enter.

  8. Click Save.

Renaming a phase

You can rename a phase at any time. Be sure to use meaningful names that represent the major milestones in the software development lifecycle, such as ideation, design, development, and delivery.

  1. From the Value Streams screen, select the value stream that contains the phase you want to rename.

  2. Click Edit.

  3. Click the phase name.

  4. Type a new name.

  5. Press Enter.

  6. Click Save.

Deleting a phase

You can delete a phase only if it does not contain any gates. Be sure to delete gates before you delete a phase.

  1. From the Value Streams screen, select the value stream that contains the phase you want to delete.

  2. Click Edit.

  3. Click the phase name.

  4. Click the Delete icon.

  5. Click OK to confirm that you want to delete this phase.

  6. Click Save.

Managing gates

A gate is a visual representation of one of the activities, such as build or test, that enables work to flow continuously through a value stream.

Gate statuses

Each gate in a value stream displays an icon that represents the status of the gate, so you can quickly evaluate the status of your gates.

Statuses are as follows:

Table 1. Gate statuses

  • Green check: The gate does not have any issues.

  • Error: Issues are present at the gate.

  • Running: A job is running at the gate.

  • None: The gate has not been configured yet.

Adding a connected gate

You can create a new gate as a connection from an existing gate. When you hover over a gate you can see the possible new connections.

  1. From the Value Streams screen, select the value stream where you want to add a gate.

  2. Click Edit.

  3. Hover over an existing gate.

  4. Click + on the side where you want to add a gate.

  5. Drag the line to the correct phase.

  6. Click + to add the gate.

  7. Configure the gate.

  8. Click Save.

Adding an unconnected gate

You can create gates that are not connected to any existing gates, for example, if you want to have an independent gate or create a sub-stream. After you add an unconnected gate, you can create new downstream or upstream gates from this initial gate.

  1. From the Value Streams screen, select the value stream where you want to add a gate.

  2. Click Edit.

  3. Click + below the name of the phase where you want to add the gate.

  4. Configure the gate.

  5. (Optional) Add new connected downstream or upstream gates.

  6. Click Save.

Configuring a gate

To enable tickets and commits to flow through a gate, you must select the Jenkins master and the Jenkins job to associate with the gate. Additionally, name the gate, select a downstream gate that the gate feeds, and indicate if the gate is a deployment gate.

  1. From the Value Streams screen, select the value stream that contains the gate you want to edit.

  2. Click Edit.

  3. Click the gate.

  4. Click the Settings icon.

  5. Add a name for the gate.

  6. Select the name of the Jenkins master that is associated with this gate. NOTE: If the name of the master is not in the list, make sure the plugin is installed on the master and that it is connected properly.

  7. Add the name of the Jenkins job that is associated with this gate.

  8. Verify the phase for this gate.

  9. Select the name of the downstream gate that this gate feeds. A downstream gate is the gate to the right of the current gate.

  10. If this is a deployment gate, check This is a deployment job.

  11. Click Save on the Configure this gate dialog box.

  12. Click Save at the top of the value stream dialog box.

Marking a gate as a deployment gate

Marking a gate as a deployment gate allows you to see the deployment frequency of that gate. The deployment frequency metric shows the frequency of successful runs at deployment gates. If you set up multiple gates in the same value stream to be deployment gates, the value stream metric for deployment frequency is an aggregation of all deployment gates in that value stream.

  1. From the Value Streams screen, select the value stream that contains the gate you want to edit.

  2. Click Edit.

  3. Click the gate.

  4. Click the Settings icon.

  5. Check This is a deployment job.

  6. Click Save on the Configure this gate dialog box.

  7. Click Save at the top of the value stream dialog box.

Renaming a gate

You can rename a gate at any time.

  1. From the Value Streams screen, select the value stream that contains the gate you want to rename.

  2. Click Edit.

  3. Click the name of the gate.

  4. Type a new name.

  5. Press Enter.

  6. Click Save.

Deleting a gate

When you delete a gate, you can no longer see metrics for that gate. After you delete a gate you may need to add connections between the remaining gates.

  1. From the Value Streams screen, select the value stream that contains the gate you want to delete.

  2. Click Edit.

  3. Click the gate.

  4. Select the Delete icon.

  5. Click Save.

Editing gate details

You can change the name, the Jenkins master and the job that is associated with the gate, the phase, and the gate that is fed by this gate. You can also mark the gate as a deployment gate or remove the deployment label from the gate.

  1. From the Value Streams screen, select the value stream that contains the gate you want to edit.

  2. Click Edit.

  3. Click the gate.

  4. Click the gear icon.

  5. Edit the gate details.

  6. Click Save on the Configure this gate dialog box.

  7. Click Save at the top of the value stream dialog box.

Connecting gates

Only a downstream gate (a gate to the right) can create a connected gate to the left.

  1. From the Value Streams screen, select the value stream that contains the gates you want to connect.

  2. Click Edit.

  3. Click the gate where you want to start the connection.

  4. Click + on the side where you want to add the connection.

  5. Drag the connection line to the gate you want to connect to.

  6. Click to add the connection.

  7. Click Save.

Deleting a connection between gates

If you delete a connection between gates, you can no longer see data flowing between those gates.

  1. From the Value Streams screen, select the value stream that contains the connection you want to delete.

  2. Click Edit.

  3. Click the connection line that you want to delete.

  4. Click the red scissors icon.

  5. Click Save.

Finding the job or Jenkins master associated with a gate

Follow these steps to see which job or Jenkins master is associated with a specific gate.

  1. In a value stream, select a gate.

  2. In the right pane, scroll down to the Gate details section.

Finding tickets and commits

Details about tickets and commits are available only in the Premium version of DevOptics.

Searching for Jira® tickets in a value stream

You can search for a Jira® ticket to see where it is located in the value stream. When you search for a ticket, DevOptics highlights the gates where the ticket is located and collapses all other gates. This lets you easily identify where a ticket is in the value stream.

To search for a ticket in a value stream:

  1. From the Value Streams screen, select the name of the value stream that you want to search.

  2. In the Search value stream field, type the Jira® ticket number, and then press Enter.

Finding unticketed commits in a gate

DevOptics surfaces Jira® tickets as they move through the value stream. Tickets are the main work artifact in DevOptics. However, not all organizations follow strict processes to associate commits with tickets. To help you understand where all the work that moves through the software development lifecycle is located, DevOptics also surfaces commits that are not associated with tickets in gates and in value streams.

Commits that are not associated with a ticket are aggregated and shown within a run so that you don’t lose track of work that was processed.

  1. In a value stream, select a gate.

  2. In the right pane, in the Current Tickets section, locate the run that contains the unticketed commit.

  3. Select an unticketed commit message for a run to see details about the unticketed commits for that run.

Finding the status of tickets in a value stream

  1. In a value stream, select a gate.

  2. In the right pane, locate the Current Tickets section to see all current tickets for the gate.

  3. (Optional) In Filter tickets, enter text to locate tickets that contain specific words.

DevOps performance metrics

DevOptics calculates and displays a set of key metrics, as popularized in the Annual State of DevOps Report. The State of DevOps report is the industry guide on CD and DevOps adoption and its correlation to improved organizational performance.

These DevOps performance metrics allow you to objectively and reliably measure and monitor improvements to your software delivery capability.

The availability of these metrics within DevOptics brings several important benefits:

  • Continuously updated data to drive improvement across teams

  • Trustworthy dashboards to guide informed decisions for better business outcomes

  • Data-driven discovery and use of best practices across teams

The following metrics are available for both value streams and gates:

  • Deployment frequency (DF)

  • Mean lead time (MLT)

  • Mean time to recovery (MTTR)

  • Change failure rate (CFR)

Metrics are calculated on a continual basis for value streams defined in DevOptics.

Additionally, the following metrics are available for gates only:

  • Mean queue time

  • Mean processing time

  • Mean idle time

  • Run activity

You can select one of the following time periods to view metrics:

  • last 24 hours - this is the default

  • last 48 hours

  • last 7 days

  • last 14 days

  • last 30 days

  • last 90 days

Deployment frequency (DF)

This metric shows the frequency of successful runs of any gates that are identified as deployment gates in the value stream definition. Where multiple deploy gates exist in a value stream, the value stream metric is an aggregation.

This metric is available in both the DevOptics Free and Premium versions.

Note that high performers deploy more often.

A gate must be marked as a deployment gate before the deployment frequency can be calculated for that gate. To do this, edit the gate and check the option This is a deployment job.

Deployment frequency is computed as follows:

Deployment frequency (DF) of a deploy gate = Count of successful deploys / number of days.

Deployment frequency (DF) of a value stream = Count of successful deploys of all deploy gates / number of days.
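For example, a value stream whose two deploy gates record 9 and 12 successful deploys over a 30-day period has a deployment frequency of (9 + 12) / 30 = 0.7 deploys per day.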

Mean lead time (MLT)

This metric shows the mean time for a ticket and associated commits to successfully flow through a gate. At the value stream level it is the mean time for a ticket and associated commits to successfully flow from their entry point in the value stream to their final gates.

This metric is available only in the Premium version of DevOptics.

Note that high performers have lower mean lead times.

Mean lead time is computed as follows:

For an individual gate, DevOptics computes the lead time (LT) of commits in that gate. Lead time is computed as follows:

Lead time (LT) = Time when a commit exited the gate - Time when a commit entered the gate.

Mean lead time (MLT) of a value stream = Mean of all lead times in a value stream.
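For example, a commit that enters a gate at 09:00 and exits at 15:00 has a lead time of 6 hours; the value stream MLT is the mean of all such lead times.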

Mean time to recovery (MTTR)

This metric shows the mean time it takes for a gate to return to a successful state from when it enters an unsuccessful state. It is also aggregated to the value stream level.

This metric is available only in the Premium version of DevOptics.

Note that high performers recover from failure faster.

Mean time to recovery is computed as follows:

For an individual gate, DevOptics computes the time to recovery (TTR) of failures in that gate. TTR is computed as follows:

Time to recovery (TTR) = End time of the most recent successful build - End time of the first of a consecutive sequence of recent failures.

A gate is considered in the failed state if the underlying job is in one of the following states:

  • Failure

  • Unstable

Mean time to recovery (MTTR) of a gate = Mean of all TTRs of the gate.

Mean time to recovery (MTTR) of a value stream = Mean of the MTTRs of all gates in the value stream.
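For example, if a run fails at 10:00, a retry also fails at 11:00, and the next successful build completes at 12:30, the TTR for that failure sequence is 12:30 - 10:00 = 2.5 hours.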

Change failure rate (CFR)

This metric shows the percentage of unsuccessful runs in a gate that are caused by new changes. It is also aggregated to the value stream level.

This metric is available only in the Premium version of DevOptics.

Note that high performers are less likely to introduce a failure with any change.

The change failure rate is computed as follows:

For an individual gate, DevOptics computes the change failure rate as follows:

Change failure rate (CFR) = Total number of unsuccessful runs of a gate caused by new changes, as a percentage of the total number of runs of the gate.

Change failure rate (CFR) of a value stream = The total number of unsuccessful runs of the value stream as a percentage of the total number of runs of the value stream.
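For example, a gate with 50 runs in the selected period, 5 of which were unsuccessful because of new changes, has a CFR of 5 / 50 = 10%.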

Mean queue time

This metric shows the amount of time commits spend in the queue before the job starts running.

This metric is available only in the Premium version of DevOptics.

Note that high performers have lower mean queue times.

This metric is available only at the gate level. It is not available at the value stream level.

The mean queue time is computed as follows:

For an individual gate, DevOptics computes the queue time of commits in that gate. Queue time is computed as follows:

Queue time (QT) = The time when the commit started processing - The time when the commit entered the gate.

Mean queue time (MQT) = Mean of all queue times.
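For example, a commit that enters the gate at 09:00 and starts processing at 09:12 has a queue time of 12 minutes.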

Mean processing time

This metric shows the amount of time it takes from when a job starts to when it completes successfully. The processing time can span multiple runs that fail until the job is part of a successful run.

This metric is available only in the Premium version of DevOptics.

Note that high performers have lower mean processing times.

This metric is available only at the gate level. It is not available at the value stream level.

The mean processing time is computed as follows:

For an individual gate, DevOptics computes the process time of commits in that gate. Process time is computed as follows:

Process time (PT) = Time when the commit was processed successfully - The time when the commit started being processed.

Mean process time (MPT) = Mean of all process times.
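For example, a commit that starts processing at 09:12 and is part of a successful run that completes at 09:42 has a process time of 30 minutes, even if intermediate runs containing it failed.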

Mean idle time

This metric shows the amount of time a successful commit spends in a gate after it is processed and before it is picked up by a downstream gate. If the gate is the last gate in a value stream, the idle time is zero.

This metric is available only in the Premium version of DevOptics.

Note that high performers have lower idle times.

This metric is available only at the gate level. It is not available at the value stream level.

The mean idle time is computed as follows:

For an individual gate, DevOptics computes the idle time of commits in that gate. Idle time is computed as follows:

Idle time (IT) = Time when a commit passed to the next gate - The time when the commit was processed successfully.

Mean idle time (MIT) = Mean of all idle times.
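For example, a commit that is processed successfully at 09:42 and is picked up by the downstream gate at 10:42 has an idle time of 1 hour.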

Run activity

This metric shows the mean number of runs at a gate per day. This metric appears for all non-deployment gates.

Viewing metrics for a value stream

You can view metrics for all value streams from the Value Streams screen. Or, you can select a specific value stream to see metrics, including sparkline graphics for each metric. By default, DevOptics shows metrics for the last 24 hours. You can select a different time range from the drop-down list above the metrics.

  1. Do one of the following:

    • To see an overview of the metrics for all of your value streams, go to the Value Streams screen. Click a column name to sort. Select the drop-down menu next to Aggregate metrics over to select a different time range.

    • To see an overview of a specific value stream, on the Value Streams screen, click the name of the value stream. Select the drop-down menu next to Metrics for this value stream over to select a different time frame.

Viewing metrics for a gate

You can view the individual metrics for any gate in a value stream. By default, DevOptics shows metrics for the last 24 hours. You can select a different time frame from the drop-down list above the metrics.

  1. From the Value Streams screen, select the value stream that contains the gate for which you want to view metrics.

  2. Click the gate for which you want to view metrics. Select the drop-down menu next to Metrics for this value stream over to select a different time frame.

Downloading metrics to a CSV file

DevOptics lets you download value stream and gate level metrics to a .csv file for additional analysis and reporting. The metrics are grouped per day so you can analyze how these metrics changed over time.

  1. From the Value Streams screen, select the value stream for which you want to download metrics.

  2. Do one of the following:

    • To download metrics for the value stream, click Download Metrics (CSV) on the Value Stream Overview window.

    • To download metrics for a gate, select the gate, and then click Download Metrics on the Gate metrics window.

  3. Select the metrics and the date range for the metrics, and then click Export Metrics (CSV).

  4. Go to your Downloads folder to locate the file. The file name begins with devoptics-metrics.

Viewing metrics trends

DevOptics surfaces how metrics changed over time. This allows you to analyze the impact of changes and initiatives that you made in your processes. You can view metrics trends at the value stream level and at the gate level.

DevOptics computes the metrics trends based on the timeframe that you select.

  • 1 day, 2 days, and 7 days - The metrics trends are rolled up by the hour, so each data point is the value of the metric for that hour.

  • 14 days, 30 days, 90 days - The metrics trends are rolled up by the day, so each data point is the value of the metric for that day.

Follow these steps to view the value stream metrics trends:

  1. From the Value Streams screen, select a value stream.

  2. To change the timeframe for the metrics, select a time in Metrics for this Value Stream over.

If you are using the Free version of DevOptics, you can select up to 7 days. Up to 90 days of data is available only in the Premium version of DevOptics.

How to identify waste and low performance in a value stream

Gate waste insights provide a way for you to visualize waste and low performance in a value stream. This lets you compare performance metrics and identify where to focus your attention to improve your software delivery process.

This feature is available only in the Premium version of DevOptics.

When you select a value stream, the Gate waste insights section of the Value Stream Overview side panel lets you toggle between failing gates and slow gates. The Failing gates view shows the five gates that have the highest total failure time during the selected time period. The total failure time shows the total time spent queueing and processing runs that failed. The Slow gates view shows the five gates that have the highest total lead times during the selected time period. Available time periods include 24 hours, 48 hours, and 7, 14, 30, and 90 days. To view the details for a gate in the list, click the name of the gate.

How to use artifacts to show commits and tickets flowing through gates

DevOptics tracks value moving between upstream and downstream gates in a Value Stream in the following ways:

  • By explicitly tracking and matching SCM commits that are checked out by the upstream and downstream gates. For example, both gates check out different branches of the same SCM repository and see the same commits as the code is merged across branches. An example of this is GitFlow.

  • By tracking artifacts that are produced upstream and consumed downstream by unique ID.

Artifact concepts

An artifact is anything that is produced or consumed by a run of a CI/CD job, such as:

  • File based artifacts , such as .jar, .exe, .sh, etc.

  • Docker images.

  • Amazon Machine Images (AMIs).

  • VMware images.

  • Installable packages, such as RPM or Debian packages.

  • NPM packages.

  • A shared "fact" instead of a concrete artifact, such as a file or an image. A shared fact does not fall into any of the categories listed above. It is any unique identifier or key that both the upstream and downstream gates can easily create or resolve, allowing DevOptics to link the appropriate runs of each job and promote value between the two gates.

There are two general categories from a produces/consumes perspective:

  • Artifacts that can be referenced using a file path.

  • Remote artifacts that are stored in an artifact repository, for example a Maven repository or a Docker registry. These are often referenced via a URL or a coordinate, such as org/name/version.

How to track the production/consumption of artifacts

Use of Jenkins fingerprinting for artifact tracking in DevOptics is deprecated. Support for it will soon be withdrawn. Instead, use the value stream Artifact Tracking features described in this section.

In order to track artifacts, the upstream Jenkins job run needs to inform Deliver that it has produced a specific artifact (by unique ID), while the downstream Jenkins job run needs to inform Deliver that it has consumed a specific artifact (by unique ID). Once Deliver has matched the produced and consumed unique IDs, it can determine what value (such as the SCM commits and tickets delivered to the upstream gate by the run that produced the artifact) can be promoted to the downstream gate via the job run that consumed the artifact.

The important part of artifact tracking is the unique ID:

  • The run that produces the artifact needs to derive a unique ID for the artifact and use that ID when informing Deliver that it produced the artifact.

  • The run that consumes the artifact needs to derive a unique ID for the artifact and use that ID when informing Deliver that it consumed the artifact.

The most important point here is that the runs producing and consuming the artifacts need to have a scheme/mechanism that generates the same unique ID. The possibilities depend largely on whether the artifacts can be referenced by file paths or not (see Artifact concepts):

  • File-based artifacts: Generating a unique ID for a file-based artifact is a simple process of computing a checksum of the file contents. For example, the DevOptics Job hooks for Jenkins Pipeline and Freestyle job types use SHA-512. More on this in later sections.

  • Non-file-based artifacts: Generating a unique ID for a non-file-based artifact requires more consideration. See Unique Artifact ID construction.

DevOptics pipeline steps

The DevOptics pipeline steps are an extension to the Jenkins Pipeline. They allow a Jenkins pipeline to explicitly declare that it produces artifacts, or consumes the changes made by other Jenkins jobs.

DevOptics gateProducesArtifact step

The DevOptics gateProducesArtifact step is a pipeline step that allows a Jenkins pipeline to explicitly declare that it "produces" an artifact that can be "consumed" by a downstream job (gate), for example via the gateConsumesArtifact pipeline step (for pipeline jobs), or via the Freestyle Job build step.

This step allows your pipeline to explicitly define the artifacts that it produces and that you want DevOptics to track. Explicitly defining the artifacts you want DevOptics to track allows you to more accurately follow work as it moves across your value streams.

Produce a specific artifact with a known ID

Use the step in this form as follows:

gateProducesArtifact id: '<id>', type: '<type>', label: '<label>'
id

The ID you have assigned to the artifact you want to produce. This ID should match the ID used in a gateConsumesArtifact step call. This ID can be whatever identification scheme you use for artifacts. The only requirement is that the ID is unique within the context of your DevOptics organization. See Unique Artifact ID construction.

type

The type of artifact you are producing. Common values are file, docker, rpm. This type value should match the type value in a gateConsumesArtifact step call. This type can be whatever name you use for classifying artifact types.

label

(Optional) A readable label, providing contextual information about the artifact produced. This label should be human readable as it will be used in the DevOptics UI.
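For example, here is a minimal sketch of the known-ID form, using a hypothetical acme-server RPM whose full package name serves as the unique ID:

gateProducesArtifact id: 'acme-server-1.4.2-7.x86_64', type: 'rpm', label: 'acme-server RPM'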

Produce a specific file artifact

In order to notify DevOptics that this run produces a file:

gateProducesArtifact file: '<file>', type: '<type>', label: '<label>'
file

The file within the workspace that you want to notify DevOptics about. This will hash the file to produce an ID.

type

(Optional) The type of artifact you are producing. Common values are file, docker, rpm. This type value should match the type value in a gateConsumesArtifact step call. This type can be whatever name you use for classifying artifact types. If it is not defined, it defaults to file.

label

(Optional) A readable label, providing contextual information about the artifact produced. This label should be human readable as it will be used in the DevOptics UI.

Example

Here is an example Jenkinsfile scripted pipeline that produces a plugin-a.txt and notifies DevOptics about it.

// Jenkinsfile scripted pipeline
node {
    stage ('checkout') {
        checkout scm
    }
    stage ('build') {
        // Creates a file called plugin-a.txt. Using git rev-parse HEAD
        // here because it will generate a new artifact when the HEAD ref
        // commit changes. You could also just echo a timestamp, or something else.
        sh "git rev-parse HEAD > plugin-a.txt"
        // Records plugin-a.txt as a produced artifact.
        archiveArtifacts artifacts: 'plugin-a.txt'
    }
    stage ('produce') {
        // Notify DevOptics that this run produced plugin-a.txt.
        gateProducesArtifact file: 'plugin-a.txt'
    }
}
DevOptics gateConsumesArtifact step

The DevOptics gateConsumesArtifact step is a pipeline extension that allows a Jenkins pipeline to explicitly declare that it consumes artifacts that have been marked as produced by the gateProducesArtifact step.

This step allows your pipeline to explicitly define the artifacts that it consumes and that you want DevOptics to track. Explicitly defining the artifacts you want DevOptics to track allows you to more accurately follow work as it moves across your value streams.

Consume a specific artifact with a known ID

Use the step in this form as follows:

gateConsumesArtifact id: '<id>', type: '<type>'
id

The ID you have assigned to the artifact you want to consume. This ID must match the ID used in a gateProducesArtifact step. This ID can be whatever identification scheme you use for artifacts. The only requirement is that the ID is unique within the context of your DevOptics organization. See Unique Artifact ID construction.

type

The type of artifact you are consuming. Common values are file, docker, rpm. This type value should match the type value in a gateProducesArtifact step call. This type can be whatever name you use for classifying artifact types.
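For example, here is a minimal sketch of the known-ID form for a downstream job, consuming the hypothetical acme-server RPM declared in the produce example above:

gateConsumesArtifact id: 'acme-server-1.4.2-7.x86_64', type: 'rpm'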

Consume a specific file artifact

In order to consume a file within the workspace:

gateConsumesArtifact file: '<file>', type: '<type>'
file

The file within the workspace you want to consume. This will hash the file to produce an ID.

type

(Optional) The type of artifact you are consuming. Common values are file, docker, rpm. This type value should match the type value in a gateProducesArtifact step call. This type can be whatever name you use for classifying artifact types. If it is not defined, it defaults to file.

Example

Here is an example Jenkinsfile scripted pipeline that consumes a plugin-a.txt artifact, notifies DevOptics about it, then produces a plugin-b.txt artifact and notifies DevOptics about it.

// Jenkinsfile scripted pipeline
node {
    stage ('checkout') {
        checkout scm
    }
    stage ('build') {
        // Copies the artifacts of plugin-a/master (plugin-a.txt) into this workspace.
        copyArtifacts projectName: 'plugin-a/master'
        // Notify DevOptics that this run consumed plugin-a.txt.
        gateConsumesArtifact file: 'plugin-a.txt'
        // Creates a file called plugin-b.txt. Using git rev-parse HEAD
        // here because it will generate a new artifact when the HEAD ref
        // commit changes. You could also just echo a timestamp, or something else.
        sh "git rev-parse HEAD > plugin-b.txt"
        // Records plugin-b.txt as a produced artifact.
        archiveArtifacts artifacts: 'plugin-b.txt'
    }
    stage ('produce') {
        // Notify DevOptics that this run produced plugin-b.txt.
        gateProducesArtifact file: 'plugin-b.txt'
    }
}
DevOptics gateConsumesRun step

This step is a DevOptics pipeline extension that allows a Jenkins pipeline to explicitly declare that it consumes the changes (commits and Issue Tracker tickets) made by another Jenkins job upstream from it in a CD pipeline process.

Before using this step, consider using the gateConsumesArtifact and gateProducesArtifact steps to track artifacts instead. This step is intended for use in those edge cases where artifact tracking is not easy/possible.

This step allows your pipeline to explicitly define a run of an upstream Jenkins job via the job name, run ID and master URL.

Consume a specific upstream job run

Use the step in this form as follows:

gateConsumesRun masterUrl: '<master-url>', jobName: '<job-name>', runId: '<run-id>'
masterUrl

(Optional) The exact URL of the Jenkins master hosting the upstream job. This is the same URL used in the upstream gate configuration in the DevOptics application. If it is not defined, it defaults to the URL of the master running the pipeline; that is, it assumes the upstream job is on the same Jenkins master.

jobName

The exact name of the upstream job you want to consume from.

runId

(Optional) The ID of the upstream job run to be consumed. This can come from a job parameter or from a job trigger. If not defined, it defaults to the runId of the last successful run of the upstream job.
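For example, here is a minimal sketch of a downstream deploy pipeline. The master URL, job name, and UPSTREAM_RUN_ID parameter are hypothetical, and the sketch assumes the upstream job passes its run ID to this job as a build parameter:

// Downstream deploy pipeline (illustrative names).
node {
    stage ('deploy') {
        // Declare that this run consumes the changes delivered by the given upstream run.
        // UPSTREAM_RUN_ID is assumed to arrive as a build parameter from the upstream job.
        gateConsumesRun masterUrl: 'https://jenkins.example.com/', jobName: 'acme-app-build', runId: params.UPSTREAM_RUN_ID
        // ... deployment steps ...
    }
}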

withMaven pipeline step

The DevOptics plugin includes an integration with the Pipeline Maven plugin (https://plugins.jenkins.io/pipeline-maven/). This integration allows the DevOptics plugin to automatically notify DevOptics about the dependencies used and the artifacts produced by a Maven build that is executed from within a withMaven pipeline step.

To use this feature, Jenkins must have both the DevOptics plugin and the Pipeline Maven plugin installed.
Example

Consider the following pom files. There is a plugin-a and a plugin-b. plugin-b uses plugin-a as a dependency:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.cloudbees.devoptics</groupId>
    <artifactId>plugin-a</artifactId>
    <packaging>jar</packaging>
    <version>1.0-SNAPSHOT</version>
    <name>plugin-a</name>
</project>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.cloudbees.devoptics</groupId>
    <artifactId>plugin-b</artifactId>
    <packaging>jar</packaging>
    <version>1.0-SNAPSHOT</version>
    <name>plugin-b</name>
    <dependencies>
        <dependency>
            <groupId>com.cloudbees.devoptics</groupId>
            <artifactId>plugin-a</artifactId>
            <version>1.0-SNAPSHOT</version>
        </dependency>
    </dependencies>
</project>

Plugin A can have a Jenkinsfile scripted pipeline like the following. Notice that no explicit calls to DevOptics are needed:

// Plugin A Jenkinsfile scripted pipeline
node {
    stage ('checkout') {
        checkout scm
    }
    stage ('build') {
        withMaven() {
            sh "mvn clean install"
        }
    }
}

Running this will result in DevOptics being notified about two events:

  • plugin-a pom file as a produced artifact.

  • plugin-a jar file as a produced artifact.

Plugin B can also have a Jenkinsfile scripted pipeline like the following. Again, notice that there are no explicit calls to DevOptics needed:

// Plugin B Jenkinsfile scripted pipeline
node {
    stage ('checkout') {
        checkout scm
    }
    stage ('build') {
        withMaven() {
            sh "mvn clean install"
        }
    }
}

This will result in DevOptics being notified about three events:

  • plugin-a jar file as a consumed artifact.

  • plugin-b pom file as a produced artifact.

  • plugin-b jar file as a produced artifact.

Disabling the integration

The integration with the Pipeline Maven plugin can be disabled in two ways:

  • Disable the integration just for a specific withMaven pipeline step:

    // Pipeline Maven plugin integration disabled
    node {
        stage ('checkout') {
            checkout scm
        }
        stage ('build') {
            withMaven(options: [ gateArtifactPublisher(disabled: true) ]) {
                sh "mvn clean install"
            }
        }
    }
  • Disable the integration globally by going to Jenkins > Manage Jenkins > Global Tool Configuration > Pipeline Maven Configuration > Options.

    If the DevOptics Gate Artifact Publisher is already listed, tick the Disabled tickbox. If it is not already listed, first add it using the Add Publisher Options dropdown, then tick the Disabled tickbox.

Freestyle job build steps

These DevOptics build steps allow a Freestyle job to explicitly declare that it produces artifacts or consumes the changes made by other Jenkins jobs.

Freestyle job build step

The DevOptics build step for Freestyle jobs is called Inform DevOptics of consumed artifact.

This step is a DevOptics extension that allows a Jenkins job to explicitly declare that it consumes artifacts that have been marked as produced by the gateProducesArtifact step.

This step allows your job to explicitly define the artifacts that it consumes and that you want DevOptics to track. Explicitly defining the artifacts you want DevOptics to track allows you to more accurately follow work as it moves across your value streams.

Consume a specific artifact with a known ID

Using the step in this form requires filling out the following fields:

id

The ID you have assigned to the artifact you want to consume. This ID should match the ID used in a gateProducesArtifact step call. This ID can be whatever identification scheme you use for artifacts. The only requirement is that the ID is unique within the context of your DevOptics Organization.

type

The type of artifact you are consuming. Common values are file, docker, rpm. This type value should match the type value in a gateProducesArtifact step call. This type can be whatever name you use for classifying artifact types.

Consume a specific file artifact

In order to consume a file within the workspace:

file

The file within the workspace you want to consume. This will hash the file to produce an ID. Note that the artifact must be in the workspace for this to work; that is, the job may need to "get" the artifact first, for example by copying it from another job run or pulling it from an artifact repository.

type

(Optional) The type of artifact you are consuming. Common values are file, docker, rpm. This type value should match the type value in a gateProducesArtifact step call. This type can be whatever name you use for classifying artifact types. If it is not defined, it defaults to file.

Freestyle job post-build action

The DevOptics post-build action for Freestyle jobs is called Inform DevOptics of produced artifact.

This step is a DevOptics extension that allows a Jenkins job to explicitly declare that it produces artifacts that can be consumed by the gateConsumesArtifact step.

This step allows your job to explicitly define what artifacts it is producing that you are interested in for DevOptics. Explicitly defining the artifacts you want DevOptics to track allows you to more accurately follow work as it moves across your value streams.

Produce a specific artifact with a known ID

Using the step in this form requires filling out the following fields:

id

The ID you have assigned to the artifact you are producing. This ID should match the ID used in a gateConsumesArtifact step call. This ID can be whatever identification scheme you use for artifacts. The only requirement is that the ID is unique within the context of your DevOptics Organization.

type

The type of artifact you are producing. Common values are file, docker, rpm. This type value should match the type value in a gateConsumesArtifact step call. This type can be whatever name you use for classifying artifact types.

label

(Optional) A readable label, providing contextual information about the artifact produced. This label should be human readable as it will be used in the DevOptics UI.

Produce a specific file artifact

In order to notify DevOptics that this run produces a file:

file

The file within the workspace that you want to notify DevOptics about. This will hash the file to produce an ID.

type

(Optional) The type of artifact you are producing. Common values are file, docker, rpm. This type value should match the type value in a gateConsumesArtifact step call. This type can be whatever name you use for classifying artifact types. If it is not defined, it defaults to file.

label

(Optional) A readable label, providing contextual information about the artifact produced. This label should be human readable as it will be used in the DevOptics UI.

Unique Artifact ID construction

When producing or consuming an artifact that can't be referenced using a file path (see Artifact concepts), you need to supply an id.

gateProducesArtifact type: <type>, id: <id>

You can use the image ID, for example a Docker image ID, as the unique ID. Or, you can construct an ID based on a run environment variable or a build parameter, for example:

gateProducesArtifact type: "docker", id: "acme-app-${env.BUILD_ID}"

In the purest sense, using the image ID is the most "correct" thing to do because of the lower risk of creating an id that clashes with an earlier ID.

However, using image IDs can also be troublesome/error-prone if you don’t use them consistently between the producer and the consumer. Getting the image ID can involve adding obscure code to your Jenkinsfile to execute commands to resolve the image ID, for example:

docker images --no-trunc --format='{{.ID}}' acme-image:latest

This code can very easily be executed inconsistently (different switches, etc.) on the produces and consumes side. This results in the creation of inconsistent IDs, and therefore the inability of DevOptics to track value.

If possible, use a scheme whereby the "cryptic" id resolution commands are executed only in the upstream producer, and the id is then "passed" to the downstream consumer via a mechanism that allows the consumer to obtain it at low risk (no cryptic commands that can be executed inconsistently), for example by passing it as a build parameter.

For example, in the upstream producer Jenkinsfile:

// Get the Docker image ID for "acme-image:latest" from the local Docker daemon.
def digest = sh script: "docker images --no-trunc --format='{{.ID}}' acme-image:latest", returnStdout: true
def imageId = digest.trim().split(":")[1]

// Tell Deliver that this job produced the image ...
gateProducesArtifact type: 'docker', id: imageId, label: "acme-image:${imageId}"

// Trigger the deploy job (downstream consumer), passing the imageId as a build parameter.
build job: 'acme-deploy-job', parameters: [string(name: 'imageId', value: imageId)]

And then, in the downstream consumer Jenkinsfile (acme-deploy-job):

// Tell Deliver that this job consumed the image, reading the ID from the build parameter ...
gateConsumesArtifact type: 'docker', id: params.imageId

The key point to note here is that all obscurity is in the upstream producer Jenkinsfile and none in the downstream consumer, reducing the risk of using inconsistent IDs upstream versus downstream.

You also have the option of constructing an ID based on a run environment variable or build parameter, for example:

def imageId = "acme-app-${env.BUILD_ID}"

// Tell Deliver that this job produced the image ...
gateProducesArtifact type: 'docker', id: imageId

// Trigger the deploy job (downstream consumer), passing the imageId as a build parameter.
build job: 'acme-deploy-job', parameters: [string(name: 'imageId', value: imageId)]

If you can guarantee that the IDs produced in such an environment are unique for every build (that is, Jenkins build numbers are never reset), this can be an easier and more practical solution.

In conclusion, you have two choices, and there are trade-offs with each approach: ease of use versus perceived purity.

Creating a value stream by using the JSON Editor

Value streams can also be defined as JSON entities.

To enter JSON editing mode, click the three dots on the top right of your value stream, and then click Edit JSON.

The JSON representation makes it easy to share templates and scaffolds, or to insert generated value streams based on your software delivery system.

A value stream definition requires a list of phases. Each phase can have multiple gates.

{ "phases": [ { "id": "<custom_id_of_phase>", "name": "<name_of_phase>", "gates": [ { ... } ] } ] }

Defining a phase:

{
  "id": "<custom_id_of_phase>",
  "name": "<name_of_phase>",
  "gates": [<gate>]
}
id

Identifier for that phase.

name

(Optional) The name of the phase.

gates

(Optional) List of gates within that phase.

Defining a gate:

{
  "id": "<custom_id_of_gate>",
  "name": "<name_of_gate>",
  "master": "<master_connected_to_gate>",
  "job": "<job_connected_to_gate>",
  "feeds": "<id_of_gate_this_gate_feeds_into>",
  "type": "deployment"
}
id

Identifier for that gate.

name

(Optional) The name of the gate.

master

(Optional) Master that connects to this gate. (Required to see tickets and commits within the gate.)

job

(Optional) Job within the master that connects to this gate. (Required to see tickets and commits within the gate.)

feeds

(Optional) ID of the gate that this gate feeds into. (Not needed for the right-most gate.)

type

(Optional) Set type to deployment if this gate represents a deployment job.

See below for a simple example:

{ "phases": [ { "id": "phase1", "name": "Build", "gates": [ { "id": "gate1", "name": "Untitled Gate", "master": "", "job": "", "feeds": "gate2" } ] }, { "id": "phase2", "name": "Test", "gates": [ { "id": "gate2", "name": "Integration Tests", "master": "", "job": "", "feeds": "gate3" } ] }, { "id": "phase3", "name": "Release", "gates": [ { "id": "gate3", "name": "Untitled Gate", "master": "", "job": "", "type": "deployment" } ] } ] }

Value Stream Templates

Template: Large monolithic system

The software delivery system of a large and complex application usually contains many different components that need to go through rigorous testing and security checks before the release can be built and deployed. Value stream modeling visualizes the dependencies in these processes and surfaces the tickets and commits within the software delivery pipeline. That enables you to see bottlenecks and blockers early and act quickly to remove them and improve the overall system.

DevOptics lets you map all the dependencies of your software delivery processes from build to production.

Here is a JSON representation of the value stream template above. Copy and paste the template into the JSON editor of your value stream to get started with this template.

{ "phases": [ { "id": "dev", "name": "Dev (Build/Test)", "gates": [ { "id": "component_a", "name": "Component A", "master": "", "job": "", "feeds": "component_test_a" }, { "id": "component_b", "name": "Component B", "master": "", "job": "", "feeds": "component_test_b" }, { "id": "component_c", "name": "Component C", "master": " ", "job": "", "feeds": "component_test_c" }, { "id": "component_d", "name": "Component D", "master": "", "job": "", "feeds": "component_test_d" } ] }, { "id": "component_tests", "name": "Component Tests", "gates": [ { "id": "component_test_a", "name": "Component A", "master": "", "job": "", "feeds": "integration" }, { "id": "component_test_b", "name": "Component B", "master": "", "job": "", "feeds": "integration" }, { "id": "component_test_c", "name": "Component C", "master": "", "job": "", "feeds": "integration" }, { "id": "component_test_d", "name": "Component D", "master": "", "job": "", "feeds": "integration" } ] }, { "id": "system_integration", "name": "system Integration", "gates": [ { "id": "integration", "name": "Integration", "master": "", "job": "", "feeds": "integration_tests" } ] }, { "id": "system_tests", "name": "System Tests", "gates": [ { "id": "integration_tests", "name": "Integration Tests", "master": "", "job": "", "feeds": "staging_deploy" } ] }, { "id": "staging", "name": "Staging", "gates": [ { "id": "staging_deploy", "name": "Staging", "master": "", "job": "", "feeds": "production_deploy" } ] }, { "id": "release-promotion", "name": "Production", "gates": [ { "id": "production_deploy", "name": "Release", "master": "", "job": "", "type": "deployment" } ] } ] }

Template: Microservice system

When you deliver your application through multiple loosely coupled microservices, the delivery process of each service becomes simpler, but the overall system becomes more complex. It is important to understand how and where these services deliver features, and whether there are blockers and bottlenecks.

DevOptics lets you map sub-streams of your overall value streams with multiple endpoints and visualize everything in one value stream.

Here is a JSON representation of the value stream template above. Copy and paste the template into the JSON editor of your value stream to get started with this template.

{ "phases": [ { "name": "Build Services", "id": "build_services", "gates": [ { "id": "service_a_build", "name": "Service A - Build", "master": "", "job": "", "feeds": "service_a_test" }, { "id": "service_b_build", "name": "Service B - Build", "master": "", "job": "", "feeds": "service_b_test" }, { "id": "service_c_build", "name": "Service C - Build", "master": "", "job": "", "feeds": "service_c_test" } ] }, { "id": "tests", "name": "Tests", "gates": [ { "id": "service_a_test", "name": "Service A - Test", "master": "", "job": "", "feeds": "service_a_staging" }, { "id": "service_b_test", "name": "Service B - Test", "master": "", "job": "", "feeds": "service_b_staging" }, { "id": "service_c_test", "name": "Service C - Test", "master": "", "job": "", "feeds": "service_c_staging" } ] }, { "id": "staging_deploy", "name": "Staging Deploy", "gates": [ { "id": "service_a_staging", "name": "Service A - Staging Deploy", "master": "", "job": "", "feeds": "service_a_verification" }, { "id": "service_b_staging", "name": "Service B - Staging Deploy", "master": "", "job": "", "feeds": "service_b_verification" }, { "id": "service_c_staging", "name": "Service C - Staging Deploy", "master": "", "job": "", "feeds": "service_c_verification" } ] }, { "id": "verification", "name": "Verification", "gates": [ { "id": "service_a_verification", "name": "Service A - Staging Verification", "master": "", "job": "", "feeds": "service_a_prod" }, { "id": "service_b_verification", "name": "Service B - Staging Verification", "master": "", "job": "", "feeds": "service_b_prod" }, { "id": "service_c_verification", "name": "Service C - Staging Verification", "master": "", "job": "", "feeds": "service_c_prod" } ] }, { "name": "Production Deploy", "id": "production_deploy", "gates": [ { "id": "service_a_prod", "name": "Service A - Production Deploy", "master": "", "job": "", "feeds": null, "type": "deployment" }, { "id": "service_b_prod", "name": "Service B - Production Deploy", "master": "", "job": "", "feeds": null, "type": "deployment" }, { "id": "service_c_prod", "name": "Service C - Production Deploy", "master": "", "job": "", "feeds": null, "type": "deployment" } ] } ] }