Learning about Pipelines

A Jenkins Pipeline defines the tasks required to build, test, and deploy your software. Pipeline is also the name of the suite of plugins that supports implementing and integrating Pipelines into Jenkins.

Understanding the key concepts in Pipelines helps you deliver improved code efficiently and reliably.

Declarative Pipeline

Declarative Pipelines provide a structured hierarchical syntax to simplify the creation of Pipelines and the associated Jenkinsfiles. In its simplest form, a Pipeline runs on an agent and contains stages, while each stage contains steps that define specific actions. Here is an example:

pipeline {
    agent { (1)
        label ''
    }
    stages {
        stage('Build') { (2)
            steps { (3)
                sh 'mvn install'
            }
        }
    }
}
An agent section specifies where tasks in a Pipeline run. An agent must be defined at the top level inside the pipeline block to define the default agent for all stages in the Pipeline. An agent section may optionally be specified at the stage level, overriding the default for this stage and any child stages. In this example, the agent is specified with an empty label filter, meaning this Pipeline will run on any available agent.
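For example, a stage-level agent section can override the Pipeline-level default. This is a sketch; the 'linux' label is a placeholder for whatever labels exist on your agents:

pipeline {
    agent any                  // default agent for all stages
    stages {
        stage('Build') {
            steps {
                sh 'mvn compile'
            }
        }
        stage('Test on Linux') {
            agent {
                label 'linux'  // overrides the default agent for this stage only
            }
            steps {
                sh 'mvn test'
            }
        }
    }
}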

The code snippet above does not show an agent running on Kubernetes. An agent running on Kubernetes is configured differently.
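For illustration, an agent on Kubernetes is typically defined through the Kubernetes plugin using a pod template. This is a minimal sketch, and the container name, image, and pod spec are illustrative, not prescriptive:

pipeline {
    agent {
        kubernetes {
            // Pod definition for this Pipeline's agent (illustrative)
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:3-jdk-11
    command: ['sleep', 'infinity']
'''
        }
    }
    stages {
        stage('Build') {
            steps {
                // run the step inside the named container of the pod
                container('maven') {
                    sh 'mvn install'
                }
            }
        }
    }
}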


A stage represents a logical grouping of tasks in the Pipeline. Each stage may contain a steps section with steps to be executed, or stages to be executed sequentially, in parallel, or expanded into a parallel matrix.

A stage contains the commands that run processes such as Build, Test, or Deploy. These commands are included in the steps section of the stage.

It is advisable to give stages descriptive names, as these names are displayed in the UI and in logs.

A stages section must contain at least one stage section, and can contain several.
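As mentioned above, nested stages can also run in parallel. A sketch of a stage containing two parallel test stages (the Maven commands are illustrative):

pipeline {
    agent any
    stages {
        stage('Tests') {
            parallel {
                stage('Unit tests') {
                    steps {
                        sh 'mvn test'
                    }
                }
                stage('Integration tests') {
                    steps {
                        sh 'mvn verify'
                    }
                }
            }
        }
    }
}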


The `steps` section contains a set of commands. Each command performs a specific action, and the commands are executed one by one.

pipeline {
    agent any
    stages {
        stage('Example') {
            steps {
                sh 'mvn compile'
            }
        }
    }
}

For more information on Declarative Pipelines, see Using Declarative Pipeline syntax.

Scripted Pipeline

Scripted Pipeline is a more general form of Pipeline as Code. It is a Domain Specific Language (DSL) based on Apache Groovy. Most functionality provided by the Groovy language is made available in Scripted Pipelines. This means Scripted Pipelines can offer flexibility and extensibility to users.
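As an illustration, the Declarative build example earlier in this topic could be written in Scripted form roughly as follows:

node {
    stage('Build') {
        // Groovy constructs such as loops and conditionals
        // can be used freely around steps like this one
        sh 'mvn install'
    }
}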

However, the learning curve associated with Scripted Pipelines is very steep, and they make it possible to do things inside a Pipeline that are better done in external tools invoked by the Pipeline. CloudBees recommends using Declarative Pipelines rather than Scripted Pipelines.

For more information on Scripted Pipelines, see Using Scripted Pipeline syntax.

Freestyle projects

Freestyle projects are used to implement, develop, or run simple jobs. They can span multiple operations like building and running scripts.

CloudBees recommends the use of Pipeline projects instead of Freestyle projects. For information about converting a Freestyle project to a Declarative Pipeline please see Converting a Freestyle project to a Declarative Pipeline.

Multibranch Pipelines

A multibranch Pipeline can be used to automatically create Pipelines based on branches and pull requests in your repository. For more information on multibranch Pipelines, see Managing Multibranch Pipeline options in template.yaml.

Pipeline as Code

Pipeline as Code provides a consistent way to build and deploy services and applications by treating their delivery process as code. It executes the underlying tasks for the various stages of the software delivery lifecycle while abstracting away the implementation details.

It is a set of features that allow pipelined job processes to be defined with code, stored, and versioned in a source repository. These features allow the discovery, management, and running of jobs for multiple source repositories and branches, eliminating the need for manual job creation and allowing for change management at job level without using the UI.

Both types of Pipelines described in this topic are Pipeline as Code. Before Pipelines, jobs were configured through the UI, and their definitions were stored in config.xml files rather than alongside the code.

Benefits of Pipeline as Code

  • Pipeline as Code allows integration of your Pipeline with your code thereby making collaboration on it easier. When you update your Pipeline, your changes are automatically picked up.

  • Storing Pipelines in a version control system enables tracking of changes to the Pipeline. If a change to the Pipeline breaks a build, it can be fixed before it is merged, or easily reverted. Earlier versions of the Pipeline can also be restored.

Pipeline development utilities

The following utilities are available to help you develop your Pipelines:

  • Command-line Pipeline Linter (Jenkins CLI): The Jenkins command-line interface is a built-in interface that allows users and administrators to access Jenkins from a script or shell environment, and it includes a linter for validating Declarative Pipelines. This can be convenient for scripting routine tasks, bulk updates, and troubleshooting.

  • "Replay" Pipeline runs with modifications: Use this tool to develop scripts incrementally. You can use the "Replay" tool to prepare and test your changes before committing them to Pipeline scripts.
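As an example of the command-line linter described above, a Declarative Jenkinsfile can be validated before it is committed; JENKINS_URL is assumed to point at your controller:

java -jar jenkins-cli.jar -s $JENKINS_URL declarative-linter < Jenkinsfile

The command prints the validation result, so syntax errors in the Jenkinsfile are caught without triggering a build.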

For more information on any of these tools, please see Using Pipeline development tools.

Pipeline terminology

  • controller: A controller is a computer, VM, or container where Jenkins is installed and run. It is used to serve requests and handle build tasks.

  • agent: An agent is typically a machine, or container, which connects to a Jenkins controller and executes tasks when directed by the controller.

  • executor: An executor is a computational resource for running builds and performing operations. Executors can run on any controller or agent, and a single controller or agent can provide multiple executors so that builds run in parallel.

    Using executors on a controller should be reserved to very specific cases as there are security and performance implications to doing so.
  • node: A node is an operating system or container running Jenkins as an agent. Most work a Pipeline performs is done in the context of one or more declared node steps. Confining the work inside of a node step does two things:

    1. Schedules the steps contained within the block to run by adding an item to the Jenkins queue. As soon as an executor is free on a node, the steps will run.

    2. Creates a workspace (a directory specific to that particular Pipeline) where work can be done on files checked out from source control.

  • step: A single task. Fundamentally, steps tell Jenkins what to do. For example, to execute the shell command make, use the sh step: sh 'make'.

    When a plugin extends the Pipeline DSL, that typically means the plugin has implemented a new step.

  • stage: A stage is a step for defining a conceptually distinct subset of the entire Pipeline, for example: "Build", "Test", and "Deploy". A stage is used by many plugins to visualize or present Jenkins Pipeline status/progress.

CloudBees proprietary features for Pipelines

CloudBees provides some additional features that enhance the functionality available in Pipelines.

  • Features that help in remote collaboration.

  • Features that help in bringing about consistency across Pipelines.

For details, please see the sections below.

Features for remote collaboration

CloudBees Pipelines have access to features that can be used to improve collaboration between different teams using different Pipelines.

Cross Team Collaboration

CloudBees’ Cross Team Collaboration provides the ability to publish an event from a Jenkins job that triggers any other Jenkins job on the same controller or different controllers that are listening for that event. For more information, see Cross Team Collaboration.
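As a sketch, one Pipeline can publish an event and another can be triggered by it. This assumes the publishEvent step and eventTrigger trigger provided by the Cross Team Collaboration feature; the event name 'exampleEvent' is illustrative:

// Publishing Pipeline: emits an event when it runs
pipeline {
    agent any
    stages {
        stage('Notify') {
            steps {
                publishEvent simpleEvent('exampleEvent')
            }
        }
    }
}

// Listening Pipeline: triggered whenever that event is published
pipeline {
    agent any
    triggers {
        eventTrigger simpleMatch('exampleEvent')
    }
    stages {
        stage('React') {
            steps {
                echo 'Triggered by exampleEvent'
            }
        }
    }
}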

Cluster-wide job triggers

Cluster-wide job triggers provide the ability to trigger jobs across client controllers attached to the same CloudBees Operations Center. Depending on the scheme used to organize an Operations Center cluster, it may be necessary to trigger jobs that are on a remote client controller. For more information, see Cluster-wide job triggers.

External HTTP endpoints

External HTTP endpoints work with Cross Team Collaboration to enable external systems such as GitHub or Nexus to generate notification events for Pipelines. For more information, see External HTTP endpoints.

Features for Pipeline standardization

CloudBees Pipelines have access to features that can be used to bring consistency across Pipelines across your organization.

Pipeline Templates

Pipeline Templates can be used to define reusable Pipelines. This helps in bringing about consistency across your Pipelines. Pipeline Templates are specific to CloudBees CI. For more information on Pipeline Templates, see Pipeline Templates.

Pipeline Policies

Pipeline Policies are runtime validations that work for both Scripted and Declarative Pipelines and provide administrators a way to include warnings for or block the execution of Pipelines that do not comply with certain regulatory requirements, rules, or best practice guidelines. For more information on concepts related to Pipeline Policies, see Using Pipeline Policies.

Organization folders

Organization folders are a way to automatically create Multibranch projects for every repository in a GitHub organization or Bitbucket team. For more information, please see Organization folders.