This migration guide provides instructions on how to migrate your entire CloudBees Jenkins Enterprise 1.x installation over to a new CloudBees CI on modern cloud platforms installation.
Prerequisites
-
On your CloudBees Jenkins Enterprise 1.x installation’s operations center, ensure you have made a note of (and can easily access) all credentials configured globally and all credentials scoped to folders.
-
For each CloudBees Jenkins Enterprise controller (that is, each managed controller, team controller and client controller configured on your existing CloudBees Jenkins Enterprise 1.x installation):
-
Streamline your CloudBees Jenkins Enterprise controllers by identifying plugins that are no longer used and uninstalling them. You can identify unused plugins by installing and using the Plugin Usage plugin.
-
(Optional) If necessary, (re-)organize any current projects and jobs (including those in subfolders) into folders based on distinct criteria, such as specific lines of business or teams within your organization.
-
-
(Optional) Plugins from CloudBees Jenkins Enterprise 1.x can be installed on CloudBees CI either manually or automatically. To install them automatically, install the jq command line tool on a machine with access to the new CloudBees CI operations center.
Process overview
The CloudBees Jenkins Enterprise 1.x to CloudBees CI migration process can be summarized as follows:
-
Migrate key operations center data from CloudBees Jenkins Enterprise 1.x to CloudBees CI. This consists of the following stages:
-
Migrating credentials from CloudBees Jenkins Enterprise 1.x to CloudBees CI.
-
Harmonizing plugins between CloudBees Jenkins Enterprise 1.x and CloudBees CI.
-
Migrating agent templates to Kubernetes (shared) pod templates.
-
Migrate CloudBees Jenkins Enterprise controllers to CloudBees CI.
Install CloudBees CI on modern cloud platforms
This migration process requires you to start with a clean CloudBees CI on modern cloud platforms installation on appropriate Kubernetes infrastructure.
-
Install CloudBees CI on modern cloud platforms on the appropriate Kubernetes platform, recreating (and connecting to your CloudBees CI on modern cloud platforms instance) all of the managed controllers, team controllers, and client controllers that you previously set up and configured on CloudBees Jenkins Enterprise 1.x.
CloudBees CI and CloudBees Jenkins Enterprise must be at the same version level.
-
On your new CloudBees CI installation’s operations center, recreate any folders you had previously configured on CloudBees Jenkins Enterprise 1.x.
-
If you are using a Private Docker Registry, create a Kubernetes secret to pull images.
-
On the operations center and controllers of your new CloudBees CI installation, create Kubernetes shared pod templates and pod templates, respectively, which are equivalent to any agent templates you had previously configured on CloudBees Jenkins Enterprise 1.x.
When creating your Kubernetes pod templates, complete as many fields as possible, especially details that match those defined in the equivalent agent templates on CloudBees Jenkins Enterprise 1.x. Refer to the table below for details on these matching fields.
Migrate key operations center data
This section of the migration guide covers all aspects of migrating the operations center from your CloudBees Jenkins Enterprise 1.x instance to your CloudBees CI instance.
The operations center migration process must be completed before the controllers can be migrated.
Migrate credentials
To migrate credentials from CloudBees Jenkins Enterprise 1.x to CloudBees CI, choose one of the following options:
Migrating credentials via Groovy script
To migrate credentials from CloudBees Jenkins Enterprise 1.x to CloudBees CI by scripting:
-
Download the export.groovy script on GitHub.
-
Run export.groovy in the Script Console on the source instance. It will output an encoded message containing a flattened list of all system and folder credentials. Copy that encoded message.
-
Download the import.groovy script on GitHub.
-
Paste the encoded message output from the export.groovy script as the value of the encoded variable in the import.groovy script and execute it in the Script Console on the destination Jenkins. All the credentials and domains from the source Jenkins will now be imported to the system store of the destination Jenkins. For example:
//Paste the encoded message from the export.groovy script on the source Jenkins
def encoded = [PASTE_ENCODED_MESSAGE_HERE]
if (!encoded) { return }
Migrating credentials via command line
-
Set up the Jenkins CLI tool for each operations center of your CloudBees Jenkins Enterprise 1.x and new CloudBees CI instances.
-
Export credentials on your CloudBees Jenkins Enterprise 1.x operations center, in XML format, using the appropriate command(s):
-
For credentials scoped globally, use this command (specifying your actual cje-1-x-url value and port-number if necessary):
java -jar jenkins-cli.jar -s https://cje-1-x-url:port-number/cjoc/ list-credentials-as-xml system::system::jenkins > credentials.xml
-
If you have credentials scoped to a given folder, use this command:
java -jar jenkins-cli.jar -s https://cje-1-x-url:port-number/cjoc/ list-credentials-as-xml folder::item::/full/name/of/folder > credentials-folder-name.xml
Run this command for every folder to which credentials have been scoped.
-
-
Since credentials are exported to these XML files with redacted secrets (to maintain their secrecy and security), locate each instance of <secret-redacted/> throughout all the credentials.xml and/or credentials-folder-name.xml files you exported and replace them with their correct secret values.
-
Import credentials to your new CloudBees CI operations center, using the appropriate command(s):
-
For credentials scoped globally, use this command (specifying your actual cloudbees-core-url value and port-number if necessary):
java -jar jenkins-cli.jar -s https://cloudbees-core-url:port-number/cjoc/ import-credentials-as-xml system::system::jenkins < credentials.xml
-
If you had credentials scoped to a given folder in CloudBees Jenkins Enterprise 1.x, use this command for the equivalent folder in CloudBees CI:
java -jar jenkins-cli.jar -s https://cloudbees-core-url:port-number/cjoc/ import-credentials-as-xml folder::item::/full/name/of/folder < credentials-folder-name.xml
Run this command for every equivalent folder (in your CloudBees CI operations center) to which credentials were scoped on your CloudBees Jenkins Enterprise 1.x operations center.
-
Ensure plugins are harmonized
To harmonize plugins between CloudBees Jenkins Enterprise 1.x and CloudBees CI, ensure that the operations center of your new CloudBees CI installation has the same plugins that were installed on the operations center of the CloudBees Jenkins Enterprise 1.x installation.
-
Ensure you are still set up to use the Jenkins CLI tool for each operations center of your CloudBees Jenkins Enterprise 1.x and new CloudBees CI instances.
-
Export the list of plugins on your CloudBees Jenkins Enterprise 1.x operations center, in JSON format, using this command (specifying your actual cje-1-x-url value and port-number if necessary):
java -jar jenkins-cli.jar -s https://cje-1-x-url:port-number/cjoc/ list-plugins-v2 --output json > plugins.json
-
On your new CloudBees CI operations center, install these plugins either manually or automatically via a command:
-
To install these plugins manually:
-
Run the same command you did on CloudBees Jenkins Enterprise 1.x (in the previous step), but on your new CloudBees CI instance instead (specifying your own cloudbees-core-url value and port-number if necessary):
java -jar jenkins-cli.jar -s https://cloudbees-core-url:port-number/cjoc/ list-plugins-v2 --output json > plugins-cloud-core.json
-
Compare the outputs of the plugins.json and plugins-cloud-core.json files, and manually install the required missing plugins on your new CloudBees CI operations center.
-
-
To install these plugins automatically via a command:
-
Ensure you have installed the jq command line tool (mentioned in the Prerequisites above).
-
Force the installation of all plugins installed on the CloudBees Jenkins Enterprise 1.x operations center (listed in plugins.json) on the new CloudBees CI operations center with this command:
for plugin in $(cat plugins.json | jq -r .data[].id); do java -jar jenkins-cli.jar -noKeyAuth -s https://cloudbees-ci-url:port-number/cjoc/ install-plugin ${plugin}; done
-
-
Migrating from agent templates to Kubernetes (shared) pod templates
There are two options for migrating from CloudBees Jenkins Enterprise agent templates to shared Kubernetes pod templates:
Creating pod templates as YAML
Pod templates can be defined as YAML within a Pipeline. This approach places the pod definitions into your pipeline and centralizes pod templates as code.
For more information, refer to the CloudBees Knowledge Base.
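For illustration only, a minimal Scripted Pipeline sketch of this approach follows. The label, container image, and YAML content are placeholder assumptions rather than values from this guide, and the exact syntax available depends on the version of the Kubernetes plugin installed on your controller.
// Sketch: the agent pod is defined as YAML inside the pipeline itself.
// 'yaml-example' and the maven image are placeholder values.
podTemplate(label: 'yaml-example', yaml: '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:3.6-jdk-8
    command:
    - cat
    tty: true
''') {
    node('yaml-example') {
        stage('Build') {
            container('maven') {
                sh 'mvn -version'
            }
        }
    }
}
Keeping the YAML in the pipeline (or in a file loaded with readFile) is what centralizes pod templates as code.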
Convert agent templates to Kubernetes (shared) pod templates
The jenkins-agent config map is required when migrating to CloudBees CI on modern cloud platforms. See Managing agents—Kubernetes agents for details about how to use a custom Docker image to create a Kubernetes pod template that uses the jenkins-agent config map.
This is a complex process, which involves the following stages:
Export agent template definitions from CloudBees Jenkins Enterprise 1.x operations center and controllers
-
Ensure you are still set up to use the Jenkins CLI tool for the operations center and controllers of your CloudBees Jenkins Enterprise 1.x instance.
-
Export the definitions for all agent templates on your CloudBees Jenkins Enterprise 1.x operations center using the following command (specifying your actual cje-1-x-url value and port-number if necessary):
java -jar jenkins-cli.jar -s https://cje-1-x-url:port-number/cjoc/ agent-template > agent-templates.json
-
Export the definitions for all agent templates on your CloudBees Jenkins Enterprise 1.x controllers using the following command (specifying your actual cje-1-x-url and controller-name values, and port-number if necessary):
java -jar jenkins-cli.jar -s https://cje-1-x-url:port-number/controller-name/ agent-template > agent-templates-controller-name.json
Run this command for every controller (that is, managed controllers and team controllers) on which agent templates have been defined.
The following is an example of an exported agent template definition file, showing a single agent template definition and all possible properties with sample values:
{
  "data": [
    {
      "atm": {
        "constraints": [],
        "containerProperties": [
          {
            "key": "volumes-from",
            "type": "ParameterSpec",
            "value": "certs"
          },
          {
            "name": "...",
            "type": "EnvironmentVariableContainerProperty",
            "value": "..."
          },
          {
            "containerPath": "...",
            "hostPath": "...",
            "readOnly": true/false,
            "type": "VolumeSpec",
            "volumeType": {
              "hostPath": "...",
              "type": "BindMount/Volume"
            }
          },
          {
            "executable": true/false,
            "extract": true/false,
            "type": "URISpec",
            "value": "uri-value"
          },
          {
            "forcePull": true/false,
            "type": "ForcePullContainerProperty"
          },
          {
            "privilegeMode": true/false,
            "type": "PrivilegedModeContainerProperty"
          },
          {
            "containerPort": port-number,
            "hostPort": port-number,
            "protocol": "TCP/UDP",
            "type": "PortMappingSpec"
          },
          {
            "type": "CustomDockerShellProperty",
            "value": "/bin/sh -c"
          }
        ],
        "cpus": 0.1,
        "displayName": "...",
        "image": ".../...",
        "jnlpArgs": "",
        "jvmArgs": "...",
        "jvmMemory": 1024,
        "labelString": null,
        "memory": 2048,
        "name": "...",
        "remoteFS": "..."
      }
    }
  ],
  "version": "1"
}
Export Kubernetes (shared) pod template definitions from CloudBees CI operations center and controllers
-
Ensure you are still set up to use the Jenkins CLI tool for the operations center and controllers of your new CloudBees CI instance.
-
Export the definitions for all recreated Kubernetes shared pod templates on your CloudBees CI operations center (that is, defined within a Kubernetes cloud configuration) using the following command (specifying your actual cloudbees-core-url value and port-number if necessary):
java -jar jenkins-cli.jar -s https://cloudbees-core-url:port-number/cjoc/ shared-pod-template > shared-pod-templates.json
-
Export the definitions for all recreated Kubernetes pod templates on your CloudBees CI controllers using the following command (specifying your actual cloudbees-core-url and controller-name values, and port-number if necessary):
java -jar jenkins-cli.jar -s https://cloudbees-core-url:port-number/controller-name/ pod-template > pod-templates-controller-name.json
Run this command for every controller (that is, managed controllers and team controllers) on which Kubernetes pod templates have been defined.
The following is an example of an exported Kubernetes (shared) pod template definition file, showing a single pod template definition and all possible properties with sample values:
{
  "data": [
    {
      "podTemplate": {
        "label": "documentation",
        "name": "documentation-packging",
        "displayName": "Kubernetes Pod Template",
        "activeDeadlineSeconds": 44,
        "alwaysPullImage": false,
        "customWorkspaceVolumeEnabled": true,
        "args": "cat",
        "capOnlyOnAlivePods": true,
        "command": "/bin/sh -c",
        "idleMinutes": 1,
        "inheritFrom": "abc",
        "instanceCap": 2,
        "namespace": "doc-group",
        "nodeSelector": "77",
        "nodeUsageMode": "NORMAL",
        "privileged": false,
        "resourceLimitCpu": "1",
        "resourceLimitMemory": "1024",
        "resourceRequestCpu": "1",
        "resourceRequestMemory": "1024",
        "serviceAccount": "88",
        "slaveConnectTimeout": 99,
        "annotations": [
          {
            "key": "var3",
            "value": "44"
          }
        ],
        "containers": [
          {
            "alwaysPullImage": false,
            "args": "cat",
            "command": "/bin/sh -c",
            "envVars": [
              {
                "type": "KeyValueEnvVar",
                "key": "env",
                "value": "22"
              },
              {
                "type": "SecretEnvVar",
                "key": "secret_env",
                "secretKey": "YeahYeah",
                "secretName": "abc.corp.com"
              }
            ],
            "image": "java",
            "livenessProbe": {
              "execArgs": "inode",
              "failureThreshold": 16,
              "initialDelaySeconds": 10,
              "periodSeconds": 17,
              "successThreshold": 0,
              "timeoutSeconds": 15
            },
            "name": "java",
            "ports": [
              {
                "containerPort": 6789,
                "hostPort": 9876,
                "name": "port6789"
              }
            ],
            "privileged": false,
            "resourceLimitCpu": "1",
            "resourceLimitMemory": "1024",
            "resourceRequestCpu": "1",
            "resourceRequestMemory": "1024",
            "shell": null,
            "ttyEnabled": true,
            "workingDir": "/home/jenkins"
          }
        ],
        "envVars": [
          {
            "key": "VAR1",
            "type": "KeyValueEnvVar",
            "value": "42"
          },
          {
            "key": "VAR2",
            "secretKey": "",
            "secretName": "derek-secret",
            "type": "SecretEnvVar"
          }
        ],
        "imagePullSecrets": [
          {
            "name": "abc.corp.com"
          }
        ],
        "volume": {
          "memory": true,
          "type": "EmptyDirWorkspaceVolume"
        },
        "volumes": [
          {
            "hostPath": "/extra/workspace",
            "mountPath": "/workspace",
            "type": "HostPathVolume"
          }
        ],
        "yaml": "apiVersion: v1\nkind: Pod\nspec:"
      }
    }
  ],
  "version": "1"
}
Ensure your Kubernetes (shared) pod template definitions match their equivalent agent template definitions
If necessary, modify your shared-pod-templates.json and pod-templates-controller-name.json files with the correct details from their respective equivalent agent-templates.json and agent-templates-controller-name.json files.
CloudBees Jenkins Enterprise 1.x agent template (UI) | CloudBees Jenkins Enterprise 1.x agent template (JSON) | CloudBees CI Kubernetes (shared) pod template (UI) | CloudBees CI Kubernetes (shared) pod template (JSON) | Notes
---|---|---|---|---
Display Name | displayName | Name | name | The Kubernetes (shared) pod template’s first container template’s name must be jnlp.
Use when no label is specified | N/A | N/A | N/A | Configure the Defaults Provider Template Name value of the Kubernetes shared cloud configuration (not a Kubernetes pod template definition) to have the same value as the Name of the pod template to use when no label is specified.
Enable this agent template | N/A | N/A | N/A | No equivalent on CloudBees CI.
CPU Shares | cpus | | resourceRequestCpu, resourceLimitCpu | In the Kubernetes (shared) pod template definition, specify both of these values in the container template.
Memory (MB) | memory | | resourceRequestMemory, resourceLimitMemory | In the Kubernetes (shared) pod template definition, make these values the same as the agent template’s memory value. Note that Kubernetes supports numerous methods for specifying memory: the unit "M" indicates decimal mega (1000²), whereas the unit "Mi" indicates mebi (1024²). Mesos "MB/GB" units map to Kubernetes "Mi/Gi" units, respectively.
JVM Memory (MB) | jvmMemory | | | In the Kubernetes (shared) pod template definition, make the equivalent value the same as the agent template’s jvmMemory value.
JVM Arguments | jvmArgs | | | In the Kubernetes (shared) pod template definition, make the equivalent value the same as the agent template’s jvmArgs value.
JNLP Arguments | jnlpArgs | | | In the Kubernetes (shared) pod template definition, make the equivalent value the same as the agent template’s jnlpArgs value.
Remote root directory | remoteFS | Working directory | workingDir | In the Kubernetes (shared) pod template definition, make this value the same as it was in the CloudBees Jenkins Enterprise 1.x agent template definition.
Image | image | Docker image | image | In the Kubernetes (shared) pod template definition, make this value the same as the agent template’s image value. Also, the Kubernetes (shared) pod template’s first container template’s name must be jnlp.
Private registry | | N/A | N/A | No equivalent on CloudBees CI. If you want to specify the URL for a private registry containing relevant images, read more about configuring secrets for a Kubernetes environment in the relevant Kubernetes documentation, and specify the Kubernetes pod template’s imagePullSecrets value.
Additional Docker launch parameters | | N/A | N/A | No equivalent on CloudBees CI. It is not possible to supply additional parameters when launching Docker in a Kubernetes environment.
Port mappings | PortMappingSpec | | ports | In the Kubernetes (shared) pod template definition, use the equivalent values utilized in the CloudBees Jenkins Enterprise 1.x agent template definition. Note that each port mapping configuration in your CloudBees CI Kubernetes (shared) pod template definitions can be named.
Custom shell command | CustomDockerShellProperty | | command | In the Kubernetes (shared) pod template definition, make this value the same as the Custom shell command value.
User ID | | N/A | N/A | No equivalent on CloudBees CI. It is not possible to specify a User ID with which to run a container in a Kubernetes environment.
Other properties (for example, environment variables and volumes) | | | | In the Kubernetes (shared) pod template definition, use the equivalent values utilized in the CloudBees Jenkins Enterprise 1.x agent template definition.
Import your updated Kubernetes (shared) pod template definitions to CloudBees CI operations center and controllers
-
Ensure you are still set up to use the Jenkins CLI tool for the operations center and controllers of your new CloudBees CI instance.
-
Import the updated definitions for all recreated Kubernetes shared pod templates on your CloudBees CI operations center (that is, defined within a Kubernetes cloud configuration) using the following command (specifying your actual cloudbees-core-url value and port-number if necessary):
java -jar jenkins-cli.jar -s https://cloudbees-core-url:port-number/cjoc/ shared-pod-template --update < shared-pod-templates.json
-
Import the updated definitions for all recreated Kubernetes pod templates on your CloudBees CI controllers using the following command (specifying your actual cloudbees-core-url and controller-name values, and port-number if necessary):
java -jar jenkins-cli.jar -s https://cloudbees-core-url:port-number/controller-name/ pod-template --update < pod-templates-controller-name.json
Run this command for every controller (that is, managed controllers and team controllers) on which Kubernetes pod templates have been defined.
Migrate controllers
Migrating controllers (managed controllers and team controllers) from CloudBees Jenkins Enterprise 1.x to your new CloudBees CI installation involves backing up your controllers on your CloudBees Jenkins Enterprise 1.x installation and restoring them to the new controllers you created as part of setting up your new CloudBees CI installation.
The operations center migration process must be completed before the controllers can be migrated. Additionally, all webhooks must be recreated after the backup is restored.
-
Ensure you are still set up to use the Jenkins CLI tool for the controllers of your CloudBees Jenkins Enterprise 1.x installation and new CloudBees CI installation.
-
Configure the CloudBees CI controllers to start in quiet mode.
The quiet-down.groovy script referenced in this article can be found on GitHub.
-
On your CloudBees Jenkins Enterprise 1.x installation, back up each controller.
-
Do this through a backup project on your CJE 1.x controller.
-
When configuring the Take Backup section of this project, specify the following items in the Excludes list:
-
monitoring/
-
caches/
-
operations-center-cloud*
-
operations-center-client*
-
plugins/operations-center-analytics-config*
-
plugins/operations-center-analytics-config/
-
plugins/operations-center-analytics-reporter*
-
plugins/operations-center-analytics-reporter/
-
plugins/palace-cloud*
-
plugins/palace-cloud/
-
plugins/tiger-tenant*
-
plugins/tiger-tenant/
-
jobs/shared-items*
-
-
A comma-separated version of this list, useful for automation, can be found on GitHub.
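For reference, the same excludes assembled into a single comma-separated value (derived from the list above; the file referenced on GitHub remains the canonical version) looks like this:
monitoring/,caches/,operations-center-cloud*,operations-center-client*,plugins/operations-center-analytics-config*,plugins/operations-center-analytics-config/,plugins/operations-center-analytics-reporter*,plugins/operations-center-analytics-reporter/,plugins/palace-cloud*,plugins/palace-cloud/,plugins/tiger-tenant*,plugins/tiger-tenant/,jobs/shared-items*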
-
-
On your new CloudBees CI installation, restore each new controller from the corresponding backup taken from CloudBees Jenkins Enterprise 1.x.
-
If you are using CloudBees CI, you can use the CloudBees Backup Plugin to restore from backup.
-
If you are running CloudBees CI on Kubernetes, it is possible to restore from backup using a rescue pod.
-
A scripted example of restoring using a rescue pod can be found on GitHub.
-
-
Modifying pipelines that mount Docker sockets
Most CloudBees Jenkins Enterprise 1.x pipelines will run on CloudBees CI on modern cloud platforms; however, pipelines that mount Docker sockets must be modified.
If the Docker socket was used to build Docker images, you can use Kaniko (the recommended approach) or the Maven or Gradle plugins to build Docker images. When building a Docker image with Kaniko, you have a pod with two containers:
-
The jnlp container, running the Jenkins agent.
-
The kaniko container, doing the actual image build.
By default, any command runs inside the jnlp container, and the container keyword is used to switch to another container during the pipeline execution. For more information, see Using Kaniko with CloudBees CI.
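As an illustration only, a pipeline using such a pod might look like the following sketch. The Kaniko image tag, the generated Dockerfile, and the use of --no-push (instead of pushing to a registry, which would additionally require registry credentials mounted into the kaniko container) are assumptions for this example rather than values from this guide.
// Sketch: the pod consists of the implicit jnlp container plus a kaniko container.
podTemplate(label: 'kaniko-example', yaml: '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:debug
    command:
    - /busybox/cat
    tty: true
''') {
    node('kaniko-example') {
        stage('Build image with Kaniko') {
            // Steps run in the jnlp container until the container step switches them.
            container(name: 'kaniko', shell: '/busybox/sh') {
                withEnv(['PATH+EXTRA=/busybox']) {
                    sh '''#!/busybox/sh
                    echo "FROM alpine:3.9" > Dockerfile
                    /kaniko/executor --context `pwd` --dockerfile Dockerfile --no-push
                    '''
                }
            }
        }
    }
}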
If the Docker socket was used to launch containers for testing tools such as MySQL and Selenium, use a multi-container pod in the pipeline instead: replace the imperative syntax that creates and destroys the containers with a declarative approach that adds the containers to the pod definition, as in the sketch below. For a Docker in Docker approach, see Building custom Docker images in CloudBees CI below.
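For example, a MySQL instance that was previously started and stopped imperatively through the mounted Docker socket can instead be declared as an additional container in the pod definition. The following sketch is an illustration only; the images, the root password, and the Maven command are placeholder assumptions rather than values from this guide.
// Sketch: MySQL runs as a declared sidecar container in the pod definition,
// replacing an imperative "docker run mysql" against the Docker socket.
podTemplate(label: 'mysql-sidecar', containers: [
    containerTemplate(name: 'maven', image: 'maven:3.6-jdk-8', command: 'cat', ttyEnabled: true),
    containerTemplate(name: 'mysql', image: 'mysql:5.7', ttyEnabled: true,
        envVars: [envVar(key: 'MYSQL_ROOT_PASSWORD', value: 'example')])
]) {
    node('mysql-sidecar') {
        stage('Integration tests') {
            container('maven') {
                // Containers in the same pod share a network namespace, so the
                // database is reachable on localhost:3306.
                sh 'mvn -B verify -Ddb.url=jdbc:mysql://localhost:3306/test'
            }
        }
    }
}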
Building custom Docker images in CloudBees CI
To use a multi-container pod, replace the imperative syntax that creates and destroys the containers with a declarative approach that adds containers to the pod definition. The following approach uses Docker in Docker (DinD) by creating a new Pod Template and including the required Container Template.
Prerequisites for building custom Docker images in CloudBees CI
-
CloudBees CI on modern cloud platforms
There are some risks from using DinD and the “Docker socket” for building containers on CloudBees CI on modern cloud platforms.
To create a new Pod Template and include the required Container Template:
-
Access the operations center main page.
-
Select the All tab to see all views.
-
Open the Kubernetes shared cloud configuration page.
-
Select Add Pod Template and complete the following fields:
-
Name: pod-dind
-
Labels: pod-dind
-
Usage: Only build jobs with label expressions matching the node
-
-
Select Add Container.
-
Select one of the Docker DinD images.
-
Complete the following fields:
-
Name: dind
-
Docker image: docker:18.06.1-ce-dind
-
Working directory: /home/jenkins
-
Allocate pseudo-TTY: checked
-
Run in privileged mode: checked
-
-
Select Save.
-
Afterward, test the Pod Template by creating a new Pipeline job using the following code:
podTemplate() {
    node('pod-dind') {
        container('dind') {
            stage('Build My Docker Image') {
                sh 'docker info'
                sh 'touch Dockerfile'
                sh 'echo "FROM centos:7" > Dockerfile'
                sh "cat Dockerfile"
                sh "docker -v"
                sh "docker info"
                sh "docker build -t my-centos:1 ."
            }
        }
    }
}
You should get output similar to the following:
[Pipeline] node
Agent pod-dind-xwhnz is provisioned from template {K8S} Pod Template
Agent specification [{K8S} Pod Template] (pod-dind):
* [dind] docker:18.06.1-ce-dind(resourceRequestCpu: , resourceRequestMemory: , resourceLimitCpu: , resourceLimitMemory: )
Running on pod-dind-xwhnz in /home/jenkins/workspace/dind-test
...
[Pipeline] sh
[dind-test] Running shell script
+ docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 18.06.1-ce
...
[Pipeline] sh
[dind-test] Running shell script
+ docker build -t my-centos:1 .
Sending build context to Docker daemon 2.048kB
Step 1/1 : FROM centos:7
7: Pulling from library/centos
aeb7866da422: Pulling fs layer
aeb7866da422: Verifying Checksum
aeb7866da422: Download complete
aeb7866da422: Pull complete
Digest: sha256:67dad89757a55bfdfabec8abd0e22f8c7c12a1856514726470228063ed86593b
Status: Downloaded newer image for centos:7
 ---> 75835a67d134
Successfully built 75835a67d134
Successfully tagged my-centos:1
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // container
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // podTemplate
[Pipeline] End of Pipeline
Finished: SUCCESS