Provision and manage CloudBees CI across multiple Kubernetes clusters and cloud providers


CloudBees CI on modern cloud platforms can be provisioned and managed across multiple Kubernetes clusters and cloud providers. You can run operations center on a main Kubernetes cluster that manages controllers in other Kubernetes clusters. These clusters can be hosted on any of the major cloud providers. You can set up a multicluster configuration to provision controllers in different clusters, whether they are hosted by the same cloud provider or by different ones.

You can provision, update, and manage CloudBees CI controllers on multiple Kubernetes clusters from a common operations center, which allows users to:

  • Separate divisions or business units that need to run workloads in individual clusters.

  • Leverage Helm charts and Configuration as Code to configure, provision, and update using version-controlled configuration files.

Common use cases for multicluster

  • Controller in multiple Kubernetes clusters

    In this use case, the operations center runs in one Kubernetes cluster. Some controllers run in the same Kubernetes cluster as the operations center. Any number of additional controllers run in a separate Kubernetes cluster.

    Each instance, both operations center and each controller, is tied to its own Kubernetes cluster and cannot move after it is provisioned.

  • Agents in separate Kubernetes clusters

    In this use case, agents are deployed in a separate Kubernetes cluster from the Kubernetes cluster where a controller is set up.

    The controller can be configured to schedule agents between several Kubernetes clouds. Note that no control mechanism is available to spread the load across different Kubernetes clusters because a pod template is tied to a Kubernetes cloud.

Networking considerations for multicluster

Keep in mind the following information about networking for provisioned controllers and agents:

  • When provisioning a controller on a remote Kubernetes cluster, the Kubernetes API must be accessible from the operations center.

    Upon startup, a controller connects back to the operations center through HTTP or HTTPS when using WebSockets. If you are not using WebSockets, it connects using inbound TCP. If the controller is located in a different Kubernetes cluster:

    • The operations center must provide its external hostname so that the controller can connect back.

    • If using WebSockets, the operations center HTTP / HTTPS port must be exposed.

    • If not using WebSockets, the operations center inbound TCP port must be exposed.

  • When provisioning an agent on a remote Kubernetes cluster, the Kubernetes API must be accessible from the managed controller.

    Upon startup, an agent connects back to its controller. If the agent is in a different Kubernetes cluster:

    • The controller must provide its external hostname so that the agent can connect back.

    • If using WebSockets, the controller HTTP / HTTPS port must be exposed.

    • If not using WebSockets, the controller inbound TCP port must be exposed.

  • For controllers to be accessible, each Kubernetes cluster must have its own Ingress controller installed, and the endpoint configured for each controller must match the DNS entry pointing to that cluster's Ingress controller.

  • Bandwidth, latency, and reliability can be issues when running components in different regions or clouds.

    • The operations center <> controller connection is required for features such as shared agents, licensing (every 72 hours), and SSO. An unstable connection can result in the inability to sign in, unavailable shared agents, or an invalid license if the disconnection lasts too long.

    • Controller <> agent connection is required to transfer commands and logs. An unstable connection can result in failed builds, unexpected delays, or timeouts while running builds.
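
As a quick check of the considerations above, you can list the services and Ingress resources in the operations center namespace to see which HTTP/HTTPS and inbound TCP ports are actually exposed. This is a minimal sketch, assuming operations center is installed in the ci namespace used in the examples later on this page:

    # List the operations center services and the ports they expose
    kubectl get svc -n ci

    # List the Ingress resources and the hostnames they serve
    kubectl get ingress -n ci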

About provisioning managed controllers in multiple Kubernetes clusters

To provision managed controllers in multiple Kubernetes clusters, you must do the following:

Preparing a namespace to install a managed controller

  1. To prepare a namespace to install a managed controller, use the following example values.yaml file:

    OperationsCenter:
        Enabled: false
    Master:
        Enabled: true
        OperationsCenterNamespace: ci (1)
    Agents:
        Enabled: true
    (1) Replace ci with the actual namespace where the operations center is installed. If the operations center is located in another cluster, set this to the same value as the current namespace; in that case, an operations center service account must be created for authentication.
  2. Use the following example to perform the installation.

    export NAMESPACE=my-team (1)
    kubectl create namespace $NAMESPACE || true
    helm install ci-masters-$NAMESPACE cloudbees/cloudbees-core --namespace $NAMESPACE -f values.yaml
    (1) Replace my-team with the actual namespace where the managed controllers will be created.
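
To see what the chart created in the controller namespace, you can list the generated objects. This is only a sanity check, and the exact set of objects depends on the chart version; the example assumes the my-team namespace from the step above:

    # Service account, role, and role binding used to run managed controllers
    kubectl get serviceaccounts,roles,rolebindings -n my-team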

Controller provisioning configuration

To provision controllers in their own projects, each controller must use a specific sub-domain. For example, if the operations center domain is cd.example.org and the URL is https://cd.example.org/cjoc/, a controller dev1 should use the sub-domain dev1.cd.example.org or dev1-cd.example.org. It is often preferable to use the latter when using a wildcard certificate for the domain example.org.

To configure each controller to use a specific sub-domain, set the 'controller URL Pattern' in the main Jenkins configuration page 'Manage Jenkins → Configure System' under the 'Kubernetes controller Provisioning' advanced options. For example, if the operations center domain is cd.example.org, the 'controller URL Pattern' would be https://*-cd.example.org/*/.

Provision controllers

The project for the controller resources can be configured as the default project for all managed controllers in the main operations center configuration screen with the 'namespace' parameter.

The project can also be specified for an individual managed controller in the controller configuration screen with the 'namespace' parameter.

Leave the namespace value empty to use the value defined by the Kubernetes endpoint.

Adding a ServiceAccount for operations center

  1. Create a ServiceAccount named cjoc in the namespace provided as .Master.OperationsCenterNamespace.

    kubectl create serviceaccount cjoc -n ci (1)
    (1) Replace ci with the actual value used for .Master.OperationsCenterNamespace.
  2. Create a kubeconfig for the operations center service account:

    apiVersion: v1
    kind: Config
    clusters:
    - name: kubernetes
      cluster:
        certificate-authority-data: <CLUSTER_CA_BASE64_ENCODED> (1)
        server: <KUBE_APISERVER_ENDPOINT> (2)
    contexts:
    - name: cjoc
      context:
        cluster: kubernetes
        namespace: <NAMESPACE> (3)
        user: cjoc
    current-context: cjoc
    users:
    - name: cjoc
      user:
        token: <CJOC_SERVICEACCOUNT_TOKEN> (4)
    (1) Replace <CLUSTER_CA_BASE64_ENCODED> with the base64-encoded value of the cluster Certificate Authority. See Kubernetes - Service Account Tokens.
    (2) Replace <KUBE_APISERVER_ENDPOINT> with the URL of the Kubernetes API server.
    (3) Replace <NAMESPACE> with the namespace where the cjoc ServiceAccount has been created.
    (4) Replace <CJOC_SERVICEACCOUNT_TOKEN> with the decoded value of the cjoc ServiceAccount token. See Kubernetes - Service Account Tokens.
    Another way to generate a kubeconfig file for the operations center service account is to use the view-serviceaccount-kubeconfig kubectl plugin and run kubectl view-serviceaccount-kubeconfig cjoc -n ci.
  3. Make sure it works before trying to use it in the following steps:

    KUBECONFIG=<PATH_TO_KUBECONFIG_FILE> kubectl version (1)
    (1) Replace <PATH_TO_KUBECONFIG_FILE> with the location of the kubeconfig file you created.
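
Beyond kubectl version, you can also check whether the service account is authorized in the namespace where managed controllers will run, for example whether it may create the StatefulSets that back them. A minimal sketch, assuming the kubeconfig was saved as /tmp/cjoc-kubeconfig.yaml and the controller namespace is my-team (both hypothetical values):

    # Prints "yes" once the chart has bound the required role to the cjoc service account
    KUBECONFIG=/tmp/cjoc-kubeconfig.yaml kubectl auth can-i create statefulsets -n my-team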

Adding a new Kubernetes cluster endpoint

  1. In operations center, browse to Manage Jenkins > Configure System.

    In the section Kubernetes Controller Provisioning, the list of endpoints is displayed.

    The first one corresponds to the current cluster and namespace where operations center is running.

  2. Click Add.

    Figure 1. New Cluster Endpoint
    • Set the API endpoint URL. You can find it in your cloud console, or alternatively in your kubeconfig file (see the example after these steps).

    • Pick a name to identify the cluster endpoint.

    • Set up credentials. To use a kubeconfig file, create a Secret file credential.

    • If needed, provide the server certificate, or disable the certificate check.

    • Pick a namespace. If left empty, it uses the name of the current namespace where operations center is deployed.

    • Set the Controller URL Pattern to the domain name configured to target the ingress controller on the Kubernetes cluster.

    • The Jenkins URL can be left empty, unless there is a specific link between clusters requiring access to operations center using a different name than the public one.

    • Select Use WebSocket to enable WebSockets for all managed controllers that use this endpoint. See Using WebSockets to connect controllers to operations center for details about WebSockets.

  3. Click Validate to make sure the connection to the remote Kubernetes cluster can be established and the credentials are valid.

  4. Click Save.
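
If you need the values for this form, the API server URL and the base64-encoded cluster certificate authority can be read from your local kubeconfig. This is a sketch that assumes kubectl currently points at the remote cluster:

    # API endpoint URL of the current cluster
    kubectl config view --minify --raw -o jsonpath='{.clusters[0].cluster.server}'

    # Base64-encoded certificate authority of the current cluster
    kubectl config view --minify --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'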

Adding a managed controller to a Kubernetes cluster

To add a new managed controller to a Kubernetes cluster:

  1. Select the cluster endpoint to use.

  2. Adjust the controller configuration to your needs.

  3. If you don’t want to use the default storage class from the cluster, set the storage class name through the dedicated field.

  4. If the Ingress Controller isn’t the default one, use the YAML field to set an alternate Ingress class, as shown below (an ingressClassName variant follows these steps).

    kind: "Ingress"
    metadata:
      annotations:
        kubernetes.io/ingress.class: 'nginx'
  5. Save and provision.
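
The kubernetes.io/ingress.class annotation shown in step 4 is deprecated in recent Kubernetes versions. On clusters that use the networking.k8s.io/v1 Ingress API, an equivalent sketch for the YAML field, assuming an IngressClass named nginx, could be:

    kind: "Ingress"
    spec:
      ingressClassName: "nginx"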

Deploying agents in a separate Kubernetes cluster from a managed controller

Follow these steps to provision Kubernetes agents on another cluster.

If you are performing the steps below in a Kubernetes Shared Cloud, the Kubernetes Shared Cloud can be shared with only one managed controller because the Kubernetes URL is injected using the DNS of the controller. In a multicluster environment, a Kubernetes Shared Cloud cannot be shared among multiple managed controllers.

To deploy agents in a separate Kubernetes cluster from a managed controller:

  1. Set up kubectl to point to the target (agent) cluster.

  2. Type the following command to install only serviceaccount/jenkins and the related role and role binding:

    helm install <helm deployment name> cloudbees/cloudbees-core --set OperationsCenter.Enabled=false
  3. Type the following command to install the view-serviceaccount-kubeconfig kubectl plugin. The Krew plugin manager is required to run this command:

    kubectl krew install view-serviceaccount-kubeconfig

  4. Type the following command to generate a kubeconfig file for the jenkins service account:

    kubectl view-serviceaccount-kubeconfig jenkins > /tmp/sa-jenkins.yaml

  5. On your managed controller, create Secret File credentials using the service account.

  6. In Manage Jenkins > Manage Nodes and Clouds, select Configure Clouds.

  7. Select Add a new cloud, and then select Kubernetes.

  8. Enter a Name of your choice, and then select Kubernetes Cloud details.

  9. In Credentials, add a Secret File credential type, and then upload the sa-jenkins.yaml file that you created in step 4.

  10. Select WebSocket.

  11. In Jenkins URL, enter the public (ingress) URL of the managed controller.

  12. Leave all other fields blank.

  13. Select Pod Templates > Add Pod Template.

  14. Enter a pod template name.

  15. Select Pod Template details.

  16. In Labels, enter the label of the pod. NOTE: The Labels field of the pod template, not its name, is what selects the template in a Pipeline, so it is important to complete the Labels field.

  17. Click Save.
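
Before uploading the kubeconfig to the controller, it can be useful to verify that the jenkins service account is allowed to create agent pods in the target cluster. A minimal check, assuming the file generated in step 4:

    # Prints "yes" if the service account can create agent pods in its namespace
    KUBECONFIG=/tmp/sa-jenkins.yaml kubectl auth can-i create pods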

TLS for remote controllers in a multicluster environment

The recommended approach to set up Ingress TLS for controllers in remote clusters is to add the secret to the namespace where the controller is deployed, and then add the TLS to each remote controller’s Ingress (through the YAML field).
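
As an illustration, the TLS secret can be created with kubectl and then referenced from the controller's YAML field. This is a sketch only; the hostname dev1-cd.example.org, the secret name controller-tls, the certificate files, and the my-team namespace are hypothetical values based on the earlier examples:

    kubectl create secret tls controller-tls --cert=tls.crt --key=tls.key -n my-team

The corresponding YAML field content for the controller could then look like this:

    kind: "Ingress"
    spec:
      tls:
      - hosts:
        - "dev1-cd.example.org"
        secretName: "controller-tls"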

Removing controllers from a cluster

If you remove a controller from operations center, the controller’s Persistent Volume Claim, which represents the JENKINS_HOME, is left behind.

To remove a controller from a cluster:

  1. Type the following command:

    kubectl delete pvc <pvc-name>
    The <pvc-name> is always jenkins-home-<master name>-0.
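
For example, for a controller named dev1 provisioned in the my-team namespace (hypothetical names), the command would be:

    kubectl delete pvc jenkins-home-dev1-0 -n my-team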

Troubleshooting multicluster and multicloud

  1. What should I do if my controller cannot connect back to the operations center?

    Use kubectl to retrieve the controller pod's logs, then inspect where it tries to connect and how that matches the configuration defined in operations center (example commands follow these questions).

  2. My controller starts and shows as connected in operations center. Why isn’t my controller browsable?

    This can be caused by either a DNS/Ingress misconfiguration or a URL misconfiguration.
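
The following kubectl commands can help with both questions. This is a sketch; replace my-team and dev1 with your actual namespace and controller name:

    # Inspect the controller pod's startup logs and events
    kubectl logs dev1-0 -n my-team
    kubectl describe pod dev1-0 -n my-team

    # Check that the Ingress for the controller exposes the expected hostname
    kubectl get ingress -n my-team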

In August 2020, the Jenkins project voted to replace the term master with controller. We have taken a pragmatic approach to cleaning these up, ensuring the least amount of downstream impact as possible. CloudBees is committed to ensuring a culture and environment of inclusiveness and acceptance - this includes ensuring the changes are not just cosmetic ones, but pervasive. As this change happens, please note that the term master has been replaced through the latest versions of the CloudBees documentation with controller (as in managed controller, client controller, team controller) except when still used in the UI or in code.