Provision and manage CloudBees CI across multiple Kubernetes clusters and cloud providers

CloudBees CI on modern cloud platforms can be provisioned and managed across multiple Kubernetes clusters and cloud providers. You can run Operations Center on a main Kubernetes cluster that manages masters running in other Kubernetes clusters, and these clusters can be hosted on any of the major cloud providers. A multicluster configuration lets you provision masters in different clusters, whether they are hosted by the same cloud provider or by different ones.

You can provision, update, and manage CloudBees CI masters on multiple Kubernetes clusters from a common Operations Center, which allows users to:

  • Separate divisions or business units that need to run workloads in individual clusters.

  • Leverage Helm charts and Configuration as Code to configure, provision, and update using version-controlled configuration files.

Common use cases for multicluster

  • Masters in multiple Kubernetes clusters

    In this use case, the Operations Center runs in one Kubernetes cluster. Some masters run in the same Kubernetes cluster as the Operations Center. Any number of additional masters run in a separate Kubernetes cluster.

    Each instance, both Operations Center and each master, is tied to its own Kubernetes cluster and cannot move after it is provisioned.

  • Agents in separate Kubernetes clusters

    In this use case, agents are deployed in a separate Kubernetes cluster from the Kubernetes cluster where a master is set up.

    The master can be configured to schedule agents across several Kubernetes clouds. Note that no control mechanism is available to spread the load across different Kubernetes clusters, because each pod template is tied to a single Kubernetes cloud (see the configuration sketch after this list).
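
As an illustration of the second use case, the sketch below uses the Kubernetes plugin's Configuration as Code support to define two clouds, each pointing at a different cluster and each with its own pod template whose label routes builds to that cluster. All names, URLs, and credential IDs are placeholders, and available fields can vary by plugin version, so treat this as a starting point rather than a definitive configuration.

    jenkins:
      clouds:
        - kubernetes:
            name: "cluster-a"
            serverUrl: "https://api.cluster-a.example.com"   # Kubernetes API of cluster A
            credentialsId: "cluster-a-sa-kubeconfig"          # Secret file credential (kubeconfig)
            jenkinsUrl: "https://master.example.com"          # public URL of the master
            webSocket: true
            templates:
              - name: "maven-a"
                label: "maven-cluster-a"                      # builds requesting this label run in cluster A
        - kubernetes:
            name: "cluster-b"
            serverUrl: "https://api.cluster-b.example.com"
            credentialsId: "cluster-b-sa-kubeconfig"
            jenkinsUrl: "https://master.example.com"
            webSocket: true
            templates:
              - name: "maven-b"
                label: "maven-cluster-b"                      # builds requesting this label run in cluster B

Because each pod template belongs to exactly one cloud, a build selects its cluster simply by requesting the corresponding label.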

Networking considerations for multicluster

Keep in mind the following information about networking for provisioned masters and agents:

  • When provisioning a master on a remote Kubernetes cluster, the Kubernetes API must be accessible from the Operations Center.

    Upon startup, a master connects back to the Operations Center using inbound TCP. If the master is located in a different Kubernetes cluster, the Operations Center must provide its external hostname so that the master can connect back, and the Operations Center inbound TCP port must be exposed, as documented for connecting external masters. A connectivity check is sketched after this list.

  • When provisioning an agent on a remote Kubernetes cluster, the Kubernetes API must be accessible from the Managed Master.

    Upon startup, an agent connects back to its master. If the agent is in a different Kubernetes cluster, the master must provide its external hostname so that the agent can connect back, and the master inbound TCP port must be exposed.

    Note that WebSockets are supported for agents, but not for masters.

  • In order for masters to be accessible, each Kubernetes cluster must have its own Ingress controller installed, and the configured endpoint for each master should be consistent with the DNS pointing to the Ingress controller of that Kubernetes cluster.

  • Bandwidth, latency, and reliability can be issues when running components in different regions or clouds.

    • The Operations Center <> master connection is required for features such as shared agents, licensing (every 72 hours), and SSO. An unstable connection can result in the inability to sign in, unavailable shared agents, or an invalid license if the disconnection lasts too long.

    • Master <> agent connection is required to transfer commands and logs. An unstable connection can result in failed builds, unexpected delays, or timeouts while running builds.
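
A quick way to sanity-check the master-to-Operations Center path before provisioning is to probe the Operations Center from a throwaway pod in the remote cluster, along the lines sketched below. The hostname cjoc.example.com and inbound TCP port 50000 are placeholders; use the external hostname and inbound port that your Operations Center actually exposes.

    # HTTPS reachability of the Operations Center external URL (with kubectl pointed
    # at the remote cluster; the curl image runs curl with the arguments given here).
    kubectl run oc-http-check --rm -it --image=curlimages/curl --restart=Never -- \
      -sv --max-time 5 https://cjoc.example.com/

    # Plain TCP reachability of the Operations Center inbound port.
    kubectl run oc-tcp-check --rm -it --image=curlimages/curl --restart=Never -- \
      -v --max-time 5 telnet://cjoc.example.com:50000

For the TCP check, a "Connected to" line in the output indicates the port is reachable, even though the command then times out waiting for input.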

About provisioning Managed Masters in multiple Kubernetes clusters

To provision Managed Masters in multiple Kubernetes clusters, complete the following procedures:

Preparing a namespace to install a Managed Master

  1. To prepare a namespace to install a Managed Master, use the following example values.yaml file:

OperationsCenter:
    Enabled: false
Master:
    Enabled: true
    OperationsCenterNamespace: ci # <1>
Agents:
    Enabled: true
<1> Replace ci with the actual namespace where the Operations Center is installed. If the Operations Center is located in another cluster, it can be set to the same value as the current namespace to create an Operations Center service account that can be used for authentication.

  2. Use the following example to perform the installation:

export NAMESPACE=my-team # <1>
kubectl create namespace $NAMESPACE || true
helm install ci-masters-$NAMESPACE cloudbees/cloudbees-core --namespace $NAMESPACE -f values.yaml
<1> Replace my-team with the actual namespace where the Managed Masters will be created.

One way of generating a kubeconfig file for the Operations Center service account is to use the following kubectl plugin: view-serviceaccount-kubeconfig.

Make sure it works before trying to use it in the following steps.
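
For example, with the Krew plugin manager installed and the current kubectl context pointed at the namespace you just prepared, something along these lines exports a kubeconfig for that service account. The service account name (cjoc here) and the output path are assumptions; list the service accounts in the namespace to confirm what the chart actually created.

    # Requires the Krew plugin manager.
    kubectl krew install view-serviceaccount-kubeconfig

    # Point the current context at the namespace prepared above and confirm which
    # service accounts the chart created there.
    kubectl config set-context --current --namespace $NAMESPACE
    kubectl get serviceaccounts

    # Export a kubeconfig for the Operations Center service account (name assumed).
    kubectl view-serviceaccount-kubeconfig cjoc > /tmp/oc-kubeconfig.yaml

The resulting file can then be uploaded as a Secret file credential when adding the cluster endpoint in the next procedure.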

Adding a new Kubernetes cluster endpoint

  1. In Operations Center, browse to Manage Jenkins > Configure System.

    In the section Kubernetes Master Provisioning, the list of endpoints is displayed.

    The first one corresponds to the current cluster and namespace in which Operations Center is running.

  2. Click Add.

    Figure 1. New Cluster Endpoint
    • Set the API endpoint URL. You’ll find it in your cloud console, or alternatively in your kubeconfig file (see the sketch after these steps).

    • Pick a name to identify the cluster endpoint.

    • Set up credentials. To use a kubeconfig file, create a Secret file credential containing the kubeconfig.

    • If needed, provide the server certificate. Or, disable the certificate check.

    • Pick a namespace. If left empty, it uses the name of the current namespace where Operations Center is deployed.

    • Set the Master URL Pattern to the domain name configured to target the ingress controller on the Kubernetes cluster.

    • The Jenkins URL can be left empty, unless there is a specific link between clusters requiring access to Operations Center using a different name than the public one.

    • Select Use WebSocket to enable WebSockets for all Managed Masters that use this endpoint. See Using WebSockets to connect masters to Operations Center for details about WebSockets.

  3. Click Validate to make sure the connection to the remote Kubernetes cluster can be established and the credentials are valid.

  4. Click Save.
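
If you prefer to read these values from a kubeconfig rather than the cloud console, the following commands print the API endpoint URL and, if your kubeconfig embeds it, the cluster CA certificate for the currently selected context (assuming kubectl currently points at the remote cluster):

    # API endpoint URL of the current cluster.
    kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

    # Cluster CA certificate, decoded from the kubeconfig.
    kubectl config view --raw --minify -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d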

Adding a Managed Master to a Kubernetes cluster

To add a new Managed Master to a Kubernetes cluster:

  1. Select the cluster endpoint to use.

  2. Adjust the master configuration to your needs.

  3. If you don’t want to use the default storage class from the cluster, set the storage class name through the dedicated field. (A quick way to list the classes available on the target cluster is sketched after these steps.)

  4. If the Ingress Controller isn’t the default one, use the YAML field to set an alternate Ingress class.

    kind: "Ingress"
    metadata:
      annotations:
        kubernetes.io/ingress.class: 'nginx'
  5. Save and provision.
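
If you are not sure which storage classes or Ingress classes exist on the target cluster, you can list them with kubectl (context pointed at that cluster). Note that the IngressClass resource only exists on Kubernetes 1.18 and later; older clusters rely on the annotation shown above.

    kubectl get storageclass
    kubectl get ingressclass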

Deploying agents in a separate Kubernetes cluster from a Managed Master

Follow these steps to provision Kubernetes agents on another cluster.

If you are performing the steps below in a Kubernetes Shared Cloud, the Kubernetes Shared Cloud can be shared with only one Managed Master because the Kubernetes URL is injected using the DNS of the master. In a multicluster environment, a Kubernetes Shared Cloud cannot be shared among multiple Managed Masters.

To deploy agents in a separate Kubernetes cluster from a Managed Master:

  1. Set up kubectl to point to the target (agent) cluster.

  2. Type the following command to install only serviceaccount/jenkins and the related role and role binding:

    helm install <helm deployment name> cloudbees/cloudbees-core --set OperationsCenter.Enabled=false
  3. Type the following command:

    The Krew plugin manager is required to run this command.
    kubectl krew install view-serviceaccount-kubeconfig
  4. Type the following command:

    kubectl view-serviceaccount-kubeconfig jenkins > /tmp/sa-jenkins.yaml
  5. On your Managed Master, create Secret File credentials using the service account.

  6. In Manage Jenkins > Manage Nodes and Clouds, select Configure Clouds.

  7. Select Add a new cloud, and then select Kubernetes.

  8. Enter a Name of your choice, and then select Kubernetes Cloud details.

  9. In Credentials, add a Secret File credential type, and then upload the sa-jenkins.yaml file that you created in step 4.

  10. Select WebSocket.

  11. In Jenkins URL, enter the public (ingress) URL of the Managed Master.

  12. Leave all other fields blank.

  13. Select Pod Templates > Add Pod Template.

  14. Enter a pod template name.

  15. Select Pod Template details.

  16. In Labels, enter the label for the pod template. NOTE: A Pipeline selects the pod template by its label, not by its name, so it is important to complete the Labels field.

  17. Click Save.
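
To verify the setup, run a build that requests the pod template's label and watch the agent cluster for the agent pod to appear. The namespace is the one into which the helm release from step 2 installed the jenkins service account.

    # With kubectl pointed at the agent cluster:
    kubectl get pods --namespace <agent namespace> --watch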

TLS for remote masters in a multicluster environment

The recommended approach to set up Ingress TLS for masters in remote clusters is to add the secret to the namespace where the master is deployed, and then add the TLS to each remote master’s Ingress (through the YAML field).
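
For example, a minimal sketch, assuming a master reachable at master-1.example.com that is deployed in the my-team namespace of the remote cluster, with certificate files tls.crt and tls.key (all of these names are hypothetical). First create the TLS secret in the master's namespace:

    kubectl create secret tls master-1-tls --cert=tls.crt --key=tls.key --namespace my-team

Then add a tls section to the master's Ingress through the YAML field of its configuration:

    kind: "Ingress"
    spec:
      tls:
        - hosts:
            - master-1.example.com
          secretName: master-1-tls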

Removing masters from a cluster

If you remove a master from Operations Center, the master’s Persistent Volume Claim, which represents the JENKINS_HOME, is left behind.

To remove a master from a cluster:

  1. Type the following command:

    kubectl delete pvc <pvc-name>
    The <pvc-name> is always jenkins-home-<master name>-0.
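
For example, for a master named my-master that was provisioned into the my-team namespace (both names hypothetical):

    kubectl delete pvc jenkins-home-my-master-0 --namespace my-team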

Troubleshooting multicluster and multicloud

  1. What should I do if my master cannot connect back to the Operations Center?

    Use kubectl to retrieve the master pod’s logs. Then, inspect where the master tries to connect and how that matches the configuration defined in Operations Center. (A few useful commands are sketched after this list.)

  2. My master starts and shows as connected in Operations Center. Why isn’t my master browsable?

    This can be caused by either a DNS/Ingress misconfiguration or a URL misconfiguration.
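
For both questions, a few kubectl checks against the remote cluster usually narrow the problem down. The examples below assume a master named my-master in the my-team namespace; a Managed Master runs as pod <master name>-0, consistent with the PVC naming described above.

    # Inspect the master pod's logs and look for the Operations Center URL it tries to reach.
    kubectl logs my-master-0 --namespace my-team

    # Check the Ingress created for the master: its host must match the DNS record
    # and the Master URL Pattern configured for the cluster endpoint.
    kubectl get ingress --namespace my-team

    # Recent events often surface scheduling, volume, or Ingress problems.
    kubectl get events --namespace my-team --sort-by=.lastTimestamp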