Deploying CloudBees Core masters on multiple Kubernetes clusters

CloudBees Core allows you to run a ‘master of masters’ on a main Kubernetes cluster that manages masters across other Kubernetes clusters. These clusters can be hosted on any of the major cloud providers, and Operations Center can be configured to provision masters in different clusters within the same cloud provider or across providers.

You can provision, update, and manage CloudBees Core masters on multiple Kubernetes clusters from a common Operations Center, which allows you to:

  • Separate divisions or business units that need to run workloads in individual clusters.

  • Leverage Helm charts and Configuration as Code to configure, provision, and update masters using version-controlled configuration files.

If you are using OpenShift, the ‘master of masters’ must be located on the OpenShift cluster itself, because of the security settings that OpenShift requires for the installation to remain supportable.

Configuring access to Operations Center from outside

You must ensure that the Operations Center TCP endpoint can be reached from outside the Kubernetes cluster it is running in.

Refer to the installation guide that matches your Kubernetes platform for the network configuration details.
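
As a quick connectivity check, you can probe the TCP endpoint from a machine outside the cluster. This is a minimal sketch: the host name and port below are placeholders, assuming the default inbound TCP port 50000 — substitute the external DNS name and port you actually configured.

# Hypothetical host and port: replace with your Operations Center
# external address and its configured TCP port (50000 by default).
nc -vz cjoc.example.com 50000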

Prerequisites for Kubernetes clusters

In order for a Kubernetes cluster to host Managed Masters, it must meet the CloudBees Core prerequisites.

In addition to these prerequisites, you must create some Kubernetes objects that will allow you to schedule a Managed Master.

The following procedure relies on the CloudBees Core Helm chart. It requires the Helm executable to be installed locally, but doesn’t require Tiller to be installed on the Kubernetes cluster.
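
Before running the commands below, you can verify your local Helm installation and make the CloudBees chart repository available. This is a sketch: the repository URL shown is an assumption, so confirm it against the current CloudBees documentation.

# Verify the local Helm executable (Helm 3 needs no Tiller).
helm version
# Assumed public CloudBees chart repository URL; confirm it in the
# CloudBees documentation before relying on it.
helm repo add cloudbees https://charts.cloudbees.com/public/cloudbees
helm repo update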

You must set your local Kubernetes context to the cluster you want to work with.
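
For example, with kubectl (the context name is a placeholder):

# List the contexts available in your kubeconfig, then switch to the
# one that targets the cluster you want to set up.
kubectl config get-contexts
kubectl config use-context <target-cluster-context>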

Adding a specific Namespace using Helm

The process for creating a new namespace using Helm is:

echo "Enter namespace where Operations Center is deployed:"
read OC_NAMESPACE
echo "Enter namespace to set up:"
read NAMESPACE
cat << EOF > values.yaml
OperationsCenter:
    Enabled: false
Master:
    Enabled: true
    OperationsCenterNamespace: $OC_NAMESPACE
Agents:
    Enabled: true
EOF
kubectl create namespace $NAMESPACE || true
helm fetch cloudbees/cloudbees-core --untar
helm template cloudbees-core-masters-$NAMESPACE cloudbees-core --namespace $NAMESPACE -f values.yaml | kubectl apply -n $NAMESPACE -f -
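
Once the script completes, you can check that the expected objects were created in the target namespace (assuming the shell variables from the script above are still set):

# Inspect the objects created by the chart in the new namespace.
kubectl get serviceaccounts,roles,rolebindings -n "$NAMESPACE"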

Adding a new Kubernetes cluster endpoint

  1. In Operations Center, browse to Manage Jenkins > Configure System.

    In the section Kubernetes Master Provisioning, the list of endpoints is displayed.

    The first one corresponds to the current cluster and namespace in which Operations Center is running.

  2. Click Add.

    Figure 1. New Cluster Endpoint
    • Set the API endpoint URL. You can find it in your cloud console or in your kubeconfig file (see the commands after this procedure).

    • Pick a name to identify the cluster endpoint.

    • Set up credentials.

    • If needed, provide the server certificate, or disable the certificate check.

    • Pick a namespace. If left empty, it defaults to the namespace where Operations Center is deployed.

    • Set the Master URL Pattern to the domain name configured to target the ingress controller on the Kubernetes cluster.

    • The Jenkins URL can be left empty, unless there is a specific link between clusters requiring access to Operations Center using a different name than the public one.

  3. Click Validate to make sure the connection to the remote Kubernetes cluster can be established and the credentials are valid. If an error is reported, correct the relevant fields.

  4. Click Save.
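
If you need to look up the API endpoint URL and server certificate for the target cluster, kubectl can extract both from your kubeconfig. This is a sketch assuming your current context already points at the target cluster:

# Print the API server URL of the cluster in the current context.
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

# Print the cluster CA certificate (PEM) for the server certificate field.
kubectl config view --minify --raw \
    -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d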

Creating a Managed Master

To create a new Managed Master:

  1. Select the cluster endpoint to use.

  2. Tweak the master configuration to your needs.

  3. If you don’t want to use the default storage class from the cluster, set the storage class name through the dedicated field.

  4. If the ingress controller isn’t the default one, use the YAML field to set an alternate ingress class:

    kind: "Ingress"
    metadata:
      annotations:
        kubernetes.io/ingress.class: 'nginx'
  5. Save and provision.

Creating a Kubernetes cloud from a remote cluster

Follow these steps to provision Kubernetes agents on another cluster.

To create a Kubernetes cloud from a remote cluster:

  1. Set up kubectl to point to the target (agent) cluster.

  2. Type the following command to install only serviceaccount/jenkins and the related role and role binding:

    helm install <helm deployment name> cloudbees/cloudbees-core --set OperationsCenter.Enabled=false
  3. Type the following command (the Krew plugin manager is required to run it):

    kubectl krew install view-serviceaccount-kubeconfig
  4. Type the following command:

    kubectl view-serviceaccount-kubeconfig jenkins > /tmp/sa-jenkins.yaml
  5. On your Managed Master, create Secret File credentials using the service account.

  6. In Manage Jenkins > Manage Nodes and Clouds, select Configure Clouds.

  7. Select Add a new cloud, and then select Kubernetes.

  8. Enter a Name of your choice, and then select Kubernetes Cloud details.

  9. In Credentials, add a Secret File credential type, and then upload the sa-jenkins.yaml file that you created in step 4.

  10. Select WebSocket.

  11. In Jenkins URL, enter the public (ingress) URL of the Managed Master.

  12. Leave all other fields blank.

  13. Select Pod Templates > Add Pod Template.

  14. Enter a pod template name.

  15. Select Pod Template details.

  16. In Labels, enter the label of the pod template. NOTE: The Labels field, not the pod template’s name, is what a pipeline uses to select the template, so it’s important to complete the Labels field (see the example after this procedure).

  17. Click Save.
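
To illustrate the note in step 16, here is a minimal Declarative Pipeline sketch that selects the pod template by its label; 'my-remote-pod' is a hypothetical value entered in the Labels field.

// 'my-remote-pod' is a hypothetical label matching the pod template's
// Labels field, not its name.
pipeline {
    agent { label 'my-remote-pod' }
    stages {
        stage('Build') {
            steps {
                // Runs inside a pod scheduled on the remote cluster.
                sh 'echo Hello from the remote agent cluster'
            }
        }
    }
}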