Provisioning managed controllers in multiple Kubernetes clusters


To provision managed controllers in multiple Kubernetes clusters, you must do the following:

Preparing a namespace to install a managed controller

  1. To prepare a namespace to install a managed controller, use the following example values.yaml file:

    OperationsCenter:
      Enabled: false
    Master:
      Enabled: true
      OperationsCenterNamespace: ci (1)
    Agents:
      Enabled: true
    1 Replace ci with the actual namespace where the operations center is installed. If the operations center is located in another cluster, this can be set to the same value as the current namespace; in that case, an operations center service account must be created for authentication (see Adding a ServiceAccount for operations center below).
  2. Use the following example to perform the installation (a quick verification check is sketched after this list).

    export NAMESPACE=my-team (1)
    kubectl create namespace $NAMESPACE || true
    helm install ci-masters-$NAMESPACE cloudbees/cloudbees-core --namespace $NAMESPACE -f values.yaml
    1 Replace my-team with the actual namespace where the managed controllers will be created.
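
After the chart is installed, it can help to confirm that the release exists and that pods are starting in the target namespace. The commands below are a minimal check, assuming the release name ci-masters-$NAMESPACE from the example above:

    helm status ci-masters-$NAMESPACE --namespace $NAMESPACE
    kubectl get pods --namespace $NAMESPACE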

Controller provisioning configuration

To provision controllers in their own namespaces, each controller must use a specific sub-domain. For example, if the operations center domain is cd.example.org and the URL is https://cd.example.org/cjoc/, a controller named dev1 should use the sub-domain dev1.cd.example.org or dev1-cd.example.org. The latter is often preferable when using a wildcard certificate for the domain example.org, because a single-level wildcard such as *.example.org covers dev1-cd.example.org but not the nested sub-domain dev1.cd.example.org.
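
As an illustration of the wildcard certificate approach, such a certificate could be stored as a TLS secret for the ingress controller to serve. This is only a sketch; the secret name and certificate file names below are hypothetical, and a certificate for *.example.org is assumed to exist already:

    kubectl create secret tls wildcard-example-org \
      --cert=wildcard-example-org.crt \
      --key=wildcard-example-org.key \
      -n my-team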

To configure each controller to use a specific sub-domain, set the 'Controller URL Pattern' in the main Jenkins configuration page 'Manage Jenkins → Configure System', under the 'Kubernetes Controller Provisioning' advanced options. For example, if the operations center domain is cd.example.org, the 'Controller URL Pattern' would be https://*-cd.example.org/*/.

Provision controllers

The namespace for the controller resources can be configured as the default namespace for all managed controllers in the main operations center configuration screen with the 'namespace' parameter.

The namespace can also be set for a specific managed controller in the controller configuration screen with the 'namespace' parameter.

Leave the namespace value empty to use the value defined by the Kubernetes endpoint.

Adding a ServiceAccount for operations center

  1. Create a ServiceAccount named cjoc in the namespace provided as .Master.OperationsCenterNamespace.

    kubectl create serviceaccount cjoc -n ci (1)
    1 Replace ci with the actual value used for .Master.OperationsCenterNamespace.
  2. Create a kubeconfig for the operations center service account:

    apiVersion: v1
    kind: Config
    clusters:
    - name: kubernetes
      cluster:
        certificate-authority-data: <CLUSTER_CA_BASE64_ENCODED> (1)
        server: <KUBE_APISERVER_ENDPOINT> (2)
    contexts:
    - name: cjoc
      context:
        cluster: kubernetes
        namespace: <CJOC_SERVICEACCOUNT_NAMESPACE> (3)
        user: cjoc
    current-context: cjoc
    users:
    - name: cjoc
      user:
        token: <CJOC_SERVICEACCOUNT_TOKEN> (4)
    1 Replace <CLUSTER_CA_BASE64_ENCODED> with the base64 encoded value of the Cluster Certificate Authority, see Kubernetes - Service Account Tokens.
    2 Replace <KUBE_APISERVER_ENDPOINT> with the URL of the Kubernetes API Server.
    3 Replace <CJOC_SERVICEACCOUNT_NAMESPACE> with the namespace where the cjoc ServiceAccount has been created.
    4 Replace <CJOC_SERVICEACCOUNT_TOKEN> with the decoded value of the cjoc ServiceAccount token. Refer to Kubernetes - Service Account Tokens. On Kubernetes 1.24 and later, a token can also be generated explicitly, as sketched after this procedure.
    Another way of generating a kubeconfig file for the operations center service account is to use the view-serviceaccount-kubeconfig kubectl plugin and run the command kubectl view-serviceaccount-kubeconfig cjoc -n ci.
  3. Make sure it works before trying to use it in the following steps:

    KUBECONFIG=<PATH_TO_KUBECONFIG_FILE> kubectl version (1)
    1 Replace <PATH_TO_KUBECONFIG_FILE> with the location of the kubeconfig file you created.
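
On clusters running Kubernetes 1.24 or later, a token Secret is no longer created automatically for a ServiceAccount, so the token referenced in the kubeconfig above may need to be requested explicitly. The following is a minimal sketch; the duration shown is an arbitrary example and the maximum allowed lifetime depends on the API server configuration:

    # Requires kubectl against a Kubernetes 1.24+ cluster; prints a bearer token for cjoc
    kubectl create token cjoc -n ci --duration=8760h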

Adding a new Kubernetes cluster endpoint

  1. In the operations center, browse to Manage Jenkins → Configure System.

    In the section Kubernetes Controller Provisioning, the list of endpoints is displayed.

    The first one corresponds to the current cluster and namespace in which the operations center is running.

  2. Select Add.

    Figure 1. New Cluster Endpoint
    • Set the API endpoint URL. You can find it in your Cloud console, or alternatively in your kubeconfig file (a kubectl sketch for reading it is shown after this procedure).

    • Pick a name to identify the cluster endpoint.

    • Set up credentials. To use a kubeconfig file, create a credential of type Secret file.

    • If needed, provide the server certificate. Or, disable the certificate check.

    • Pick a namespace. If left empty, it uses the name of the current namespace where the operations center is deployed.

    • Set the Controller URL Pattern to the domain name configured to target the ingress controller on the Kubernetes cluster.

    • The Jenkins URL can be left empty, unless there is a specific link between clusters requiring access to the operations center using a different name than the public one.

    • Select Use WebSocket to enable WebSockets for all managed controllers that use this endpoint. See Using WebSockets to connect controllers to operations center for details about WebSockets.

  3. Select Validate to make sure the connection to the remote Kubernetes cluster can be established and the credentials are valid.

  4. Select Save.
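
As noted in the endpoint step above, the API endpoint URL can also be read directly from a kubeconfig file. A minimal sketch, assuming kubectl is pointed at that kubeconfig as its current context:

    # Print the API server URL of the current kubeconfig context
    kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'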

Adding a managed controller to a Kubernetes cluster

To add a new managed controller to a Kubernetes cluster:

  1. Select the cluster endpoint to use.

  2. Adjust the controller configuration to your needs.

  3. If you do not want to use the default storage class from the cluster, set the storage class name through the dedicated field (available classes can be listed as sketched after this list).

  4. If the Ingress Controller is not the default one, use the YAML field to set an alternate Ingress class (a note for newer clusters follows after this list).

    kind: "Ingress" metadata: annotations: kubernetes.io/ingress.class: 'nginx'
  5. Save and provision.
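
For step 3, the storage classes available in the target cluster can be listed with kubectl to choose a name for the dedicated field; the class marked (default) is the one used when the field is left empty:

    kubectl get storageclass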
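
For step 4, note that newer Kubernetes versions deprecate the kubernetes.io/ingress.class annotation in favor of the IngressClass resource and the ingressClassName field. Whether your CloudBees CI version applies ingressClassName from this YAML field is an assumption to verify; the equivalent snippet would look like this sketch:

    kind: "Ingress"
    spec:
      ingressClassName: "nginx"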

In August 2020, the Jenkins project voted to replace the term master with controller. We have taken a pragmatic approach to cleaning these up, ensuring as little downstream impact as possible. CloudBees is committed to ensuring a culture and environment of inclusiveness and acceptance - this includes ensuring the changes are not just cosmetic ones, but pervasive. As this change happens, please note that the term master has been replaced through the latest versions of the CloudBees documentation with controller (as in managed controller, client controller, team controller) except when still used in the UI or in code.