Provisioning agents in a separate Kubernetes cluster from a managed controller

Follow these steps to provision Kubernetes agents on a separate cluster from the managed controller.

Starting with CloudBees CI version 2.263.4.1, you can also use these steps to define a Kubernetes Shared Cloud. When you define the Shared Cloud, leave the Jenkins URL empty so that each controller that consumes the Shared Cloud can infer it from its own public URL.

To deploy agents in a separate Kubernetes cluster from the managed controller:

  1. Set up kubectl to point to the target agent cluster.
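    For example, if the target agent cluster is registered in your kubeconfig under a context named agent-cluster (a placeholder name used here for illustration), you can list the available contexts and switch to it:

      kubectl config get-contexts
      kubectl config use-context agent-cluster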

  2. Type the following command to install only serviceaccount/jenkins and the related role and role binding:

    helm install <helm deployment name> cloudbees/cloudbees-core --set OperationsCenter.Enabled=false

    The serviceaccount/jenkins service account has the Kubernetes RBAC permissions required to provision and manage pods in a namespace, which is what makes it possible to provision Kubernetes agents in that specific namespace.

    These permissions can be extended for additional requirements, such as creating and deleting persistent volume claims (PVCs) if you use the dynamic PVC feature of the Kubernetes plugin, as shown in the sketch below.
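
    For reference, those permissions are roughly equivalent to a namespaced Role such as the following. This is an illustrative sketch only, not the exact manifest created by the chart, and the names are placeholders; the commented rule shows the kind of extension needed for dynamic PVCs:

      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: jenkins-agents          # placeholder name
      rules:
        # Provision and manage agent pods in this namespace
        - apiGroups: [""]
          resources: ["pods"]
          verbs: ["create", "delete", "get", "list", "watch"]
        # Attach to agent containers and read their logs
        - apiGroups: [""]
          resources: ["pods/exec"]
          verbs: ["create"]
        - apiGroups: [""]
          resources: ["pods/log"]
          verbs: ["get"]
        # Uncomment if you use the dynamic PVC feature of the Kubernetes plugin:
        # - apiGroups: [""]
        #   resources: ["persistentvolumeclaims"]
        #   verbs: ["create", "delete", "get", "list", "watch"]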

  3. Type the following command:

    kubectl krew install view-serviceaccount-kubeconfig

    The Krew plugin manager is required to run this command.

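    If you want to confirm the plugin installed correctly before continuing, you can list the Krew-managed plugins:

      kubectl krew list
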
  4. Type the following command:

    kubectl view-serviceaccount-kubeconfig jenkins > /tmp/sa-jenkins.yaml

    • The kubectl view-serviceaccount-kubeconfig jenkins command is only valid for Kubernetes 1.21 and older.

    • For Kubernetes 1.22 and 1.23, kubectl view-serviceaccount-kubeconfig jenkins is only valid if the token controller is enabled. The token controller is enabled by default, but it can be disabled on these versions.

    • For Kubernetes 1.24 (and 1.22 or 1.23 if the token controller is disabled), you must generate the token manually by using this script:

      kubectl apply -f - <<EOF
      apiVersion: v1
      kind: Secret
      metadata:
        name: jenkins-sa-token
        annotations:
          kubernetes.io/service-account.name: jenkins
      type: kubernetes.io/service-account-token
      EOF
      token=$(kubectl get secret jenkins-sa-token --template={{.data.token}} | base64 --decode)
      echo "Generated token: $token"
      kubectl config view --raw > jenkins-sa
      context=$(kubectl config current-context)
      cluster=$(kubectl config --kubeconfig jenkins-sa view -o jsonpath="{.contexts[?(@.name == '$context')].context.cluster}")
      user=$(kubectl config --kubeconfig jenkins-sa view -o jsonpath="{.contexts[?(@.context.cluster == '$cluster')].context.user}")
      kubectl config --kubeconfig jenkins-sa delete-user $user
      kubectl config --kubeconfig jenkins-sa set-credentials $user --token=$token
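
    Before you upload the resulting file to the controller, you can optionally check that the kubeconfig can reach the agent cluster. This check is not part of the procedure above; use jenkins-sa instead of /tmp/sa-jenkins.yaml if you generated the token manually:

      kubectl --kubeconfig /tmp/sa-jenkins.yaml get pods
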
  5. To use the service account on your managed controller, navigate to Manage Jenkins > Manage Nodes and Clouds, and then select Configure Clouds.

  6. Select Add a new cloud, and then select Kubernetes.

  7. Enter a Name of your choice, and then select Kubernetes Cloud details.

  8. In Credentials, add a Secret File credential type, and then upload the sa-jenkins.yaml file that you created in step 4.

  9. Select WebSocket.

  10. In Jenkins URL, enter the public ingress URL of the managed controller.

  11. Leave all other fields blank.

  12. Select Pod Templates > Add Pod Template.

  13. Enter a pod template name.

  14. Select Pod Template details.

  15. In Labels, enter the label of the pod.

    The Labels field, not the pod template name, is what a Pipeline uses to select this pod template. Therefore, make sure you enter the pod label in the Labels field.

  16. Select Save.
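
If you manage your controller with Configuration as Code (CasC), the cloud configured in steps 5 through 16 can also be described declaratively. The following is a minimal sketch only; the cloud name, credential ID, controller URL, and pod template values are placeholders, and the exact set of attributes depends on your Kubernetes plugin version:

    jenkins:
      clouds:
        - kubernetes:
            name: "agent-cluster"                    # name chosen in step 7 (placeholder)
            credentialsId: "sa-jenkins-kubeconfig"   # Secret File credential from step 8 (placeholder)
            webSocket: true                          # step 9
            jenkinsUrl: "https://ci.example.com/my-controller/"   # public ingress URL from step 10 (placeholder)
            templates:
              - name: "linux-agent"                  # pod template name from step 13 (placeholder)
                label: "linux-agent"                 # label from step 15 (placeholder)

In a Pipeline, the pod template is then selected by its label, for example with agent { label 'linux-agent' } in a Declarative Pipeline.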