Managed controllers in specific Kubernetes namespaces

By default, managed controllers are created in the same namespace as the operations center instance.

If you want to create a managed controller in a different namespace, you need to pre-populate the namespace with the appropriate resources.

Preparing a namespace to install a managed controller

  1. To prepare a namespace to install a managed controller, use the following example values.yaml file:

    OperationsCenter:
        Enabled: false
    Master:
        Enabled: true
        OperationsCenterNamespace: ci (1)
    Agents:
        Enabled: true
    (1) Replace ci with the actual namespace where the operations center is installed. If the operations center is located in another cluster, set this to the same value as the current namespace; in that case, an operations center service account must be created for authentication.
  2. Use the following example to perform the installation:

    export NAMESPACE=my-team (1)
    kubectl create namespace $NAMESPACE || true
    helm install ci-masters-$NAMESPACE cloudbees/cloudbees-core --namespace $NAMESPACE -f values.yaml
    (1) Replace my-team with the actual namespace where the managed controllers will be created.
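
After the installation completes, you can verify that the namespace now contains the expected resources. This is a minimal check; the exact resource names depend on the Helm release name and chart version:

kubectl get serviceaccounts,roles,rolebindings -n $NAMESPACE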

Controller provisioning configuration

To provision controllers in their own projects, each controller must use a specific sub-domain. For example, if the operations center domain is cd.example.org and the URL is https://cd.example.org/cjoc/, a controller dev1 should use the sub-domain dev1.cd.example.org or dev1-cd.example.org. It is often preferable to use the latter when using a wildcard certificate for the domain example.org.

To configure each controller to use a specific sub-domain, set the 'controller URL Pattern' in the main Jenkins configuration page 'Manage Jenkins → Configure System' under the 'Kubernetes controller Provisioning' advanced options. For example, if the operations center domain is cd.example.org, the 'controller URL Pattern' would be https://*-cd.example.org/*/.
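
As an illustration, assuming each wildcard in the pattern is substituted with the controller name, a controller named dev1 under the pattern https://*-cd.example.org/*/ would be reachable at:

https://dev1-cd.example.org/dev1/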

Provision controllers

The project for the controller resources can be configured as the default project for all managed controllers in the main operations center configuration screen with the 'namespace' parameter.

The project can also be set for a specific managed controller in the controller configuration screen with the 'namespace' parameter.

Leave the namespace value empty to use the value defined by the Kubernetes endpoint.

Managed controllers in specific OpenShift projects

By default, managed controllers are created in the same project that operations center is running in.

To create a managed controller in a specific OpenShift project, the project must be pre-created with the proper resources.

Those resources are:

  • The 'jenkins' ServiceAccount that will be used by the managed controller(s) to provision Jenkins agents.

  • The Role and RoleBinding of the 'jenkins' ServiceAccount.

  • The Role and RoleBinding of the operations center ServiceAccount, which allow operations center to manage the controller resources.

Red Hat recommends that OpenShift production clusters use the ovs-multitenant network plugin. This plugin prevents namespaces from reaching each other's services without going through a route exposed on the router.

If ovs-multitenant is enabled, the project running operations center must be a global project in order to run managed controllers in other projects. Use the oc adm command below to make the project global; replace cloudbees with the name of your project.

oc adm pod-network make-projects-global cloudbees
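
To verify the change, you can list the network namespaces; with the ovs-multitenant plugin, a global project is assigned network ID 0. A minimal check, assuming OpenShift 3.x tooling:

oc get netnamespaces | grep cloudbees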

Here is the definition of the 'jenkins' service account and associated Role and RoleBinding:

The RoleBinding namespace '<PROJECT-MASTER-X>' should be the newly created project name.
apiVersion: v1
kind: List
items:
 -
  kind: ServiceAccount
  apiVersion: v1
  metadata:
    name: jenkins

 -
  kind: Role
  apiVersion: v1
  metadata:
    name: pods-all
  rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get","list","watch"]

 -
  kind: RoleBinding
  apiVersion: v1
  metadata:
    name: jenkins
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: pods-all
    # The new project name
    namespace: <PROJECT-MASTER-X>
  subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: <PROJECT-MASTER-X>
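
Assuming the manifest above is saved as jenkins-rbac.yaml (a hypothetical file name), apply it to the newly created project:

oc apply -f jenkins-rbac.yaml -n <PROJECT-MASTER-X>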

To create a managed controller in a specific OpenShift project, operations center must have the Role privileges to do so.

The RoleBinding namespace '<PROJECT-MASTER-X>' should be the newly created project name.

The RoleBinding must specify the namespace in which the cjoc ServiceAccount is defined (in the following example, cje).

apiVersion: v1
kind: List
items:
 -
  kind: Role
  apiVersion: v1
  metadata:
    name: master-management
  rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get","list","watch"]
  - apiGroups: ["apps"]
    resources: ["statefulsets"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: ["route.openshift.io",""]
    resources: ["routes"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: ["route.openshift.io"]
    resources: ["routes/custom-host"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["list"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["get","list","watch"]

 -
  kind: RoleBinding
  apiVersion: v1
  metadata:
    name: cjoc
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: master-management
    namespace: <PROJECT-MASTER-X>
  subjects:
  - kind: ServiceAccount
    name: cjoc
    # cjoc service account project name
    namespace: cje
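
This manifest can likewise be applied to the controller project (again assuming a hypothetical file name, cjoc-rbac.yaml):

oc apply -f cjoc-rbac.yaml -n <PROJECT-MASTER-X>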

Optionally, you can give operations center the privilege to list namespaces so that users can select the project/namespace instead of typing it in. To accomplish this, operations center must be granted the following ClusterRole and ClusterRoleBinding.

The ClusterRoleBinding must specify the namespace in which the cjoc ServiceAccount is defined (in the following example, cje).
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cjoc-ns-management
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["list"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cjoc-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cjoc-ns-management
subjects:
- kind: ServiceAccount
  name: cjoc
  # cjoc service account namespace
  namespace: cje
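
Because the ClusterRole and ClusterRoleBinding are cluster-scoped, they are applied without a project argument (assuming the manifest is saved as cjoc-clusterrole.yaml, a hypothetical name):

oc apply -f cjoc-clusterrole.yaml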

Managing controllers with operations center

When new teams join an organization, or existing teams start a new project, CloudBees CI makes it easy to provision a fully managed and access controlled controller per team. In CloudBees CI, a controller is referred to as a managed controller.

Administrators can provision managed controllers from a standardized template or they can allow team leads to provision their own managed controllers "on-demand." The number of controllers an environment can handle is limited only by the capacity of the cluster.

Upgrading managed controllers

To update a managed controller’s version, the administrator needs to update the version of the Docker image for the managed controller. Once this version is updated, the managed controller and its default plugins will be upgraded to the latest versions defined in the image, while any non-default plugins will be left at their current versions.

Once the Docker image definition is updated, an administrator needs to restart the instance so the managed controller can begin using its upgraded components.

Bulk upgrades

When a cluster serves many teams and contains many controllers, an administrator can save time and greatly reduce the overhead of upgrading those controllers by creating a repeatable task to automate this process. This can be achieved by defining a cluster operation in operations center.

To create a task to perform a bulk upgrade:

  1. Sign in to operations center as an administrator.

  2. Create a New Item for Cluster Operations.

  3. Select the managed controllers cluster operation.

  4. Select the pre-configured upgrade pattern that you want to use.

  5. In the Uses Docker Image filter, pick a Docker image that is used by the cluster’s controllers as the upgrade target. Any controller in the cluster using the selected image will be affected by this cluster operation.

  6. In the Steps section, select Backup Controller and configure the options as described in Using the CloudBees Backup plugin.

  7. In the Steps section, select Update Docker Image and then pick the new Docker image to which you want to bulk-upgrade the targeted controllers.

  8. Add a Reprovision step to restart the targeted controllers.

Once these settings are configured, the administrator can run the cluster operation to perform a bulk upgrade of their cluster’s controllers, or schedule the operation to run at a later time.

Operating managed and client controllers

Setting the default namespace for new managed controllers and team controllers

To set the default namespace for any new managed controller or team controller, go to Manage Jenkins → Configure Controller Provisioning on your operations center, then set the namespace to your preferred value.

Adding client controllers

Occasionally administrators will need to connect existing controllers to a CloudBees CI cluster, such as in the case of a team requiring a controller on Windows. Existing controllers that are connected to operations center lack key benefits of managed controllers like high availability and automatic agent management. Whenever possible, administrators should use a managed controller with CloudBees CI rather than connecting an existing controller.

Client controllers are monitored by operations center just as managed controllers are. Administrators can see the status of all their managed controllers, team controllers, and client controllers from the operations center controllers page. Client controllers can receive configuration updates from operations center with configuration snippets, and they can share agents hosted in the cluster, offloading the burden of agent management from teams.

Client controllers do not have the high availability features of managed controllers.

Note: The existing client controller and operations center must both accept JNLP requests.

To add a client controller, log into CloudBees CI and navigate to the operations center dashboard. Click the New Item option in the left-hand menu and provide the following information:

  • Item name: the name of the existing client controller to connect to the cluster

  • Select client controller. Just as with managed controllers, client controllers offer some customization and configuration options on creation:

  • On-controller executors: Number of builds to execute concurrently on the client controller itself. Default setting is 2.

  • Email addresses: Contact information for the administrator responsible for maintaining the client controller.

Once these settings are saved, operations center will attempt to connect the client controller to the CloudBees CI cluster.

Verify that operations center and the existing client controller can communicate with each other over both HTTP and JNLP ports. The host and port to use for JNLP are advertised through HTTP headers by each Jenkins controller.
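
A quick way to inspect what a controller advertises (a minimal sketch; the header names vary by Jenkins version, and https://controller.example.org is a placeholder):

curl -sI https://controller.example.org/ | grep -i jnlp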

You can connect an existing client controller to operations center by giving that client controller the TLS certificate for operations center, typically through the Configure Global Security page in operations center. For more information, see How to programmatically connect a client controller to CJOC.

If you are connecting multiple client controllers to your cluster, it is a good idea to automate that task using shared configurations.

Once the client controller is connected, administrators should configure security and access controls for the controller.

Quiet start

There may be times during an upgrade or other maintenance when it is best to have Jenkins start, but launch no projects. For example, if an upgrade is being performed in multiple steps, the intermediate steps may not be fully configured to run projects successfully. The "quiet start plugin" can immediately place the Jenkins server in "quieting down" state on startup.

Enable "quiet start" by checking the box in Manage Jenkins Quiet Restart. When the server is restarted, it will be in the "quieting down" state. An administrator can cancel that state using the regular UI.

Uncheck the box in Manage Jenkins → Quiet Restart when maintenance is complete. Projects will start as usual on server restart.