Pre-installation requirements for Kubernetes

For details about supported platforms for CloudBees CI on modern cloud platforms, such as supported Kubernetes, Helm, and NFS versions, refer to Supported platforms for CloudBees CI on modern cloud platforms.

Installing cbsupport

Before installing CloudBees CI on modern cloud platforms, you should install the cbsupport command-line tool. You can use cbsupport to collect data commonly required for supporting CloudBees products. You can then send that data to the CloudBees Support team to troubleshoot any issues with your CloudBees CI on modern cloud platforms solution.

For instructions on how to install cbsupport, refer to the cbsupport CLI documentation.
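
As a quick check after installation, confirm the binary is on your PATH. The data-collection subcommand below is the one described in the cbsupport documentation; treat the exact invocation as an assumption and verify it against that documentation for your version:

# Verify the cbsupport installation
cbsupport --version

# Collect data commonly required by CloudBees Support (verify options in the cbsupport docs)
cbsupport required-data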

Kubernetes requirements

The following items are required to install CloudBees CI on modern cloud platforms on Kubernetes:

  • A Kubernetes client at a currently supported version of Kubernetes, installed and configured on your local computer or a bastion host. Beta releases are not supported.

  • A cluster running a currently supported version of Kubernetes. Beta releases are not supported.

    • The cluster must have network access to container images (public Docker Hub or a private Docker Registry).

  • A namespace in the cluster (provided by your admin) with permissions to create Role and RoleBinding objects.

  • A default StorageClass defined in the Kubernetes cluster and ready to use.

    • Refer to the Storage Requirements section of the relevant Reference Architecture (AWS or on-premise) for more information.
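
A minimal way to verify these requirements from your client machine, assuming kubectl is already configured against the target cluster and <namespace> is the namespace provided by your admin:

# Client and cluster versions must both be currently supported, non-beta releases
kubectl version

# Confirm a default StorageClass is defined and ready to use
kubectl get storageclass

# Confirm you can create Role and RoleBinding objects in your namespace
kubectl auth can-i create role -n <namespace>
kubectl auth can-i create rolebinding -n <namespace>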

Storage requirements

Dynamic provisioning is required to create persistent volumes. If you don’t enable dynamic provisioning, you will have to manually create a persistent volume.

Because Jenkins is highly dependent on the filesystem, the underlying storage provider must deliver adequate input/output operations per second (IOPS) and low latency.
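
As an illustration only, a default StorageClass for an EKS cluster backed by gp3 EBS volumes could look like the following; the provisioner and parameters are assumptions for that platform, so use the class appropriate to your cluster and the referenced Reference Architecture:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
  annotations:
    # Marks this StorageClass as the cluster default
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com  # AWS EBS CSI driver (assumed platform)
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true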

Storage considerations for CloudBees CI High Availability

You must have a storage class set up with ReadWriteMany access mode. CloudBees recommends configuring a hosted network filesystem.

If you prefer not to configure a hosted network filesystem, you can run NFS inside the cluster for non-production workloads, backed by the cluster’s default (ReadWriteOnce) persistent disk:

helm install --wait \
  --namespace nfs \
  --create-namespace \
  --set persistence.enabled=true \
  --set persistence.size=200Gi \
  --repo https://kubernetes-sigs.github.io/nfs-ganesha-server-and-external-provisioner/ \
  --version 1.8.0 \
  nfs nfs-server-provisioner
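
Assuming the chart defaults, this deploys an in-cluster NFS provisioner and registers a StorageClass (named nfs by default), which you can verify before referencing it:

# Confirm the StorageClass created by the nfs-server-provisioner chart
kubectl get storageclass nfs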

To request NFS for both the operations center and managed controllers:

OperationsCenter:
  Persistence:
    StorageClass: nfs
Master:
  Persistence:
    StorageClass: nfs
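
To apply these values, pass them to the CloudBees CI chart at install or upgrade time. This is a sketch only: the release name, namespace, and values file are placeholders, and it assumes the CloudBees Helm repository has already been added as cloudbees.

# values.yaml contains the OperationsCenter and Master persistence settings shown above
helm upgrade --install cloudbees-ci cloudbees/cloudbees-core \
  --namespace cloudbees-core \
  -f values.yaml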

Ingress requirements

CloudBees CI on modern cloud platforms requires an Ingress controller and has been tested with the community Kubernetes NGINX Ingress controller (ingress-nginx), which is the only supported Ingress controller.
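
For reference, a typical Helm installation of the supported ingress-nginx controller looks like the following; the release name and namespace are placeholders, and your environment may require additional controller settings:

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace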

CloudBees CI creates one Ingress object for the operations center and one for each controller.

If you use an unsupported Ingress controller, you may need additional configuration for domains, hostnames, WebSocket support, or TCP pass-through. The CloudBees documentation can help with that configuration, but CloudBees does not support such controllers. In that case, you must install and configure the Ingress controller yourself and adjust your CloudBees CI chart values to match your environment.

If you plan to provide High Availability (active/active), the load balancer must be configured to enable sticky sessions or session affinity.
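
As an illustration with the supported ingress-nginx controller, cookie-based session affinity is enabled through annotations such as the following on an Ingress object; the cookie name and lifetimes are placeholder values, and how you apply annotations to the CloudBees CI Ingress objects depends on your chart values:

metadata:
  annotations:
    # Cookie-based session affinity (sticky sessions) for ingress-nginx
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
    nginx.ingress.kubernetes.io/session-cookie-expires: "86400"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "86400"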