The sections below provide detailed Kubernetes cluster and storage requirements for both non-production and production environments.
Non-production environment
Refer to the tables below for more information about configuring cluster and storage requirements in a non-production environment.
Database configuration
For a non-production environment installation, the CloudBees CD/RO Helm chart automatically installs a MariaDB database instance as a built-in database.
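For orientation, a minimal non-production install sketch; the chart reference and values file name match those used later in this guide, while the release and namespace names are placeholders:

```sh
# Non-production install: the chart provisions the built-in MariaDB automatically.
helm install cd-demo cloudbees/cloudbees-flow \
  -f cloudbees-cd-defaults.yaml \
  --namespace cloudbees-cd --create-namespace
```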
Cluster capacity
The table below specifies the default memory and CPU requested for each CloudBees CD/RO component in a non-production environment.
| CloudBees CD/RO component/service | CPU | Memory |
|---|---|---|
| CloudBees CD/RO server | Per replica: 1.5 CPU | Per replica: 6 GiB (Memory assigned to CloudBees CD/RO JVM: 4 GiB) |
| Web server | 0.25 CPU | 256 MiB |
| CloudBees Analytics server | Per replica: 0.1 CPU | Per replica: 2 GiB (Memory assigned to CloudBees Analytics JVM: 1 GiB) |
| Repository server | 0.25 CPU | 512 MiB (Memory assigned to Repository JVM: 512 MiB) |
| CloudBees CD/RO agent (bound) 1 | 0.25 CPU | 512 MiB (Memory assigned to Agent JVM: 256 MiB) |
1 Bound agents are internal CloudBees CD/RO components. If operations other than CloudBees CD/RO internal operations run on bound agents, CloudBees CD/RO performance may become unpredictable. Additionally, system requirements for CloudBees CD/RO instances assume that bound agents are used exclusively by CloudBees CD/RO, and are not reliable for instances where user jobs are also running.
Production environment
Refer to the tables below for more information about configuring cluster and storage requirements in a production environment.
Database configuration
In a production environment installation, you must configure a separate database for CloudBees CD/RO before you initiate the installation. For current database server and sizing specifications, refer to Database configuration recommendations.
The database can be installed inside or outside the cluster. Update the `database` section in the `cloudbees-cd-defaults.yaml` file that is specified when installing the `cloudbees-flow` chart. Refer to Database values.
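For illustration, a hedged sketch of that `database` section for an external MySQL endpoint; the key names here are assumptions, so confirm them against Database values before use:

```yaml
# cloudbees-cd-defaults.yaml (sketch; key names are assumptions)
database:
  dbType: mysql                      # assumed; match your database vendor
  externalEndpoint: db.example.com   # assumed key for a database outside the cluster
  dbPort: 3306
  dbName: flow
  dbUser: flow
  dbPassword: <password>             # prefer an existing secret in production
```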
ZooKeeper requirements
ZooKeeper is a centralized service used in CloudBees CD/RO to store configuration data and synchronize group services. Your clients must be compatible with the CloudBees CD/RO ZooKeeper server version. CloudBees CD/RO currently uses ZooKeeper v3.8.4.
The following requirements apply to CloudBees CD/RO versions:

- CloudBees CD/RO v2024.06.0 and later uses ZooKeeper v3.8.4. Your platform must be compatible with ZooKeeper server v3.8.4.
- CloudBees CD/RO v2023.12.0 to v2024.03.0 uses ZooKeeper v3.8.3. Your platform must be compatible with ZooKeeper server v3.8.3.
- CloudBees CD/RO server v10.5.0 to v2023.10.0 is only compatible with ZooKeeper server v3.8.0.

ZooKeeper has specific OS limitations and JDK version requirements. For more information, refer to Apache ZooKeeper.
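To confirm which version a running ZooKeeper server reports, one option is its `srvr` four-letter command; a minimal sketch, assuming a `zookeeper` service on port 2181 and that `srvr` is allowed via `4lw.commands.whitelist` (service and namespace names are placeholders):

```sh
# Forward the ZooKeeper client port locally, then query the server.
kubectl port-forward svc/zookeeper 2181:2181 -n <namespace> &
echo srvr | nc localhost 2181 | head -1
# Expected output is similar to: Zookeeper version: 3.8.4-...
```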
Cluster capacity
The tables below specify the default memory and CPU requested for each CloudBees CD/RO component in the production environment per deployment size.
The following guidelines are a recommended starting point. You may need to scale up based on your environment and needs.
Small to mid-range deployments
A small to mid-range deployment is defined as:

- <1K jobs per day
- <100 running pipelines per day
- <50K job steps
| CloudBees CD/RO component/service | CPU | Memory |
|---|---|---|
| CloudBees CD/RO server | Per replica: 2 CPU | Per replica (minimum): 16 GiB (Memory assigned to CloudBees CD/RO JVM: 85%); per replica (recommended): 32 GiB (Memory assigned to CloudBees CD/RO JVM: 30 GiB); local storage per replica: 20 GB |
| Web server | 0.25 CPU | 256 MiB |
| CloudBees Analytics server | Per replica: 0.1 CPU | Per replica: 2 GiB (Memory assigned to CloudBees Analytics JVM: 1 GiB) |
| Repository server | 0.25 CPU | 512 MiB (Memory assigned to Repository JVM: 512 MiB) |
| CloudBees CD/RO agent (bound) 1 | 0.25 CPU | 512 MiB (Memory assigned to Agent JVM: 256 MiB) |
| CloudBees CD/RO agent (worker) | 0.25 CPU | 512 MiB (Memory assigned to Agent JVM: 64 MiB) |
| ZooKeeper | Per replica: 0.25 CPU | Per replica: 1 GiB |
1 Bound agents are internal CloudBees CD/RO components. If operations other than CloudBees CD/RO internal operations run on bound agents, CloudBees CD/RO performance may become unpredictable. Additionally, system requirements for CloudBees CD/RO instances assume that bound agents are used exclusively by CloudBees CD/RO, and are not reliable for instances where user jobs are also running.
Very large deployments
A very large deployment is defined as:

- ~100K jobs per day
- ~2,000 running pipelines per day
- ~5M job steps per day
| CloudBees CD/RO component/service | Instance CPU | Instance memory | Max JVM heap | Local storage | Replicas |
|---|---|---|---|---|---|
| CloudBees CD/RO server | 16 | 128 GiB | 85% | 60 GiB | 3 |
| CloudBees CD/RO agent | 1 | 1 GiB | 64 MiB | 5 GiB | 2 |
| Web server | 1 | 512 MiB | n/a | n/a | 2 |
| Analytics server | 4 | 16 GiB | n/a | 20 GiB | 3 |
| Repository server | 0.25 | 1 GiB | 512 MiB | 20 GiB | 1 |
| CloudBees CD/RO bound agent 1 | 0.25 | 1 GiB | 256 MiB | n/a | 1 |
| ZooKeeper | 0.25 | 1 GiB | n/a | n/a | 3 |
1 Bound agents are internal CloudBees CD/RO components. If operations other than CloudBees CD/RO internal operations run on bound agents, CloudBees CD/RO performance may become unpredictable. Additionally, system requirements for CloudBees CD/RO instances assume that bound agents are used exclusively by CloudBees CD/RO, and are not reliable for instances where user jobs are also running.
Persistent storage
CloudBees CD/RO components need varying levels of persistent storage.
- Non-shared storage: A disk or file system that cannot be shared between two pods or two nodes. Examples include AWS EBS, GCP Persistent Disk, and Azure Disk.
- Shared storage: A disk or file system that can be shared between two pods or two nodes. Examples include AWS EFS, GCP NFS Server, and Azure Filestore.
Consult the tables below for storage requirements.
| CloudBees CD/RO component/service | Storage type | Storage amount | Mount point |
|---|---|---|---|
| CloudBees Analytics server | Non-shared | Per replica: 4, 10 GiB | |
| CloudBees CD/RO agent (worker) | Non-shared | 5 GiB | |
| ZooKeeper | Non-shared | | |

| CloudBees CD/RO component/service | Storage type | Storage amount | Mount point |
|---|---|---|---|
| CloudBees CD/RO server, CloudBees CD/RO agent (bound) | Shared | 5 GiB | |
| Repository server | Shared | 20 GiB | |
CloudBees CD/RO server persistent volume provisioning
Update the `storage` section in the `cloudbees-cd-defaults.yaml` file specified when installing the `cloudbees-flow` chart. Refer to Server storage values for details.
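For example, a minimal sketch of that section; the `serverPlugins` and `repositoryStorage` volume names mirror those used in the Fargate example later in this guide, and the storage class values are placeholders for your cluster:

```yaml
# cloudbees-cd-defaults.yaml (sketch)
storage:
  volumes:
    serverPlugins:
      storageClass: nfs-client   # placeholder; plugins need a shared (ReadWriteMany) class
      storage: 10Gi
    repositoryStorage:
      storageClass: nfs-client   # placeholder
      storage: 20Gi
```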
CloudBees CD/RO agent persistent volume provisioning
Update the `storage` section in the `cloudbees-cd-agent-defaults.yaml` file specified when installing the `cloudbees-flow-agent` chart. Refer to Agent storage values for details.
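As a hedged illustration only, the agent chart's `storage` section might look like the sketch below; the `agentWorkspace` volume name is an assumption, so verify the exact keys under Agent storage values:

```yaml
# cloudbees-cd-agent-defaults.yaml (sketch; volume name is an assumption)
storage:
  volumes:
    agentWorkspace:            # assumed volume name; confirm in Agent storage values
      storageClass: standard   # placeholder storage class
      storage: 5Gi
```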
Creating persistent shared storage
Shared storage is required for multiple scenarios, such as sharing plugins within clusters. The following instructions describe how to provision persistent shared storage on Google Cloud Platform, Amazon EKS, and Red Hat OpenShift.
To share plugins across your cluster, you must have persistent shared storage configured and allocate adequate space for your plugins directory.
Google Cloud Platform
To get started, clone the `cloudbees-examples` repo and use Google Cloud Filestore to create a fully managed NFS file server on Google Cloud. Information and scripts to provision a Google Cloud Filestore instance are available in the `cloudbees-examples` repo.
Amazon EKS
On Amazon EKS, shared storage is handled by Amazon’s Elastic File System (EFS). Depending on your EKS version, you can use:
- For EKS 1.21 or higher, the EFS Container Storage Interface (CSI) driver. For more information, refer to the Amazon EFS CSI driver documentation.
- For EKS 1.21 or below, the instructions found in Provisioning shared storage for Amazon EKS.
Provisioning shared storage for Amazon EKS
Before you start:

- The following instructions are supported only for EKS version 1.21 or below.
- Python 2 version 2.7+ or Python 3 version 3.4+ is required.
- The AWS CLI is required. If you do not have it installed, run:

  ```sh
  pip3 install awscli
  ```

- Your AWS instance must be configured for the AWS CLI. If it is not, run:

  ```sh
  aws configure
  AWS Access Key ID: <AWS Access Key ID>
  AWS Secret Access Key: <AWS Secret Access Key>
  Default region name: <region>
  Default output format: <text|table|json>
  ```
To provision shared storage in EFS on AWS:

- Clone the `cloudbees-examples` repo and navigate to the `eks` directory.
- Run `efs-provision.sh`:

  ```sh
  ./efs-provision.sh --action <create|delete> \
    --efs-name <name> \
    --vpc-id <vpc-id> \
    --region <region>
  ```

  Where:

  - `action`: Required. `create` or `delete` EFS shared storage.
  - `efs-name`: Required. The name for the EFS instance.
  - `vpc-id`: Required. Your VPC ID.
  - `region`: Required. The region in which to make it available.

  Additional options include:

  - `performance-mode`: Optional. The performance mode to use, either `generalPurpose` or `maxIO`.
  - `throughput-mode`: Optional. The throughput mode to use, either `provisioned` or `bursting`.
  - `throughput`: Required if `--throughput-mode` is `provisioned`. Rate in Mbps.
- After running the script successfully, the terminal displays commands to deploy the Helm chart.
- Deploy the Helm chart:

  ```sh
  helm repo add stable https://kubernetes-charts.storage.googleapis.com/
  helm repo update
  helm install <name> stable/efs-provisioner \
    --set efsProvisioner.efsFileSystemId=<file system id> \
    --set efsProvisioner.awsRegion=<region> \
    --set efsProvisioner.dnsName=<filesystem ip>
  ```

  This creates a storage class named `aws-efs`. Use this storage class during CloudBees CD/RO installation with the following parameters to create PVCs.

- Set the storage class for `serverPlugins` in your CloudBees CD/RO installation with:

  ```sh
  --set storage.volumes.serverPlugins.storageClass=aws-efs
  ```

- (Optional) By default, the storage class volume is `5Gi`. If you have a large plugins catalog, you can increase the allocation with:

  ```sh
  --set storage.volumes.serverPlugins.storage=10Gi
  ```
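Putting those flags together, a hedged end-to-end sketch of an install that uses the `aws-efs` class; the release and namespace names are placeholders:

```sh
helm install cd-prod cloudbees/cloudbees-flow \
  -f cloudbees-cd-defaults.yaml \
  --namespace cloudbees-cd \
  --set storage.volumes.serverPlugins.storageClass=aws-efs \
  --set storage.volumes.serverPlugins.storage=10Gi
```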
Red Hat OpenShift
For your convenience, sample code to support persistent volume and persistent volume claim configuration is available here.

- Storage class

  The default storage class used by CloudBees CD/RO must be set to `volumeBindingMode: Immediate`. This is achieved by modifying the YAML file for the existing default storage class or by creating a new default storage class, as sketched below.
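For instance, a minimal sketch of a default storage class with immediate binding; the class name and provisioner are assumptions for your cluster:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-immediate           # assumed name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # mark as the cluster default
provisioner: kubernetes.io/aws-ebs   # assumed provisioner; use your platform's
volumeBindingMode: Immediate
reclaimPolicy: Delete
```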
- Using Dynamic NFS Client Provisioner with NFS server

  Install the NFS Client provisioner, if not already installed. Then use the NFS Client provisioner to dynamically create a persistent volume claim (PVC), as follows:

  ```sh
  helm repo add stable https://charts.helm.sh/stable
  helm repo update
  helm install nfs-provisioner stable/nfs-client-provisioner \
    --namespace kube-system \
    --set nfs.server=<NFS_HOST> \
    --set nfs.path=<NFS_PATH>
  ```

  where `<NFS_HOST>` is the NFS server host and `<NFS_PATH>` is the NFS mounted path. This creates a storage class named `nfs-client`. Use this storage class during CloudBees CD/RO installation with the following parameters to create PVCs.

  - CloudBees CD/RO installations need to set:

    ```sh
    --set storage.volumes.serverPlugins.storageClass=nfs-client --set storage.volumes.serverPlugins.storage=10Gi
    ```

  - Optionally, configure CloudBees CD/RO bound agents to enable the ability to create and mount a PVC by setting:

    ```sh
    --set storage.volumes.boundAgentStorage.enabled=true
    ```
- Using Existing NFS shared PVC

  - Create a PV using NFS:

    ```sh
    oc apply -n <namespace> -f pv.yaml
    ```

    where `pv.yaml` is as follows:

    ```yaml
    ## pv.yaml ##
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: flow-server-shared
    spec:
      capacity:
        storage: 15Gi
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      storageClassName: nfs-client
      nfs:
        path: <nfs-mount-path>
        server: <nfs-host>
    ```

  - Create a PVC manually using NFS:

    ```sh
    oc apply -n <namespace> -f pvc.yaml
    ```

    where `pvc.yaml` is as follows:

    ```yaml
    ## pvc.yaml ##
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: flow-server-shared
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: nfs-client
      volumeName: flow-server-shared
      resources:
        requests:
          storage: 15Gi
    ```

  - Once you have the PV and PVC created in `<namespace>`, use them during CloudBees CD/RO installations in OpenShift. CloudBees CD/RO installations need to set:

    ```sh
    --set storage.volumes.serverPlugins.name=flow-server-shared --set storage.volumes.serverPlugins.existingClaim=true
    ```

  Make sure NFS is configured with proper permissions to allow the CloudBees CD/RO server read-write access:

  ```sh
  chmod 770 <NFS_PATH>
  ```
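If you administer the NFS server itself, the export must also permit read-write mounts from the cluster; a hedged `/etc/exports` sketch, with the path and CIDR as placeholders:

```sh
# /etc/exports (sketch): allow read-write mounts from the cluster network
/srv/nfs/cdro  10.0.0.0/16(rw,sync,no_root_squash)
# Reload the export table after editing:
sudo exportfs -ra
```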
Additional Red Hat OpenShift requirements
- An OpenShift project with permissions to create `Role` and `RoleBinding` objects.
- A container configured to run in a non-privileged mode. Refer to Custom settings required for Red Hat OpenShift installations for details.
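One way to verify those permissions up front is `oc auth can-i`; the project name is a placeholder:

```sh
oc auth can-i create role -n my-cd-project          # expect: yes
oc auth can-i create rolebinding -n my-cd-project   # expect: yes
```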
AWS EKS Fargate
AWS Fargate is a compute engine that allows you to build applications without managing infrastructure. Build web applications, microservices, and APIs using containerized workloads without the need to define server types or to plan and time your application and workload scaling. You can also control your pods using Fargate profiles.
Fargate limitations
There are some limitations to using EKS Fargate, as documented in Amazon's Fargate introduction. Some limitations to consider when using Fargate with CloudBees CD/RO include:

- Privileged containers are not supported on Fargate.
- AWS EKS Fargate does not support any persistent storage other than the AWS EFS CSI driver.
- MariaDB does not work with EFS as a PersistentVolume, so Demo Mode installation is not supported.
Fargate prerequisites
To use Fargate with CloudBees CD/RO, you must perform the following prerequisite tasks:

- Create an EKS cluster with a Fargate profile.
- Follow the steps from AWS Load Balancer Controller to install the AWS Load Balancer Controller in the EKS cluster. EKS version 1.19 or later and AWS Load Balancer Controller 2.2.0 or later are required.
- Create a certificate using the AWS Certificate Manager for the required domain name. For example, `cloudbees-cd.example.com`.
- Set up External-DNS. Read the tutorial to set it up on AWS.
- Follow the steps from Amazon EFS CSI driver - Amazon EKS to install the EFS CSI driver and enable the `efs-sc` storage class.
- Create static persistent volumes in your EKS Fargate cluster to use in the CloudBees CD/RO deployment. Refer to Running stateful workloads with Amazon EKS on AWS Fargate using Amazon EFS. CloudBees CD/RO requires a minimum of 4 persistent volumes to run in cluster mode, so create 4 static persistent volumes with the correct permissions.
- Create an EFS access point:

  ```sh
  aws efs create-access-point --file-system-id <EFS_ID> \
    --posix-user "Uid=1000,Gid=1000,SecondaryGids=1000,1000" \
    --root-directory "Path=/<EFS-Mount-PATH>,CreationInfo={OwnerUid=1000,OwnerGid=1000,Permissions=0770}" \
    --output text --query "AccessPointId"
  ```

  For example:

  ```sh
  EFS_FS_ACCESS_POINT_ID=$(aws efs create-access-point --file-system-id fs-0761c22d156958823 \
    --posix-user "Uid=1000,Gid=1000,SecondaryGids=1000,1000" \
    --root-directory "Path=/cdropv-1,CreationInfo={OwnerUid=1000,OwnerGid=1000,Permissions=0770}" \
    --output text --query "AccessPointId")
  echo $EFS_FS_ACCESS_POINT_ID
  ```

- Create a static persistent volume using the following example template, with the `<EFS_ID>` and `EFS_FS_ACCESS_POINT_ID` values generated in the steps above:

  ```yaml
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: cdropv-1
  spec:
    capacity:
      storage: 10Gi
    volumeMode: Filesystem
    accessModes:
      - ReadWriteMany
    persistentVolumeReclaimPolicy: Retain
    csi:
      driver: efs.csi.aws.com
      volumeHandle: fs-0761c22d156958823::fsap-0a98caadf4a4553f4
      # volumeHandle: <EFS_ID>::<EFS_FS_ACCESS_POINT_ID>
  ```

- Create 4 static persistent volumes to run CloudBees CD/RO successfully on AWS EKS Fargate; one way to script this is sketched below.
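As a sketch of that last step, the access-point and PV creation can be looped; this assumes the `<EFS_ID>` from the earlier step and a kubectl context that can apply manifests:

```sh
# Create four access points and matching static PVs (cdropv-1 .. cdropv-4).
EFS_ID=fs-0761c22d156958823   # placeholder; use your file system ID
for i in 1 2 3 4; do
  AP_ID=$(aws efs create-access-point --file-system-id "$EFS_ID" \
    --posix-user "Uid=1000,Gid=1000" \
    --root-directory "Path=/cdropv-$i,CreationInfo={OwnerUid=1000,OwnerGid=1000,Permissions=0770}" \
    --output text --query "AccessPointId")
  kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cdropv-$i
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: ${EFS_ID}::${AP_ID}
EOF
done
```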
Configuring CloudBees CD/RO for AWS Fargate
To support Fargate on CloudBees CD/RO, you must perform the following steps:

- Set your platform as EKS in your Helm values file (for example, `cloudbees-cd-defaults.yaml`):

  ```yaml
  # Configure platform as eks to configure the ALB controller as ingress
  platform: eks
  ```
- Update all storage classes to `efs-sc` (the EFS CSI enabled storage class):

  ```yaml
  # persistent storage configuration
  storage:
    volumes:
      serverPlugins:
        name: flow-server-shared
        accessMode: ReadWriteMany
        storageClass: efs-sc
      repositoryStorage:
        storage: 10Gi
        storageClass: efs-sc
        accessMode: ReadWriteMany
      analyticsStorage:
        storageClass: efs-sc
        accessMode: ReadWriteMany
      boundAgentStorage:
        storageClass: efs-sc
        accessMode: ReadWriteMany
  ```
- Privileged containers are not supported on Fargate, so disable volume permissions:

  ```yaml
  #---------------------------------------------
  # Volume configuration section
  #---------------------------------------------
  volumePermissions:
    enabled: false
  ```
- Privileged containers are not supported on Fargate, so disable `sysctlInitContainer`. Fargate also takes time to launch pods, so increase the initial delay for the liveness and readiness probes:

  ```yaml
  analytics:
    sysctlInitContainer:
      enabled: false
    healthProbeLivenessInitialDelaySeconds: 600
    healthProbeReadinessInitialDelaySeconds: 200
  ```
- Use the EFS CSI storage class with ZooKeeper, and use one replica:

  ```yaml
  #---------------------------------------------
  # ZooKeeper configuration section
  #---------------------------------------------
  zookeeper:
    persistence:
      enabled: true
      storageClass: efs-sc
      accessMode: ReadWriteMany
    replicaCount: 1
  ```
- Disable `nginx-ingress` for Kubernetes versions 1.21 and earlier, or `ingress-nginx` for Kubernetes versions 1.22 and later. AWS EKS Fargate supports only the ALB ingress controller:

  ```yaml
  #---------------------------------------------
  # nginx-ingress/ingress-nginx configuration section
  # Fargate uses the ALB ingress controller. Disable the nginx-ingress and ingress-nginx controllers.
  #---------------------------------------------
  nginx-ingress:
    enabled: false
  ingress-nginx:
    enabled: false
  ```
- Configure flow-ingress with the ALB ingress controller, as described in the prerequisites:

  ```yaml
  #---------------------------------------------
  # Flow ingress configuration section
  #---------------------------------------------
  ingress:
    enabled: true
    host: cloudbees-cd.example.com
    # use the ALB controller from the prerequisites section
    class: alb
    # use the ACM certificate from the prerequisites section
    annotations:
      alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm:us-east-1:XXXXX:certificate/XXXX"
  ```
- After configuration, during installation, set the timeout to 4000s to allow Fargate time to deploy all components:

  ```sh
  helm upgrade releaseName cloudbees/cloudbees-flow \
    -f valuesFile --namespace nameSpace --timeout 4000s
  ```
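After the upgrade is submitted, one way to watch the rollout; the namespace placeholder matches the command above:

```sh
# Fargate pod scheduling can take several minutes per pod.
kubectl get pods --namespace nameSpace --watch
```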