Cluster and storage requirements

  • Kubernetes cluster and Helm: see Kubernetes cluster requirements.

  • Database instance

    • For a production environment installation, see Database instance for details.

    • For a non-production environment installation, the CloudBees CD/RO Helm chart automatically installs a MariaDB database instance as a built-in database.

Non-production environment

Cluster capacity

The table below specifies the default memory and CPU requested for each CloudBees CD/RO component in a non-production environment.

CloudBees CD/RO component/service | CPU | Memory
CloudBees CD/RO Server | Per replica: 1.5 CPU. Default number of replicas: 1 | Per replica: 6 GiB (Memory assigned to CloudBees CD/RO JVM: 4 GiB)
Web Server | 0.25 CPU | 256 MiB
CloudBees Analytics Server | Per replica: 0.1 CPU. Default number of replicas: 1 | Per replica: 2 GiB (Memory assigned to Elasticsearch JVM: 1 GiB)
Repository Server | 0.25 CPU | 512 MiB (Memory assigned to Repository JVM: 512 MiB)
CloudBees CD/RO Agent (bound) | 0.25 CPU | 512 MiB (Memory assigned to Agent JVM: 256 MiB)

Storage

Consult the table below for default storage requirements for a non-production environment.

CloudBees CD/RO component/service | Storage type | Storage amount
CloudBees CD/RO Server | ReadWriteOnce | 5 GiB
CloudBees Analytics Server | ReadWriteOnce | 10 GiB
Repository Server | ReadWriteOnce | 10 GiB

Production environment

Cluster capacity

For current cluster capacity and sizing specifications, see Kubernetes cluster requirements.

Separate database instance

You must configure a separate database for CloudBees CD/RO before you initiate the installation. For current database server and sizing specifications, see Database server specifications.

Database configuration

The database can be installed inside or outside the cluster. Update the database section in the cloudbees-cd-defaults.yaml file that is specified when installing the cloudbees-flow chart. See the Database Values section in Configuration values.
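For example, a minimal sketch of the database section in cloudbees-cd-defaults.yaml follows. The key names shown are illustrative assumptions; confirm the exact schema in the Database Values section in Configuration values.

# Illustrative sketch only. The key names are assumptions; see the Database Values
# section in Configuration values for the schema supported by the cloudbees-flow chart.
database:
  dbType: mysql                       # assumed key: database flavor
  externalEndpoint: <database-host>   # assumed key: endpoint of a database outside the cluster
  dbPort: 3306
  dbName: flow
  dbUser: flow
  dbPassword: <password>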

Persistent storage

CloudBees CD/RO components need varying levels of persistent storage.

  • Non-shared storage: A disk or file system that cannot be shared between two pods or two nodes. Examples include AWS EBS, GCP Persistent Disk, and Azure Disk.

  • Shared storage: A disk or file system that can be shared between two pods or two nodes. Examples include AWS EFS, GCP NFS Server, and Azure Filestore.

Consult the table below for storage requirements.

CloudBees CD/RO component/service | Storage type | Storage amount | Mount point

Shared storage:
CloudBees CD/RO Server, CloudBees CD/RO Agent (bound) | ReadWriteMany | 5 GiB | /plugin-data
Repository Server | ReadWriteMany | 20 GiB | /repository-data

Non-shared storage:
CloudBees Analytics Server | ReadWriteOnce | Per replica: 10 GiB. Default number of replicas: 3 | /elasticsearch-data
CloudBees CD/RO Agent (worker) | ReadWriteMany | 5 GiB | /workspace
Zookeeper | | | /var/lib/zookeeper

CloudBees CD/RO server persistent volume provisioning

Update the storage section in the cloudbees-cd-defaults.yaml file specified when installing the cloudbees-flow chart. See Storage values for details.
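For example, a minimal sketch of the storage section in cloudbees-cd-defaults.yaml, using only the serverPlugins keys that appear later on this page (storage class and requested size); see Storage values for the full set of supported keys.

# Illustrative sketch; see Storage values for all supported keys.
storage:
  volumes:
    serverPlugins:
      storageClass: nfs-client   # storage class backing the shared plugin volume
      storage: 10Gi              # requested capacity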

CloudBees CD/RO agent persistent volume provisioning

Update the storage section in the cloudbees-cd-agent-defaults.yaml file specified when installing the cloudbees-flow-agent chart. See Storage values for details.
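For example, a sketch of passing the agent defaults file when installing the agent chart; the cloudbees repository alias, flow-agent release name, and namespace are placeholders.

# Illustrative sketch; repository alias, release name, and namespace are placeholders.
helm install flow-agent cloudbees/cloudbees-flow-agent \
  --namespace <namespace> \
  -f cloudbees-cd-agent-defaults.yaml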

Creating persistent shared storage

Google Cloud Platform or Amazon EKS

To get started, clone the cloudbees-examples repo, then use the following instructions.
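For example (the clone URL is an assumption; use the URL published by CloudBees if it differs):

git clone https://github.com/cloudbees/cloudbees-examples.git
cd cloudbees-examples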

  • Google Cloud Platform: Use Filestore to create a fully-managed NFS file server on Google Cloud.

    Find information and scripts to provision a Google Cloud Filestore instance in the cloudbees-examples repo.

  • Amazon EKS: Use Amazon Elastic File System (EFS) to create fully-managed persistent storage.

    Find information and scripts to provision an EFS instance in the cloudbees-examples repo.

Red Hat OpenShift

For your convenience, sample code to support persistent volume and persistent volume claim configuration is available here.

  • Storage class

    • The default storage class used by CloudBees CD/RO must be set to volumeBindingMode: Immediate. You can do this by modifying the YAML file for the existing default storage class or by creating a new default storage class, as sketched below.
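      As a minimal sketch, a default storage class with immediate binding might look like the following; the class name and provisioner are placeholders for your environment:

      ## storage-class.yaml (illustrative) ##
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: <storage-class-name>
        annotations:
          # Marks this class as the cluster default.
          storageclass.kubernetes.io/is-default-class: "true"
      provisioner: <your-provisioner>
      volumeBindingMode: Immediate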

  • Using Dynamic NFS Client Provisioner with NFS server

    Install the NFS client provisioner if it is not already installed. Then use it to dynamically provision storage for persistent volume claims (PVCs), as follows.

    helm repo add stable https://charts.helm.sh/stable
    helm repo update
    helm install nfs-provisioner stable/nfs-client-provisioner \
    --namespace kube-system \
    --set nfs.server=<NFS_HOST> \
    --set nfs.path=<NFS_PATH>

    where <NFS_HOST> is the NFS server host, and <NFS_PATH> is the NFS mounted path.

    This creates a storage class named nfs-client. Use this storage class during CloudBees CD/RO installation with the following parameters to create PVCs.

    • CloudBees CD/RO installations must set the following:

      --set storage.volumes.serverPlugins.storageClass=nfs-client
      --set storage.volumes.serverPlugins.storage=10Gi
    • Optionally, enable CloudBees CD/RO bound agents to create and mount a PVC by setting:

      --set storage.volumes.boundAgentStorage.enabled=true
  • Using an existing shared NFS PVC

    1. Create PV using NFS:

      oc apply -n <namespace> -f pv.yaml

      where pv.yaml is as follows:

      ## pv.yaml ##
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: flow-server-shared
      spec:
        capacity:
          storage: 15Gi
        accessModes:
          - ReadWriteMany
        persistentVolumeReclaimPolicy: Retain
        storageClassName: nfs-client
        nfs:
          path: <nfs-mount-path>
          server: <nfs-host>
    2. Create PVC manually using NFS:

      oc apply -n <namespace> -f pvc.yaml

      where pvc.yaml is as follows:

      ## pvc.yaml ##
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: flow-server-shared
      spec:
        accessModes:
          - ReadWriteMany
        storageClassName: nfs-client
        volumeName: flow-server-shared
        resources:
          requests:
            storage: 15Gi

Once the PV and PVC are created in <namespace>, use them during CloudBees CD/RO installation in OpenShift by setting the following:

--set storage.volumes.serverPlugins.name=flow-server-shared
--set storage.volumes.serverPlugins.existingClaim=true
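
For example, a complete install command might look like the following sketch; the cloudbees repository alias, cd-server release name, and namespace are placeholders:

# Illustrative sketch; repository alias, release name, and namespace are placeholders.
helm install cd-server cloudbees/cloudbees-flow \
  --namespace <namespace> \
  -f cloudbees-cd-defaults.yaml \
  --set storage.volumes.serverPlugins.name=flow-server-shared \
  --set storage.volumes.serverPlugins.existingClaim=true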

Make sure NFS is configured with proper permissions to allow the CloudBees CD/RO server read-write access:

chmod 770 <NFS_PATH>

Additional Red Hat OpenShift requirements